← Risk Engine Validation & Stress Testing Report

Alert Latency Evidence

Production-benchmarked alert delivery latency for critical risk threshold notifications. Supports Test 4.2. For term definitions, see the Glossary. For benchmark conditions, see Data Provenance.

End-to-End Delivery Latency

50 consecutive runs on 2026-03-30 17:44-17:48 UTC. Each run processed 9 vaults and 50 users.

| Quantile | Latency |
|---|---|
| p50 | 5,624 ms |
| p90 | 6,342 ms |
| p99 | 8,475 ms |

Alerts are reliably delivered end-to-end in roughly six seconds (p90 = 6,342 ms); even the slowest of the 50 runs completed in under 8.5 seconds. This is the unoptimized baseline: no caching or query consolidation has been applied.

Benchmark Run Log

| Run | Timestamp (UTC) | Total (ms) | Send Wall (ms) |
|---|---|---|---|
| 1 | 2026-03-30 17:44:09 | 5,837 | 1,616 |
| 2 | 2026-03-30 17:44:15 | 5,485 | 1,616 |
| 3 | 2026-03-30 17:44:21 | 6,229 | 1,614 |
| 4 | 2026-03-30 17:44:26 | 5,341 | 1,610 |
| 5 | 2026-03-30 17:44:32 | 5,478 | 1,616 |
| 6 | 2026-03-30 17:44:40 | 8,475 | 1,612 |
| 7 | 2026-03-30 17:44:46 | 5,628 | 1,611 |
| 8 | 2026-03-30 17:44:51 | 5,487 | 1,613 |
| 9 | 2026-03-30 17:44:58 | 6,289 | 1,623 |
| 10 | 2026-03-30 17:45:04 | 6,072 | 1,614 |
| 11 | 2026-03-30 17:45:10 | 5,758 | 1,614 |
| 12 | 2026-03-30 17:45:15 | 5,619 | 1,616 |
| 13 | 2026-03-30 17:45:21 | 5,467 | 1,615 |
| 14 | 2026-03-30 17:45:27 | 6,421 | 1,612 |
| 15 | 2026-03-30 17:45:34 | 6,705 | 1,609 |
| 16 | 2026-03-30 17:45:40 | 5,890 | 1,611 |
| 17 | 2026-03-30 17:45:45 | 5,753 | 1,611 |
| 18 | 2026-03-30 17:45:52 | 6,342 | 1,614 |
| 19 | 2026-03-30 17:45:57 | 5,359 | 1,613 |
| 20 | 2026-03-30 17:46:03 | 5,754 | 1,614 |
| 21 | 2026-03-30 17:46:09 | 6,005 | 1,615 |
| 22 | 2026-03-30 17:46:15 | 5,879 | 1,615 |
| 23 | 2026-03-30 17:46:21 | 6,045 | 1,613 |
| 24 | 2026-03-30 17:46:27 | 5,699 | 1,615 |
| 25 | 2026-03-30 17:46:32 | 5,321 | 1,612 |
| 26 | 2026-03-30 17:46:38 | 5,889 | 1,612 |
| 27 | 2026-03-30 17:46:43 | 5,338 | 1,612 |
| 28 | 2026-03-30 17:46:49 | 5,767 | 1,608 |
| 29 | 2026-03-30 17:46:54 | 5,278 | 1,613 |
| 30 | 2026-03-30 17:47:01 | 6,436 | 1,611 |
| 31 | 2026-03-30 17:47:06 | 5,603 | 1,613 |
| 32 | 2026-03-30 17:47:13 | 6,270 | 1,611 |
| 33 | 2026-03-30 17:47:18 | 5,280 | 1,615 |
| 34 | 2026-03-30 17:47:23 | 5,199 | 1,614 |
| 35 | 2026-03-30 17:47:29 | 5,759 | 1,610 |
| 36 | 2026-03-30 17:47:34 | 5,254 | 1,617 |
| 37 | 2026-03-30 17:47:39 | 5,300 | 1,616 |
| 38 | 2026-03-30 17:47:45 | 5,300 | 1,618 |
| 39 | 2026-03-30 17:47:51 | 5,989 | 1,614 |
| 40 | 2026-03-30 17:47:56 | 5,169 | 1,614 |
| 41 | 2026-03-30 17:48:01 | 5,060 | 1,615 |
| 42 | 2026-03-30 17:48:06 | 5,172 | 1,613 |
| 43 | 2026-03-30 17:48:12 | 6,057 | 1,612 |
| 44 | 2026-03-30 17:48:18 | 5,570 | 1,614 |
| 45 | 2026-03-30 17:48:24 | 6,159 | 1,612 |
| 46 | 2026-03-30 17:48:29 | 5,237 | 1,610 |
| 47 | 2026-03-30 17:48:35 | 5,523 | 1,619 |
| 48 | 2026-03-30 17:48:40 | 5,043 | 1,612 |
| 49 | 2026-03-30 17:48:45 | 5,508 | 1,612 |
| 50 | 2026-03-30 17:48:51 | 5,414 | 1,613 |

Send wall-clock time is extremely stable across all 50 runs (1,608-1,623 ms, CV < 0.3%). Total latency variance comes from metric computation, not delivery.
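As a sanity check, the headline numbers can be recomputed from the run log. The sketch below (values copied from the table above) recomputes the median total latency and the send-wall coefficient of variation; note that p90/p99 estimates depend on the quantile interpolation method, so recomputed tail values may differ from the reported figures by a few milliseconds.

```python
import statistics

# Total end-to-end latency per run (ms), runs 1-50 from the log above.
totals_ms = [5837, 5485, 6229, 5341, 5478, 8475, 5628, 5487, 6289, 6072,
             5758, 5619, 5467, 6421, 6705, 5890, 5753, 6342, 5359, 5754,
             6005, 5879, 6045, 5699, 5321, 5889, 5338, 5767, 5278, 6436,
             5603, 6270, 5280, 5199, 5759, 5254, 5300, 5300, 5989, 5169,
             5060, 5172, 6057, 5570, 6159, 5237, 5523, 5043, 5508, 5414]

# Send wall-clock time per run (ms), same order.
send_ms = [1616, 1616, 1614, 1610, 1616, 1612, 1611, 1613, 1623, 1614,
           1614, 1616, 1615, 1612, 1609, 1611, 1611, 1614, 1613, 1614,
           1615, 1615, 1613, 1615, 1612, 1612, 1612, 1608, 1613, 1611,
           1613, 1611, 1615, 1614, 1610, 1617, 1616, 1618, 1614, 1614,
           1615, 1613, 1612, 1614, 1612, 1610, 1619, 1612, 1612, 1613]

p50 = statistics.median(totals_ms)                          # median total latency
cv = statistics.stdev(send_ms) / statistics.mean(send_ms)   # coefficient of variation
print(f"p50 total: {p50:.0f} ms, send-wall CV: {cv:.2%}")
```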

Phase Breakdown

| Phase | What It Does | Median |
|---|---|---|
| Load vaults | Fetch vault metadata from database | 286 ms |
| Compute metrics | Vault score (geometric mean of market, protocol, oracle) + liquidation probability | 3,394 ms |
| Load subscriptions | Alert templates + user subscription configs | 289 ms |
| Load user data | Active users, preferences, delivery channels | 1 ms |
| Evaluate & send | Decision tree evaluation + Telegram delivery | 1,613 ms |

Metric computation is the dominant phase (~60% of total time). All other phases combined account for ~2 seconds.
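The table describes the vault score as a geometric mean of market, protocol, and oracle sub-scores. A minimal sketch of that aggregation is below; the function name, signature, and sub-score scale are illustrative assumptions, not the risk engine's actual API.

```python
def vault_score(market: float, protocol: float, oracle: float) -> float:
    """Geometric mean of the three risk sub-scores.

    Hypothetical sketch: sub-scores are assumed to lie in (0, 1]. A geometric
    mean penalizes a single weak dimension more than an arithmetic mean would,
    since any sub-score near zero drags the product toward zero.
    """
    return (market * protocol * oracle) ** (1.0 / 3.0)

# Example: one weak sub-score pulls the aggregate well below the arithmetic mean.
print(f"{vault_score(0.9, 0.8, 0.7):.4f}")
```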

Interpretation

For institutional monitoring workflows — where the comparison point is dashboard polling intervals (typically 30-60 seconds) or email-based alerts (minutes) — sub-10-second delivery is well within acceptable bounds. Alert evaluation runs on each data snapshot cycle.
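The polling comparison can be made concrete with a back-of-envelope calculation, under the simplifying assumption that threshold breaches occur uniformly at random within a polling interval (so the mean detection delay under polling is half the interval, and the worst case is a full interval):

```python
# Detection-delay comparison: dashboard polling vs push alerts.
# ALERT_P99_S is the measured p99 end-to-end latency from this report;
# the uniform-arrival assumption for polling delay is an illustrative model.
ALERT_P99_S = 8.475

for poll_s in (30, 60):
    mean_delay = poll_s / 2   # expected staleness under uniform arrivals
    worst_delay = poll_s      # breach just after a poll completes
    print(f"{poll_s}s polling: mean {mean_delay:.0f}s, worst {worst_delay}s "
          f"vs push p99 {ALERT_P99_S:.1f}s")
```

Even against the tightest common polling interval, push delivery at p99 beats the polling mean detection delay.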