
Streaming anomaly detection on resource-constrained edge devices demands algorithms whose memory is fixed regardless of sequence length while still adapting to distribution shift. Existing online methods face two structural obstacles: (i) memory that grows with the observation count, and (ii) "adaptive collapse"—an irreversible failure mode in which skip-on-anomaly update policies freeze score statistics during prolonged abnormal segments, leaving stale thresholds and persistent false alarms long after normal conditions resume. We propose the Hyperdimensional Transform Anomaly Detector (HDT-AD), which encodes sliding windows into real-valued hypervectors via a triangular-kernel Hyperdimensional Transform encoder with position binding and scores them against a single prototype updated by Exponentially Weighted Moving Averages—all in $O((W+3)D)$ constant memory. The core algorithmic contribution is a "robust update rule" that unconditionally updates score statistics at every time step while skipping prototype updates on detected anomalies, thereby preventing threshold freezing. We formalise recovery by proving an exponential convergence bound on the EWMA statistics estimator (recovery time $T_{\mathrm{rec}} \le \lceil \ln(\Delta/\varepsilon)/\rho \rceil$) and establish a concentration inequality for the finite-dimensional kernel approximation error of the HDT encoder. Evaluation on 59 industrial and synthetic datasets (Numenta Anomaly Benchmark and Skoltech Anomaly Benchmark) shows that HDT-AD maintains a 1.26 MB peak heap footprint at $N=100,000$ ($68 \times$ smaller than the $k$-nearest neighbour baseline). In fixed-configuration evaluation, HDT-AD achieves seven times higher mean F1 than EWMA Z-score ($0.21$ vs $0.03$); under per-model tuning with paired Wilcoxon tests, EWMA improves substantially (mean F1 0.273) and the two methods show no significant difference ($p=0.65$, Holm-corrected), while HDT-AD remains significantly better than Half-Space Trees. 
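The recovery bound quoted above follows from the geometric contraction of the EWMA recursion. A one-line sketch, assuming the standard update $\hat{\mu}_t = (1-\rho)\hat{\mu}_{t-1} + \rho s_t$ with rate $\rho$, initial estimation gap $\Delta$ after the regime change, target accuracy $\varepsilon$, and the elementary inequality $1-\rho \le e^{-\rho}$ (symbol names here are illustrative, not taken from the paper body):

```latex
% Sketch under the assumptions stated above: scores after recovery are
% stationary with mean \mu^\star, and the EWMA gap contracts by (1-\rho) per step.
\[
  |\hat{\mu}_t - \mu^\star|
  \;\le\; (1-\rho)^t \, \Delta
  \;\le\; e^{-\rho t} \, \Delta
  \quad\Longrightarrow\quad
  T_{\mathrm{rec}} \;\le\; \left\lceil \frac{\ln(\Delta/\varepsilon)}{\rho} \right\rceil .
\]
```

Setting $e^{-\rho t}\Delta \le \varepsilon$ and solving for the smallest integer $t$ yields the stated $T_{\mathrm{rec}}$ bound.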
Collapse stress tests confirm immediate recovery (post-recovery false positive rate $\approx 10^{-4}$, 0-window delay) versus permanent failure under the freeze-all policy. A direct comparison with EXPoSE—the most closely related constant-memory streaming baseline—demonstrates $60 \times$ higher mean F1 ($p=8.7 \times 10^{-11}$, 55/59 wins). A C implementation delivers $2.2$–$7.2 \times$ speedup (mean $3.4 \times$) over the Python reference, supporting real-time edge deployment.
