Abstract

The problem of sequentially detecting the emergence of a moving anomaly in a sensor network is studied. In the setting considered, the data-generating distribution at each sensor can alternate between a nonanomalous distribution and an anomalous distribution. Initially, the observations of each sensor are generated according to its associated nonanomalous distribution. At some unknown but deterministic time instant, a moving anomaly emerges in the network. It is assumed that the number as well as the identity of the sensors affected by the anomaly may vary with time. While a sensor is affected, it generates observations according to its corresponding anomalous distribution. The goal of this work is to design detection procedures to detect the emergence of such a moving anomaly as quickly as possible, subject to constraints on the frequency of false alarms. The problem is studied in a quickest change detection framework where it is assumed that the spatial evolution of the anomaly over time is unknown but deterministic. We modify the worst-path detection delay metric introduced in prior work on moving anomaly detection to consider the case of a moving anomaly of varying size. We then establish that a weighted dynamic cumulative sum type test is first-order asymptotically optimal under a delay-false alarm formulation for the proposed worst-path delay as the mean time to false alarm goes to infinity. We conclude by presenting numerical simulations to validate our theoretical analysis.
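
As a rough illustration of the kind of test named above, the following is a minimal sketch of a weighted dynamic CuSum-style statistic for a Gaussian mean-shift anomaly that may move between sensors. The Gaussian observation model, the uniform weights over which sensor is currently affected, the specific recursion, and the threshold are all illustrative assumptions, not the exact construction analyzed in the paper.

    import numpy as np
    from scipy.stats import norm

    def wd_cusum_stopping_time(x, mu1=1.0, sigma=1.0, weights=None, threshold=10.0):
        """Weighted dynamic CuSum-style test on an (n_samples, n_sensors) array x.

        Assumed model (illustration only): every sensor is N(0, sigma^2) while
        unaffected and N(mu1, sigma^2) while the anomaly sits on it. At each time
        the per-sensor log-likelihood ratios are mixed with the given weights, and
        the CuSum-style recursion W[n] = max(W[n-1], 0) + mixed_llr[n] is applied.
        Returns the first index at which W crosses the threshold, or None.
        """
        n_samples, n_sensors = x.shape
        if weights is None:
            # Uniform weights over which sensor currently carries the anomaly.
            weights = np.full(n_sensors, 1.0 / n_sensors)
        w = 0.0
        for n in range(n_samples):
            # Log-likelihood ratio (anomalous vs. nonanomalous) at each sensor.
            llr = norm.logpdf(x[n], loc=mu1, scale=sigma) - norm.logpdf(x[n], loc=0.0, scale=sigma)
            # Weighted mixture over candidate affected sensors (log-sum-exp form).
            m = llr.max()
            mixed_llr = m + np.log(np.sum(weights * np.exp(llr - m)))
            w = max(w, 0.0) + mixed_llr
            if w >= threshold:
                return n
        return None

    # Toy usage: 5 sensors, anomaly emerges at time 200 and hops to a new sensor
    # every 20 samples.
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=(1000, 5))
    for n in range(200, 1000):
        x[n, (n // 20) % 5] += 1.0
    print(wd_cusum_stopping_time(x))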

Highlights

  • In anomaly detection studied under the quickest change detection (QCD) framework, the emergence of an anomaly in the system is assumed to induce a change in the data-generating distribution of the observations obtained by the sensors monitoring the system.

  • The goal is to design a detection algorithm, in the form of a stopping time, to detect this change in distribution as quickly as possible, subject to constraints on the frequency of false alarm (FA) events. This tradeoff is posed in a stochastic optimization framework, the solution to which depends on the definition of the delay and FA metrics, on the changepoint model, as well as on the underlying statistical observation model.

  • In the classical i.i.d. QCD model, observations are initially generated i.i.d. according to a known nonanomalous distribution; after the changepoint, they are generated i.i.d. according to a known anomalous distribution. This setting has been mostly studied under two formulations: 1) the minimax setting [2]–[5], where the changepoint is modeled as unknown but deterministic and the goal is to minimize a worst average detection delay (WADD) subject to a constraint on the mean time to false alarm (MTFA); 2) the Bayesian setting [6], [7], where the changepoint is a random variable with a known probability distribution and the goal is to minimize an average detection delay subject to constraints on the probability of FA (the standard minimax metrics are recalled below this list).
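
For concreteness, in Lorden's minimax formulation of the i.i.d. model referenced above, the delay and FA metrics are commonly defined as follows; this is the standard recollection, not the worst-path variant developed in this paper for moving anomalies of varying size:

    \mathrm{WADD}(\tau) \;=\; \sup_{\nu \ge 1}\ \operatorname{ess\,sup}\ \mathbb{E}_{\nu}\!\left[(\tau - \nu + 1)^{+} \,\middle|\, X_1, \ldots, X_{\nu-1}\right],
    \qquad \text{subject to} \quad \mathrm{MTFA}(\tau) = \mathbb{E}_{\infty}[\tau] \ge \gamma,

where \nu is the changepoint, \mathbb{E}_{\nu} denotes expectation when the change occurs at time \nu, \mathbb{E}_{\infty} denotes expectation when no change ever occurs, and \gamma is the prescribed lower bound on the mean time to false alarm.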


