Abstract

Large-scale detection problems, in which the number of hypotheses is exponentially large, characterize many important sensor network applications. In such applications, sensors whose output is simultaneously affected by multiple target locations in the environment pose a significant computational challenge: conditioned on such sensor measurements, separate target locations become dependent, requiring computationally expensive joint detection. There is therefore a tradeoff between the computational complexity and the accuracy of detection. In this paper we demonstrate that this tradeoff can be altered by collecting additional sensor measurements, enabling algorithms that are both accurate and computationally efficient. We draw the insight for this tradeoff from our work on the sensing capacity of sensor networks, a quantity analogous to channel capacity in communications. To demonstrate the tradeoff, we apply sequential decoding algorithms to a large-scale detection problem using a realistic infrared temperature sensor model and real experimental data, and we explore the tradeoff between the number of sensor measurements, accuracy, and computational complexity. Given a sufficient number of sensor measurements, we demonstrate that sequential decoding algorithms exhibit sharp empirical performance transitions, becoming both computationally efficient and accurate. We provide extensive comparisons with belief propagation and a simple heuristic algorithm. For a temperature sensing application, we empirically demonstrate that, given sufficient sensor measurements, belief propagation has complexity exponential in the sensor field of view while sequential decoding has linear complexity. Despite this disparity in complexity, sequential decoding is also significantly more accurate.
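
To make the role of sequential decoding concrete, the following is a minimal sketch of a stack-algorithm sequential decoder for a toy detection problem. It assumes a simple counting-sensor model rather than the paper's infrared temperature sensor model, and the PRIOR, SIGMA, field-of-view, and reading values are illustrative assumptions, not values from the paper. The point it illustrates is the complexity argument: rather than scoring all 2**n joint hypotheses, the decoder extends the most promising partial hypothesis one location at a time.

import heapq
import math

# Toy model (an assumption for illustration): each of n locations holds a
# target (1) or not (0); each sensor returns a noisy count of the targets
# inside its field of view, so one reading couples several locations.

PRIOR = 0.2   # assumed prior probability that a location holds a target
SIGMA = 0.3   # assumed Gaussian noise level on sensor readings

def metric(partial, sensors):
    """Fano-style metric: log prior of the partial hypothesis plus the
    log-likelihood of every sensor whose field of view is fully decided."""
    k = len(partial)
    m = sum(math.log(PRIOR if b else 1.0 - PRIOR) for b in partial)
    for fov, reading in sensors:
        if max(fov) < k:  # all locations this sensor sees are decided
            predicted = sum(partial[i] for i in fov)
            m += -((reading - predicted) ** 2) / (2.0 * SIGMA ** 2)
    return m

def sequential_decode(n, sensors, max_expansions=10_000):
    """Best-first (stack algorithm) search over partial hypotheses."""
    heap = [(0.0, ())]  # entries are (-metric, partial hypothesis)
    expansions = 0
    while heap and expansions < max_expansions:
        _, partial = heapq.heappop(heap)
        expansions += 1
        if len(partial) == n:
            # Extensions only decrease the metric, so the first complete
            # hypothesis popped maximizes the metric.
            return list(partial), expansions
        for bit in (0, 1):  # extend the prefix by one location
            child = partial + (bit,)
            heapq.heappush(heap, (-metric(child, sensors), child))
    return None, expansions  # search budget exhausted

# Toy scene: 5 locations, targets at 0 and 3, four overlapping sensors
# given as (field_of_view, noisy_count_reading).
sensors = [((0, 1), 1.1), ((1, 2), 0.05), ((2, 3), 0.9), ((3, 4), 1.0)]
estimate, cost = sequential_decode(5, sensors)
print("estimate:", estimate, "expansions:", cost)  # expect [1, 0, 0, 1, 0]

In this sketch the number of hypothesis expansions, rather than the full 2**n hypothesis count, governs the cost, which is how additional measurements can buy computational efficiency: more informative readings prune poor prefixes earlier in the search.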
