Abstract
A detection problem in sensor networks is considered, where the sensor nodes are placed on a line and receive partial information about their environment. The nodes transmit a summary of their observations over a noisy communication channel to a fusion center for the purpose of detection. The observations at the sensors are samples of a spatial stochastic process, which is one of two possible signals corrupted by Gaussian noise. Two cases are considered: one where the signal is deterministic under each hypothesis, and the other where the signal is a correlated Gaussian process under each hypothesis. The nodes are assumed to be subject to a power density constraint, i.e., the power per unit distance is fixed, so that the power per node decreases in inverse proportion to the node density. Under this constraint, the central question that is addressed is: how dense should the sensor array be, i.e., is it better to use a few high-cost, high-power nodes or many low-cost, low-power nodes? An answer to this question is obtained by resorting to an asymptotic analysis where the number of nodes is large. In this asymptotic regime, the Gärtner-Ellis theorem and related large-deviations results are used to study the impact of node density on system performance. For the deterministic signal case, it is shown that performance improves monotonically with sensor density. For the stochastic signal case, a finite sensor density is shown to be optimal.
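To make the density tradeoff concrete, the sketch below (not taken from the paper) evaluates a crude numerical proxy for the correlated-signal case: amplify-and-forward sensors spaced evenly on a line, a fixed power budget per unit distance split among the nodes, and the Bhattacharyya distance between the two fusion-center distributions standing in for the large-deviations error exponent that the abstract says is obtained via the Gärtner-Ellis theorem. All model choices and parameter values (line length, noise levels, the exponential signal covariance) are illustrative assumptions, not the paper's.

import numpy as np

# Illustrative sketch only: compare sensor densities under a fixed
# power-per-unit-distance budget, using the Bhattacharyya distance between
# the two hypotheses as a rough proxy for the detection error exponent.
# All parameter values below are assumptions made for illustration.

L = 10.0          # length of the line on which the sensors are placed
P_density = 1.0   # transmit power available per unit distance (fixed)
sigma_obs = 1.0   # observation (sensor) noise standard deviation
sigma_ch = 1.0    # channel noise standard deviation at the fusion center
corr_len = 1.0    # correlation length of the stochastic signal
sigma_sig = 1.0   # signal standard deviation

def bhattacharyya_zero_mean(S0, S1):
    """Bhattacharyya distance between N(0, S0) and N(0, S1)."""
    Sm = 0.5 * (S0 + S1)
    _, ld_m = np.linalg.slogdet(Sm)
    _, ld_0 = np.linalg.slogdet(S0)
    _, ld_1 = np.linalg.slogdet(S1)
    return 0.5 * ld_m - 0.25 * (ld_0 + ld_1)

def exponent_proxy(n):
    """Per-unit-distance Bhattacharyya distance for n amplify-and-forward
    sensors spaced evenly on [0, L], each with power P_density * L / n."""
    pos = np.linspace(0.0, L, n)
    # Correlated Gaussian signal under H1 (exponential covariance kernel).
    K_sig = sigma_sig**2 * np.exp(-np.abs(pos[:, None] - pos[None, :]) / corr_len)
    P_node = P_density * L / n
    # Amplify-and-forward gain chosen so each node meets its power budget
    # under H1, where signal plus observation noise drives the transmit power.
    a = np.sqrt(P_node / (sigma_sig**2 + sigma_obs**2))
    # Fusion-center covariances for y = a * (s + w_obs) + w_ch.
    S0 = (a**2 * sigma_obs**2 + sigma_ch**2) * np.eye(n)
    S1 = a**2 * (K_sig + sigma_obs**2 * np.eye(n)) + sigma_ch**2 * np.eye(n)
    return bhattacharyya_zero_mean(S0, S1) / L

# Sweep the number of nodes at fixed total power to see how the proxy
# behaves as the per-node power shrinks and the samples become more correlated.
for n in [5, 10, 20, 40, 80, 160, 320]:
    print(f"n = {n:4d}  density = {n / L:5.1f}  exponent proxy = {exponent_proxy(n):.4f}")

The sweep trades higher per-node SNR at low density against increasingly redundant, correlated samples at high density; whether this simplified proxy saturates or peaks at a finite density depends on the assumed parameters, and it does not reproduce the paper's exact analysis.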