Abstract

Collaboration in visual sensor networks (VSNs) is essential not only to compensate for the processing, sensing, energy, and bandwidth limitations of each sensor node but also to improve the accuracy and robustness of the network. In this paper, we study target localization in VSNs, a challenging computer vision problem because of two distinctive features of cameras: their much higher data rate and their directional sensing with a limited field of view. Traditionally, the problem is solved by localizing targets at the intersections of the 2D cones back-projected from each target. However, visual occlusion among targets generates many false alarms. In this work, instead of resolving the uncertainty about target existence at the intersections, we identify and study the non-occupied areas inside each cone and build a so-called certainty map of target non-existence. After fusing the inputs from a set of sensor nodes, the regions that remain unresolved on the certainty map indicate the target locations. This paper focuses on the design of a lightweight, energy-efficient, and robust solution in which each camera node transmits only a very limited amount of data and only a limited number of camera nodes is involved. We propose a dynamic itinerary for certainty-map integration, along which the map is progressively clarified from sensor to sensor. Once the confidence of the certainty map is sufficient, targets are localized at the remaining unresolved regions. Results from both simulations and real experiments show that the proposed progressive method is effective in detection accuracy as well as in energy and bandwidth efficiency.
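To make the certainty-map idea concrete, below is a minimal Python sketch of the fusion logic described above: each node marks grid cells that lie inside its field of view but outside every back-projected target cone as certainly empty, and the maps are merged progressively along an itinerary until enough of the map is resolved. The grid representation, the cell states, the `confidence_target` threshold, and the `node.certainty_map()` interface are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Assumed cell states for an occupancy-style certainty map (illustrative only).
UNKNOWN = 0      # not yet observed by any camera
EMPTY = 1        # certainly contains no target (cleared by some camera)
UNRESOLVED = 2   # observed, but inside a back-projected target cone

def local_certainty_map(grid_shape, in_fov, in_target_cone):
    """Build one camera node's certainty map.

    in_fov, in_target_cone: boolean arrays of shape grid_shape.
    Cells inside the field of view but outside every back-projected
    target cone are marked EMPTY; cells inside a cone stay UNRESOLVED.
    """
    cmap = np.full(grid_shape, UNKNOWN, dtype=np.uint8)
    cmap[in_fov & ~in_target_cone] = EMPTY
    cmap[in_fov & in_target_cone] = UNRESOLVED
    return cmap

def fuse(accumulated, incoming):
    """Merge the next node's map into the accumulated map.

    A cell cleared by any node is empty overall; a cell stays
    UNRESOLVED only as long as no node has cleared it.
    """
    fused = accumulated.copy()
    fused[incoming == EMPTY] = EMPTY
    newly_seen = (incoming == UNRESOLVED) & (fused == UNKNOWN)
    fused[newly_seen] = UNRESOLVED
    return fused

def localize(itinerary, grid_shape, confidence_target=0.95):
    """Visit nodes along the itinerary until the map is confident enough.

    `itinerary` is assumed to yield node objects exposing a
    certainty_map() method; the coverage ratio below is only a
    placeholder for the paper's confidence measure.
    """
    cmap = np.full(grid_shape, UNKNOWN, dtype=np.uint8)
    for node in itinerary:
        cmap = fuse(cmap, node.certainty_map())
        resolved_fraction = np.mean(cmap != UNKNOWN)
        if resolved_fraction >= confidence_target:
            break
    # Remaining UNRESOLVED cells are the candidate target locations.
    return np.argwhere(cmap == UNRESOLVED)
```

In this sketch, the progressive character of the method comes from visiting nodes one at a time and stopping early once the confidence criterion is met, which is what limits both the number of participating camera nodes and the data each one must transmit.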
