Abstract

This paper investigates mobile object tracking in visual sensor networks. When camera-equipped visual sensors are randomly deployed in a monitored environment, many sensors cover the same mobile object: images of the object may be captured simultaneously by different sensors from different orientations, and the captured images are then sent back to a base station or server. However, achieving full coverage of a set of selected characteristic points on an object invariably produces a great deal of redundant image data, which consumes transmission energy in the network. A novel approach is proposed to overcome this problem: by predicting the direction and speed of the mobile object, the minimal number of sensors required for set coverage can be determined. Such sensor sets cover the maximal number of characteristic points of view of the mobile object at any one time. Simulation results show that the approach reduces transmission cost while preserving the maximal coverage range of mobile objects.
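The sensor-selection step described above can be framed as a set-cover problem: pick as few sensors as possible whose fields of view jointly cover the object's characteristic points. The abstract does not specify the paper's algorithm, so the sketch below uses a standard greedy set-cover heuristic as one plausible illustration; the sensor IDs and coverage sets are hypothetical examples, not data from the paper.

```python
# Greedy set-cover sketch: repeatedly pick the sensor that covers the
# most still-uncovered characteristic points. This is an illustrative
# heuristic, not the paper's actual selection procedure.

def select_sensors(coverage, points):
    """coverage: dict mapping sensor id -> set of characteristic points
    that sensor can see. Returns a small list of sensors whose union
    covers `points` (or as many of them as are coverable)."""
    uncovered = set(points)
    chosen = []
    while uncovered:
        # Sensor with the largest gain over the remaining points.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:  # remaining points cannot be covered by any sensor
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

if __name__ == "__main__":
    # Hypothetical fields of view for four camera sensors.
    cams = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {1, 6}}
    print(select_sensors(cams, {1, 2, 3, 4, 5, 6}))  # → ['s1', 's3']
```

In a tracking setting, the predicted direction and speed of the object would shrink each sensor's coverage set to the points expected to remain visible over the next interval, so the greedy pass selects sensors that stay useful as the object moves.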
