Abstract

Pan–tilt–zoom (PTZ) camera networks play an important role in surveillance systems because they can direct attention to interesting events occurring in the scene. One way to achieve this behavior is a process known as sensor slaving: one or more master cameras monitor a wide area and track moving targets, providing positional information to one or more slave cameras. The slave cameras can thus point at the targets and observe them at high resolution. In this paper we describe a novel framework that exploits a PTZ camera network to relate, with high accuracy, the feet position of a person in the master camera image to their head position in the slave camera image. Each camera in the network can act as either a master or a slave camera, allowing wide and geometrically complex areas to be covered with a relatively small number of sensors. The proposed framework does not require any known 3D location to be specified and takes into account both zooming and target-position uncertainties. Quantitative results show good performance in target head localization, independently of the zoom factor of the slave camera. An example of a cooperative tracking approach built on the proposed framework is also presented.
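To make the sensor-slaving idea concrete, the sketch below shows a generic, conventional slaving pipeline: the target's feet position in the master image is projected onto the ground plane with a homography, and the slave PTZ camera is steered toward the corresponding head point assuming a nominal person height. This is only an illustration of the general concept; it is not the calibration-free framework proposed in the paper, and the homography, camera position, and target height used here are hypothetical placeholders.

```python
# Generic sensor-slaving sketch (illustrative only, not the paper's method).
# Assumptions: H_MASTER_TO_GROUND, SLAVE_POSITION, and ASSUMED_TARGET_HEIGHT
# are placeholders that would normally come from calibration.
import numpy as np

H_MASTER_TO_GROUND = np.eye(3)               # placeholder image-to-ground homography
SLAVE_POSITION = np.array([5.0, 0.0, 3.0])   # placeholder slave camera position (m)
ASSUMED_TARGET_HEIGHT = 1.75                 # nominal person height (m), an assumption

def feet_to_ground_point(feet_px):
    """Project the feet pixel in the master image onto the ground plane (z = 0)."""
    p = np.array([feet_px[0], feet_px[1], 1.0])
    g = H_MASTER_TO_GROUND @ p
    return np.array([g[0] / g[2], g[1] / g[2], 0.0])

def slave_pan_tilt(target_xyz):
    """Pan and tilt angles (radians) that point the slave camera at a 3D point."""
    d = target_xyz - SLAVE_POSITION
    pan = np.arctan2(d[1], d[0])
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return pan, tilt

# Steer the slave toward the target's head, assuming a nominal height.
feet = (412.0, 655.0)                        # feet position in the master image (px)
ground = feet_to_ground_point(feet)
head = ground + np.array([0.0, 0.0, ASSUMED_TARGET_HEIGHT])
pan, tilt = slave_pan_tilt(head)
print(f"pan={np.degrees(pan):.1f} deg, tilt={np.degrees(tilt):.1f} deg")
```

In contrast to this placeholder-calibration sketch, the framework described in the abstract avoids specifying known 3D locations and explicitly models the uncertainties introduced by zooming and by the target's estimated position.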
