Abstract

Pan-tilt-zoom (PTZ) camera networks play an important role in surveillance systems, since they can direct attention to interesting events in the scene. One method to achieve such behavior is a process known as sensor slaving: one or more master cameras monitor a wide area and track moving targets, providing positional information to one or more slave cameras. The slave camera can thus foveate on the targets in high resolution. In this chapter we consider the problem of estimating online the time-variant transformation from a person's foot position in the image of a fixed camera to their head position in the image of a PTZ camera. The transformation yields high-resolution images by steering the PTZ camera toward targets detected in the fixed camera view. Assuming a planar scene and modeling humans as vertical segments, we present an uncalibrated framework that does not require any known 3D location to be specified and that takes into account both zooming-camera and target uncertainties. Results show good performance in localizing the target's head in the slave camera view, degrading when a high zoom factor causes a lack of feature points. A cooperative tracking approach exploiting an instance of the proposed framework is also presented.
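As a rough illustration of the planar-scene, vertical-segment formulation described above, the sketch below maps a foot point from the master view to the slave view through a ground-plane homography and then places the head along the image line toward the vertical vanishing point of the slave view. The function names, the interpolation parameter `alpha`, and the example numbers are assumptions made for illustration only; they are not the authors' implementation, which additionally handles the time-variant (zooming) geometry and the associated uncertainties.

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 to a 2D pixel coordinate."""
    return np.array([p[0], p[1], 1.0])

def from_homogeneous(p):
    """Normalize a homogeneous point back to pixel coordinates."""
    return p[:2] / p[2]

def map_foot_to_slave(foot_master, H_ground):
    """Map a foot (ground-plane) point from the master view to the slave view
    through the ground-plane homography H_ground (master -> slave)."""
    return from_homogeneous(H_ground @ to_homogeneous(foot_master))

def estimate_head_in_slave(foot_slave, v_vertical, alpha):
    """Place the head on the image line joining the foot point and the
    (finite) vertical vanishing point v_vertical of the slave view.
    alpha in (0, 1) is an illustrative fraction of that segment covered by
    the person's apparent height in the image."""
    f = to_homogeneous(foot_slave)
    v = np.asarray(v_vertical, dtype=float)   # homogeneous vanishing point
    head = (1.0 - alpha) * f / f[2] + alpha * v / v[2]
    return from_homogeneous(head)

# Example with illustrative numbers:
H = np.eye(3)                                  # placeholder homography
foot_s = map_foot_to_slave((320.0, 440.0), H)
head_s = estimate_head_in_slave(foot_s, v_vertical=(315.0, -2000.0, 1.0), alpha=0.2)
```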
