Abstract

Object triangulation, 3-D object tracking, feature correspondence, and camera calibration are key problems for estimation from camera networks. This paper addresses these problems within a unified Bayesian framework for joint multi-object tracking and camera calibration, based on the finite set statistics methodology. In contrast to the mainstream approaches, an alternative parametrization is investigated for triangulation, called disparity space. The approach for feature correspondence is based on the probability hypothesis density (PHD) filter, and hence inherits the ability to handle the initialization of new tracks as well as the discrimination between targets and clutter within a Bayesian paradigm. The PHD filtering approach then forms the basis of a camera calibration method from static or moving objects. Results are shown on simulated and real data.

Highlights

  • Detection, localization, and tracking of an object's state from active sensors, such as radar, range-finding laser, and sonar, are usually determined from the sensor measurements using a stochastic filter, such as the Kalman filter [26], to provide statistically optimal estimates

  • The objective of this paper is to describe a statistical framework for joint 3-D object state estimation and camera calibration, which considers both the geometry and the observation characteristics of the cameras

  • While we have described the procedure for two cameras, the approach can be straightforwardly extended to more cameras by introducing a disparity space for each camera
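The disparity-space parametrization mentioned in the highlights can be illustrated for the simplest case of a rectified stereo pair, where a 3-D point in the reference camera frame maps to image coordinates plus a disparity. The sketch below is a minimal illustration, not the paper's formulation; the focal length `F` and baseline `B` are assumed example values.

```python
import numpy as np

# Assumed intrinsics for illustration only (not from the paper).
F = 500.0   # focal length in pixels
B = 0.12    # stereo baseline in metres

def euclidean_to_disparity(p):
    """Map a 3-D point (x, y, z) in the reference camera frame to (u, v, d),
    where (u, v) are image coordinates and d = F*B/z is the disparity."""
    x, y, z = p
    return np.array([F * x / z, F * y / z, F * B / z])

def disparity_to_euclidean(q):
    """Inverse map: recover (x, y, z) from a disparity-space point (u, v, d)."""
    u, v, d = q
    z = F * B / d
    return np.array([u * z / F, v * z / F, z])
```

With one such disparity space per camera, extending the procedure to more cameras amounts to repeating this change of coordinates for each reference view.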


Summary

Introduction

Detection, localization, and tracking of an object's state from active sensors, such as radar, range-finding laser, and sonar, are usually determined from the sensor measurements using a stochastic filter, such as the Kalman filter [26], to provide statistically optimal estimates. When the use of active sensors is not possible, passive sensors, such as cameras, are the alternative. Calculating the distance of objects from cameras requires triangulation. The traditional means of triangulation from a pair of image observations are well known: if the observations of the object are perfect, the triangulated position can be calculated using knowledge of the sensor geometry [15].

Manuscript received April 27, 2015; revised November 11, 2015; accepted January 14, 2016. Date of publication February 03, 2016; date of current version April 18, 2016. The associate editor coordinating the review of this manuscript and approving it for publication was Prof.
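For perfect observations and known sensor geometry, the classical triangulation step can be sketched with the standard linear (DLT) method: each pixel observation contributes two linear constraints on the homogeneous 3-D point, and the solution is the null vector of the stacked system. This is a generic sketch of two-view triangulation, not the paper's proposed method; the function name and the camera matrices in the example are illustrative.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) observations of the same point in each image.
    Returns the 3-D point in Euclidean coordinates.
    """
    # Each view contributes two rows of the form u*P[2] - P[0] and v*P[2] - P[1].
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

When the observations are noisy, as in practice, this algebraic solution is only approximate, which motivates the statistical treatment developed in the paper.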


