Abstract

We present an approach to track several subjects in real time from video sequences acquired by multiple cameras. We address the key concerns of real-time performance and continuity of tracking across overlapping and non-overlapping fields of view. Each human subject is represented by a parametric ellipsoid whose state vector encodes its position, velocity, and height. We also encode visibility and persistence to tackle problems of distraction and short-period occlusion. To improve likelihood computation from different viewpoints, including the relocation of subjects after network blind spots, the colored and textured surface of each ellipsoid is learned progressively as the subject moves through the scene. This is combined with information about subject position and velocity to perform camera handoff. For real-time performance, the boundary of the ellipsoid can be projected several hundred times per frame for comparison with the observation image. Further, our implementation employs a particle filter developed for parallel implementation on a graphics processing unit. We have evaluated our algorithm on standard data sets using metrics for multiple object tracking accuracy (MOTA) and processing speed, and show significant improvements over published work.
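The pipeline described above (a per-subject state of position, velocity, and height propagated by a particle filter and weighted against the camera image) can be illustrated with a minimal sketch. This is not the authors' GPU implementation: the constant-velocity motion model, the placeholder Gaussian likelihood standing in for the learned ellipsoid-appearance comparison, and all parameter values are assumptions for illustration only.

```python
# Minimal particle-filter sketch (illustrative, not the paper's implementation).
# State per particle: [x, y, vx, vy, height].
import numpy as np

N_PARTICLES = 500          # the paper projects the ellipsoid boundary hundreds of times per frame
rng = np.random.default_rng(0)

def predict(particles, dt=1.0, pos_noise=0.05, vel_noise=0.02, h_noise=0.01):
    """Propagate particles with an assumed constant-velocity model plus Gaussian noise."""
    particles = particles.copy()
    particles[:, 0:2] += particles[:, 2:4] * dt                       # position += velocity * dt
    particles[:, 0:2] += rng.normal(0, pos_noise, (len(particles), 2))
    particles[:, 2:4] += rng.normal(0, vel_noise, (len(particles), 2))
    particles[:, 4]   += rng.normal(0, h_noise, len(particles))
    return particles

def likelihood(particle, observation):
    """Placeholder likelihood: in the paper this step compares the projected,
    progressively learned ellipsoid surface with the observation image."""
    return np.exp(-0.5 * np.sum((particle[0:2] - observation) ** 2) / 0.1)

def update(particles, observation):
    """Weight particles by the likelihood and resample (systematic resampling)."""
    weights = np.array([likelihood(p, observation) for p in particles])
    weights += 1e-12                                                  # avoid all-zero weights
    weights /= weights.sum()
    positions = (np.arange(len(particles)) + rng.random()) / len(particles)
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.clip(idx, 0, len(particles) - 1)
    return particles[idx]

# Initialise particles around a first detection at (0, 0) with ~1.7 m height.
particles = np.column_stack([
    rng.normal(0.0, 0.2, (N_PARTICLES, 2)),   # position
    rng.normal(0.0, 0.1, (N_PARTICLES, 2)),   # velocity
    rng.normal(1.7, 0.1, N_PARTICLES),        # height
])

observation = np.array([0.3, 0.1])            # mock ground-plane measurement
particles = update(predict(particles), observation)
print("state estimate:", particles.mean(axis=0))
```

In the paper this per-particle evaluation is the part mapped to the GPU, since each projection and image comparison is independent of the others.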
