Abstract

Monocular multiple-object tracking is a fundamental yet under-addressed computer vision problem. In this paper, we propose a novel learning framework for tracking multiple objects by detection. First, instead of heuristically designing a tracking algorithm, we learn from labeled video data a discriminative structured prediction model that captures the interdependence of multiple influencing factors. Given the joint state of the targets at the previous time step and the observation at the current frame, the joint state at the current time step is inferred by maximizing the joint probability score. Second, our detection results benefit from tracking cues. Traditional detection algorithms rely on non-maximum suppression as a post-processing step to select a subset of the raw detection responses as the final output, which introduces many selection mistakes, especially in congested scenes. Our method integrates both detection and tracking cues, which reduces the risk of such post-processing mistakes and improves tracking performance. Finally, we formulate model training as a convex optimization problem and estimate its parameters with cutting-plane optimization. Experiments show that our method performs effectively in a wide variety of scenarios, including pedestrian tracking in crowded scenes and vehicle tracking in congested traffic.
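The abstract does not give the concrete form of the model, but the inference step it describes (choosing the current joint target state that maximizes a learned joint score, given the previous joint state and the current observation) can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the linear score w · Φ, the toy feature map joint_feature, and the enumerated candidate_states are hypothetical stand-ins, not the paper's actual design.

```python
import numpy as np

def joint_feature(observation, prev_state, state):
    """Hypothetical joint feature map Phi(x_t, y_{t-1}, y_t).

    Stacks a detector-score term with a simple motion-smoothness term;
    the paper's real feature design is not specified in the abstract.
    """
    detection_term = np.array([observation[state]])
    motion_term = -np.abs(np.asarray(state, float) - np.asarray(prev_state, float))
    return np.concatenate([detection_term, motion_term])

def infer_joint_state(w, observation, prev_state, candidate_states):
    """MAP-style inference: return the candidate joint state maximizing w . Phi."""
    scores = [w @ joint_feature(observation, prev_state, y) for y in candidate_states]
    return candidate_states[int(np.argmax(scores))]

# Toy usage: two targets on a 1-D axis; each candidate is a joint position tuple.
w = np.array([1.0, 0.1, 0.1])                          # assumed (learned) weights
prev_state = (2, 7)                                    # joint state at t-1
candidates = [(2, 7), (3, 8), (3, 6)]                  # enumerated joint states at t
observation = {(2, 7): 0.4, (3, 8): 0.9, (3, 6): 0.3}  # detector scores per candidate
print(infer_joint_state(w, observation, prev_state, candidates))  # -> (3, 8)
```

In the paper the weights are learned from labeled video via the convex, cutting-plane formulation rather than set by hand, and the candidate joint states would be derived from the detector's responses instead of being enumerated manually as in this toy example.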
