Abstract
Multiple Object Tracking (MOT) methods based on per-pixel prediction and association have achieved remarkable progress recently. These approaches typically select points in the central region of the bounding box as positive samples and all other points as negative samples. Under severe occlusion, such sample-allocation schemes can cause the central region to be contaminated by samples from the occluding object, significantly degrading both detection and association performance. To address this issue, we propose a novel Dynamic-Center-Point-based Multiple Object Tracking method (DCP-MOT), which automatically identifies the visible center region of occluded objects. Compared with the conventional bounding-box center point, our Dynamic Center Point (DCP) better represents the visible region of an occluded object. Specifically, we design an Iterative Refinement Branch that generates a dynamic center point for each occluded object. It consists of two parts: a Center Probability Predictor and a Center Generator. First, the Center Probability Predictor derives an accurate probability map for the occluded object. Then, the Center Generator quantifies the probability map with a Mutual Exclusive Potential Function, yielding a dynamic center point for the occluded object that is distinct from that of its occluding counterpart. Finally, by combining bounding-box center points for unoccluded objects with our dynamic center points for occluded objects, the JDE branch achieves better tracking performance. Extensive experiments demonstrate that our DCP-based MOT method surpasses bounding-box-center-based state-of-the-art methods in MOTA (+0.7, +2.1, +1.3) and IDF1 (+0.7, +0.9, +1.5) on the challenging MOT16, MOT17, and MOT20 datasets.
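The abstract does not give the exact form of the Mutual Exclusive Potential Function, so the following Python sketch is only a rough illustration of the idea: each pixel is scored by its predicted visibility probability minus an assumed exponential penalty around the occluder's center, and the arg-max is taken as the dynamic center point. The function name, the penalty form, and the weight `beta` are hypothetical, not the paper's actual formulation.

```python
import numpy as np

def dynamic_center_point(prob_map, occluder_center, beta=1.0):
    """Pick a visible center point for an occluded object (illustrative only).

    prob_map:        (H, W) map of the probability that each pixel lies in the
                     occluded object's visible region (the kind of output a
                     Center Probability Predictor would produce).
    occluder_center: (row, col) center point of the occluding object.
    beta:            weight of the assumed exclusion penalty.
    """
    h, w = prob_map.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Stand-in for a mutual-exclusion term: penalize pixels near the
    # occluder's center so the chosen point stays distinct from it.
    dist = np.hypot(rows - occluder_center[0], cols - occluder_center[1])
    score = prob_map - beta * np.exp(-dist / max(h, w))
    return np.unravel_index(np.argmax(score), score.shape)  # (row, col)


# Toy usage: a synthetic probability map with a visible region and an
# overlapping region shared with the occluder.
prob = np.zeros((64, 64))
prob[20:30, 20:30] = 0.9   # clearly visible part of the occluded object
prob[28:40, 28:40] = 0.6   # part overlapping the occluder
print(dynamic_center_point(prob, occluder_center=(34, 34)))
```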