Abstract

Multi-Object Tracking (MOT) aims to track the trajectories of multiple objects in a video over time. In MOT, object detection locates instances of objects in images or videos, and re-identification matches objects that share the same ID. Among recent MOT methods, one-shot MOT, which relies on a single neck and head for both object detection and appearance feature extraction, suffers from a conflict between object category classification and unique ID classification. To address this challenge, we propose integrating an FPN neck-based appearance feature extraction module into YOLOX. This method effectively mitigates the conflict between object category classification and unique ID classification in one-shot MOT. Furthermore, the proposed variable offset mechanism corrects the feature extraction position even when objects are occluded. The proposed technique achieved 83.7% mIDF1 and 75% mMOTA on the TITAN dataset, improving mIDF1 and mMOTA by 0.2% each over IoU-based MOT, and by 1.2% and 0.7% respectively over one-shot MOT.
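The abstract does not include code, but the variable-offset idea it describes, extracting an appearance embedding at an offset-corrected position on the feature map rather than at the raw detection centre, can be illustrated with a minimal NumPy sketch. All function names, shapes, and the bilinear-sampling formulation here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample a (C, H, W) feature map at a fractional
    location (y, x). Clamps coordinates to the map boundary."""
    C, H, W = feat.shape
    y = float(np.clip(y, 0, H - 1))
    x = float(np.clip(x, 0, W - 1))
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[:, y0, x0]
            + (1 - wy) * wx * feat[:, y0, x1]
            + wy * (1 - wx) * feat[:, y1, x0]
            + wy * wx * feat[:, y1, x1])

def appearance_embedding(feat, center, offset):
    """Hypothetical offset-corrected extraction: sample the FPN
    feature map at center + offset (e.g. shifted off an occluder),
    then L2-normalise so embeddings are comparable by cosine
    similarity during re-identification."""
    y, x = center[0] + offset[0], center[1] + offset[1]
    emb = bilinear_sample(feat, y, x)
    return emb / (np.linalg.norm(emb) + 1e-12)

# Example: 16-channel feature map, detection centre at (10, 10),
# a predicted offset nudging the sampling point.
feat = np.random.rand(16, 32, 32)
emb = appearance_embedding(feat, (10.0, 10.0), (0.3, -0.4))
```

In the paper's setting the offset would be predicted by a learned branch; here it is simply passed in so the sampling mechanics are visible in isolation.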
