Abstract

Achieving high performance in multi-object tracking algorithms relies heavily on modelling spatial-temporal relationships during the data association stage. Mainstream approaches to spatial-temporal relationship modelling are either rule-based or deep learning-based. The former relies on physical motion laws, offering wider applicability but yielding suboptimal results for complex object movements; the latter achieves high performance but lacks interpretability and involves complex module designs. This work aims to simplify deep learning-based spatial-temporal relationship models and to introduce interpretability into the features used for data association. Specifically, a lightweight single-layer transformer encoder is utilised to model spatial-temporal relationships. To make the features more interpretable, two contrastive regularisation losses based on representation alignment are proposed, derived from spatial-temporal consistency rules. By applying a weighted summation to the affinity matrices, the aligned features can be seamlessly integrated into the data association stage of the original tracking workflow. Experimental results show that our model improves the performance of most existing tracking networks without excessive complexity, with a minimal increase in training overhead and nearly negligible computational and storage costs.
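To make the described pipeline concrete, the sketch below illustrates one plausible reading of the abstract in PyTorch: a single-layer transformer encoder embeds per-object spatial-temporal features, and the resulting cosine-similarity affinity is combined with an existing (e.g. IoU- or motion-based) affinity matrix by weighted summation before data association. This is not the authors' code; the feature dimension, number of heads, fusion weight `alpha`, and input features are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialTemporalEncoder(nn.Module):
    """Lightweight spatial-temporal relationship model: a single transformer encoder layer."""

    def __init__(self, feat_dim: int = 128, n_heads: int = 4):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_objects, feat_dim) -- per-object spatial-temporal features
        return self.encoder(tokens)


def fused_affinity(track_emb, det_emb, base_affinity, alpha: float = 0.5):
    """Weighted summation of the learned affinity and the original rule-based affinity."""
    track_emb = F.normalize(track_emb, dim=-1)
    det_emb = F.normalize(det_emb, dim=-1)
    learned = track_emb @ det_emb.transpose(-1, -2)  # cosine-similarity affinity
    return alpha * learned + (1.0 - alpha) * base_affinity


# Toy usage: 5 tracks, 6 detections, 128-d features, and a pre-computed IoU affinity.
if __name__ == "__main__":
    enc = SpatialTemporalEncoder()
    tracks = enc(torch.randn(1, 5, 128))
    dets = enc(torch.randn(1, 6, 128))
    iou_affinity = torch.rand(1, 5, 6)
    print(fused_affinity(tracks, dets, iou_affinity).shape)  # torch.Size([1, 5, 6])
```

The fused matrix would then feed the original association step (e.g. Hungarian matching) unchanged, which is what allows the aligned features to slot into existing trackers with little extra cost.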
