Abstract
Deep learning has proven effective for the classification and localization of objects on the image or ground plane over time. The strength of its learned features has enabled researchers to analyze object trajectories across multiple cameras for online multi-object tracking (MOT) systems. In the past five years, these features have gained a reputation for handling several real-time multiple object tracking challenges, which has contributed to the growing number of deep learning methods (DLMs) and networks proposed in the computer vision community. The technique has efficiently handled various challenges in real-time MOT systems and improved overall tracking performance. However, it has experienced difficulties in detecting and tracking objects in overcrowded scenes, under motion variations, and under confusing appearance variations. Therefore, in this paper, we summarize and analyze the 95 contributions made in the past five years on deep learning-based online MOT methods and networks that rank highest in the public benchmark. We review their evolution, performance, advantages, and challenges under different experimental setups and tracking conditions. We further categorize these methods and networks into four main themes: Online MOT Based Detection Quality and Associations; Real-Time MOT with High-Speed Tracking and Low Computational Costs; Modeling Target Uncertainty in Online MOT; and Deep Convolutional Neural Network (DCNN), Affinity and Data Association. Finally, we discuss the ongoing challenges and directions for future research.
Highlights
In the past five years, deep learning-based online multi-object tracking (MOT) paradigms have been inferior to sparse principal component analysis [1], [2].
Wen et al. [26] capitalized on this by creating the CLEAR MOT evaluation metrics, which have been implemented in neoteric work on deep learning-based real-time MOT methods, multi-camera tracking techniques (MCTs), and Deep Convolutional Neural Networks (DCNNs) with the tracking-by-detection (TBD) approach to track objects across multiple frames [19], [26].
These evaluation metrics enabled standardized computation and presentation of multiple object tracking results in terms of false positives (FP), false negatives (FN), false alarms (FA), fragmentations of target trajectories (FM), multi-object tracking accuracy (MOTA), and multi-object tracking precision (MOTP) on public datasets created from both single-camera and multi-camera video captured in different environmental scenes.
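As a rough illustration of how these per-frame counts combine into the aggregate CLEAR MOT scores, the following is a minimal sketch with an assumed input structure (the frame dictionary keys are hypothetical, and this is not code from any of the surveyed trackers):

```python
# Minimal sketch of the standard CLEAR MOT aggregation, assuming per-frame
# counts have already been obtained from a matching step. The input layout
# (list of dicts with the keys below) is an assumption for this example.

def clear_mot_scores(frames):
    """frames: list of dicts, one per frame t, with keys
    'fp'          - false positives in frame t
    'fn'          - false negatives (misses) in frame t
    'id_switches' - identity switches in frame t
    'gt'          - number of ground-truth objects in frame t
    'overlaps'    - list of bounding-box overlaps d_{i,t}, one per matched target
    """
    total_fp      = sum(f['fp'] for f in frames)
    total_fn      = sum(f['fn'] for f in frames)
    total_idsw    = sum(f['id_switches'] for f in frames)
    total_gt      = sum(f['gt'] for f in frames)
    total_overlap = sum(sum(f['overlaps']) for f in frames)   # sum_{i,t} d_{i,t}
    total_matches = sum(len(f['overlaps']) for f in frames)   # sum_t c_t

    # MOTA penalizes misses, false positives, and identity switches.
    mota = 1.0 - (total_fn + total_fp + total_idsw) / max(total_gt, 1)
    # MOTP averages the localization overlap d_{i,t} over all matches c_t.
    motp = total_overlap / max(total_matches, 1)
    return mota, motp
```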
Summary
In the past five years, deep learning-based online multi-object tracking (MOT) paradigms have been inferior to sparse principal component analysis [1], [2]. Wen et al. [26] capitalized on this by creating the CLEAR MOT evaluation metrics, which have been implemented in neoteric work on deep learning-based real-time MOT methods, multi-camera tracking techniques (MCTs), and DCNNs with the tracking-by-detection (TBD) approach to track objects across multiple frames [19], [26]. These evaluation metrics enabled standardized computation and presentation of multiple object tracking results in terms of false positives (FP), false negatives (FN), false alarms (FA), fragmentations of target trajectories (FM), multi-object tracking accuracy (MOTA), and multi-object tracking precision (MOTP) on public datasets created from both single-camera and multi-camera video captured in different environmental scenes. In particular, MOTP is defined as \( \mathrm{MOTP} = \sum_{i,t} d_{i,t} \big/ \sum_{t} c_{t} \), where \( c_t \) denotes the number of matches in frame t and \( d_{i,t} \) is the bounding-box overlap of target i in frame t with its assigned ground-truth object.
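To make the tracking-by-detection idea concrete, the sketch below shows a single frame-to-frame association step that uses bounding-box IoU as the affinity score and the Hungarian algorithm for assignment. It assumes boxes in (x1, y1, x2, y2) format and is a generic illustration, not the data-association model of any specific method reviewed here; deep learning-based trackers typically augment or replace the IoU affinity with learned appearance features.

```python
# Generic tracking-by-detection association step (illustrative sketch only).
# Assumes tracks and detections are lists of axis-aligned boxes (x1, y1, x2, y2).
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Match existing track boxes to new detections for one frame.
    Returns (matched (track, detection) index pairs,
             unmatched track indices, unmatched detection indices)."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    # Cost = 1 - IoU, so the Hungarian algorithm maximizes total overlap.
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols)
               if 1.0 - cost[r, c] >= iou_threshold]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched detections would typically spawn new tracks and unmatched tracks would be kept alive or terminated after a few missed frames; those bookkeeping choices vary across the methods surveyed.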