Abstract

Multi-object tracking is a crucial research focus in computer vision, with broad application prospects in video surveillance, intelligent transportation, robot navigation and positioning, and other fields. However, multi-target visual perception is affected by lighting, weather, occlusion, and other factors, and is vulnerable to noise interference, leading to problems such as unstable image enhancement, limited target association accuracy, and poor robustness. In this study, the PP-Human framework is applied to multi-target tracking. Through training and refinement of the PP-Human model, integration of a high-accuracy detector, enhancement of pedestrian re-identification (ReID) techniques, and optimization of the data association approach, the model's multi-object detection performance is improved, achieving efficient and precise multi-target tracking. The proposed method is comprehensively evaluated on the extended Market-1501 dataset; through this series of optimization measures, multi-object tracking accuracy improves by 5.0%, reaching a MOTA of 95.0%. To validate the efficacy and robustness of this approach, experimental evaluations were conducted. The refined PP-Human framework demonstrated strong performance in multi-target tracking tasks, establishing a reliable basis for practical applications such as pedestrian analysis, behavior recognition, and flow statistics.
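
The abstract quotes MOTA without defining it; assuming the paper follows the usual CLEAR-MOT convention, MOTA (Multiple Object Tracking Accuracy) aggregates per-frame detection and association errors as

\[
\mathrm{MOTA} = 1 - \frac{\sum_t \left( \mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t \right)}{\sum_t \mathrm{GT}_t},
\]

where \(\mathrm{FN}_t\), \(\mathrm{FP}_t\), and \(\mathrm{IDSW}_t\) are the false negatives, false positives, and identity switches at frame \(t\), and \(\mathrm{GT}_t\) is the number of ground-truth objects in that frame. Under this definition, a MOTA of 95.0% means the combined error count is 5.0% of the total ground-truth annotations.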
