Abstract

Tracking and action-recognition algorithms are widely used in video surveillance, urban activity monitoring, and many other areas. Their development relies heavily on benchmarking scenarios that enable reliable evaluation and improvement of their performance. Current benchmarking methods for tracking and action-recognition algorithms rely on manually annotated video databases, which are prone to human error, limited in size, and time-consuming to produce. Here, an alternative benchmarking solution is presented, which employs methods and tools from the computer-game domain to create simulated video data with automatic annotations. The presented approach substantially outperforms existing solutions in the size of the data and the variety of annotations that can be created. With the proposed system, a user can generate sequences of randomized images involving different times of day, weather conditions, and scenes for use in tracking evaluation. The design of the proposed tool builds on and extends the concept of crowd simulation. The system is validated by comparison with existing methods.
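
As a purely illustrative sketch of the kind of randomized scenario selection described above (the parameter names and option lists below are hypothetical and not taken from the paper), a simulator of this type could expose a small per-sequence configuration drawn at random:

    import random

    # Hypothetical parameter space; the options actually exposed by the tool may differ.
    TIMES_OF_DAY = ["dawn", "noon", "dusk", "night"]
    WEATHER = ["clear", "rain", "fog", "snow"]
    SCENES = ["street", "square", "park"]

    def sample_sequence_config(num_frames, seed=None):
        """Draw one random scenario description for a simulated evaluation sequence."""
        rng = random.Random(seed)
        return {
            "scene": rng.choice(SCENES),
            "time_of_day": rng.choice(TIMES_OF_DAY),
            "weather": rng.choice(WEATHER),
            "num_frames": num_frames,
        }

    print(sample_sequence_config(num_frames=500, seed=42))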

Highlights

  • Person tracking and action-recognition algorithms for video streaming data have recently attracted considerable interest, inspired and motivated by a variety of possible applications

  • The mostly tracked (MT) and mostly lost (ML) metrics offer a useful comparison: for MT a higher value is desired (IOU had the highest overall score), whereas for ML a lower value is expected (IOU had the lowest overall value); see the sketch after this list

  • Among the tested tracking methods, the intersection over union (IOU) algorithm achieved the best performance and the discrete-continuous energy (DCT) algorithm the worst; the latter because a search for optimal values of the DCT's internal parameters was beyond the scope of this work
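
For readers unfamiliar with these metrics, the following minimal sketch (an illustration, not the evaluation code used in the paper) shows the bounding-box overlap measure conventionally used to match tracker output to ground truth, and how mostly tracked (MT) and mostly lost (ML) counts are derived from per-track coverage:

    def iou(box_a, box_b):
        """Intersection over union of two boxes given as (x, y, width, height)."""
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        # Width and height of the overlap rectangle (zero if the boxes are disjoint).
        inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = inter_w * inter_h
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    def mt_ml(track_coverage, mt_threshold=0.8, ml_threshold=0.2):
        """Count mostly tracked (MT) and mostly lost (ML) ground-truth trajectories.

        track_coverage maps a ground-truth track id to the fraction of its frames
        in which the tracker produced a matching hypothesis (e.g. IoU >= 0.5).
        """
        mt = sum(1 for c in track_coverage.values() if c >= mt_threshold)
        ml = sum(1 for c in track_coverage.values() if c <= ml_threshold)
        return mt, ml

    # Example: three ground-truth tracks covered in 90%, 50%, and 10% of their frames.
    print(iou((0, 0, 10, 10), (5, 5, 10, 10)))          # ~0.143
    print(mt_ml({"gt1": 0.9, "gt2": 0.5, "gt3": 0.1}))  # (1, 1)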

Introduction

Person tracking and action-recognition algorithms for video streaming data have recently attracted considerable interest, inspired and motivated by a variety of possible applications. Numerous approaches exist, often multistage and complicated, that extract different types of information from the data. Their further improvement and evolution require reliable evaluation and benchmarking scenarios. Existing benchmarking methods test the reliability of tracking and action-recognition algorithms on public datasets manually annotated by their authors. The original release of one such benchmark consisted of three subsets of data: the first designed for pedestrian counting and density analysis, the second for pedestrian tracking, and the last for analysis of traffic flow and event recognition. For the evaluation of tracking methods, the most popular dataset is included in MOTChallenge [2,3,4,5], which provides datasets with ground truth, validation MATLAB scripts, and the possibility of uploading results so that methods can be ranked with respect to quality and accuracy.
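
As an illustration of how such ground-truth annotations are typically consumed (a minimal sketch assuming the standard MOTChallenge text format, in which each row begins with frame, track id, bb_left, bb_top, bb_width, bb_height; the path in the usage comment is hypothetical):

    import csv
    from collections import defaultdict

    def load_mot_ground_truth(path):
        """Read a MOTChallenge-style gt.txt into {frame: [(track_id, (x, y, w, h)), ...]}.

        Each row starts with: frame, track id, bb_left, bb_top, bb_width, bb_height;
        the remaining columns (confidence, class, visibility, ...) differ between
        releases and are ignored here.
        """
        per_frame = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if not row:
                    continue
                frame, track_id = int(float(row[0])), int(float(row[1]))
                box = tuple(float(v) for v in row[2:6])  # (bb_left, bb_top, bb_width, bb_height)
                per_frame[frame].append((track_id, box))
        return per_frame

    # Hypothetical path; actual sequence names depend on the MOTChallenge release used.
    # annotations = load_mot_ground_truth("MOT17/train/MOT17-02/gt/gt.txt")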

