Abstract

Multiple object tracking (MOT) is an essential task in computer vision, with many practical applications in surveillance, robotics, autonomous driving, and biology. To compare MOT algorithms efficiently and select the best one for an application, we rely on tracking metrics that reduce the performance of a tracking algorithm to a single score. However, the tracking metrics themselves are rarely tested, so unnoticed biases or flaws in a metric can influence which tracking algorithm is selected as the best. Checking tracking metrics for limitations or biases toward penalizing specific tracking errors requires a standardized evaluation of the metrics themselves. We propose benchmarking tracking metrics using synthetic, erroneous tracking results that simulate real-world tracking errors. First, we select common real-world tracking errors from the literature and describe how to emulate them. Then, we validate our approach by reproducing previously reported tracking metric limitations through simulating specific tracking errors. In addition, our benchmark reveals a previously unreported limitation in the tracking metric AOGM. Moreover, we make an implementation of our benchmark publicly available.
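To make the benchmarking principle concrete, below is a minimal sketch: start from perfect tracks, inject a known error type (here, an identity switch), and observe how a metric scores it. It assumes the py-motmetrics package for scoring; the synthetic tracks and the inject_id_switch helper are illustrative stand-ins, not the paper's implementation, and the AOGM metric discussed in the abstract is not shown.

```python
import numpy as np
import motmetrics as mm

# Synthetic ground truth: two objects moving on parallel lines for 10 frames.
# gt[f] maps a frame index to (object ids, positions).
gt = {f: ([1, 2], np.array([[float(f), 0.0], [float(f), 5.0]])) for f in range(10)}

def inject_id_switch(tracks, frame):
    """Hypothetical helper: copy the tracks, swapping the two IDs from
    `frame` onward, which emulates an identity-switch tracking error."""
    swap = {1: 2, 2: 1}
    return {f: ([swap[i] for i in ids] if f >= frame else list(ids), pos)
            for f, (ids, pos) in tracks.items()}

hyp = inject_id_switch(gt, frame=5)

acc = mm.MOTAccumulator(auto_id=True)
for f in range(10):
    gt_ids, gt_pos = gt[f]
    hy_ids, hy_pos = hyp[f]
    # Pairwise Euclidean distances; entries above the matching threshold
    # are set to NaN so they cannot be matched.
    d = np.linalg.norm(gt_pos[:, None, :] - hy_pos[None, :, :], axis=2)
    d[d > 1.0] = np.nan
    acc.update(gt_ids, hy_ids, d)

mh = mm.metrics.create()
print(mh.compute(acc, metrics=["mota", "idf1", "num_switches"], name="id_switch"))
```

Running the same injected error through several metrics side by side is what exposes how differently they penalize it, which is the comparison the proposed benchmark systematizes.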
