Abstract

Recent online multi-object tracking approaches combine single-object trackers, which capture object motion, with affinity networks, which associate objects by their appearance. These affinity networks typically build on complex feature representations (re-ID embeddings) or sophisticated scoring functions whose objective is to match current detections with previous tracklets using short-term appearance information. However, the drastic appearance changes that occur along object trajectories acquired by omnidirectional cameras degrade performance, since affinity networks ignore the variation of long-term appearance information. In this paper, we deal with these appearance changes in a coherent way by proposing a novel affinity model that predicts an object's new visual appearance by considering its long-term appearance information. Our affinity model includes a convolutional LSTM encoder-decoder architecture that learns the space-time appearance transformation between consecutive re-ID feature representations along the object trajectory. Experimental results show that our model achieves promising performance on several multi-object tracking datasets recorded with omnidirectional cameras.
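To make the architecture concrete, below is a minimal sketch of a ConvLSTM encoder-decoder that consumes a tracklet's sequence of re-ID feature maps and predicts the next one. The abstract only states that a convolutional LSTM encoder-decoder is used over re-ID representations; the class names (`ConvLSTMCell`, `AppearancePredictor`), channel sizes, tensor shapes, and the single-step decoder are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: ConvLSTM encoder-decoder over re-ID feature maps.
# All hyperparameters and names below are assumptions for illustration.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: all four gates from one shared convolution."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution yields the input, forget, cell, and output gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class AppearancePredictor(nn.Module):
    """Encoder summarizes the tracklet's re-ID maps; the decoder rolls the
    state forward one step to predict the next appearance map."""

    def __init__(self, feat_ch=128, hid_ch=64):
        super().__init__()
        self.encoder = ConvLSTMCell(feat_ch, hid_ch)
        self.decoder = ConvLSTMCell(hid_ch, hid_ch)
        self.head = nn.Conv2d(hid_ch, feat_ch, kernel_size=1)  # back to re-ID dim

    def forward(self, feats):  # feats: (T, B, C, H, W)
        T, B, _, H, W = feats.shape
        h = c = feats.new_zeros(B, self.encoder.hid_ch, H, W)
        for t in range(T):                       # encode the trajectory
            h, c = self.encoder(feats[t], (h, c))
        h2, _ = self.decoder(h, (h, c))          # one decoding step
        return self.head(h2)                     # predicted next re-ID map


# Usage: predict the next re-ID feature map of a 5-frame tracklet.
track = torch.randn(5, 1, 128, 8, 4)             # (T, B, C, H, W)
pred = AppearancePredictor()(track)              # -> (1, 128, 8, 4)
```

The predicted map could then be compared against incoming detections' re-ID features (e.g., via cosine similarity) to score associations, which is one plausible way such an affinity model handles long-term appearance drift.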
