Abstract

This paper presents a new transformer-based model for multi-object tracking (MOT). MOT is a spatiotemporal correlation task among objects of interest and one of the crucial technologies for multi-unmanned-aerial-vehicle (Multi-UAV) systems. The transformer is an encoder-decoder architecture built on self-attention that has been used successfully in natural language processing and is emerging in computer vision. This study proposes the Vision Transformer Tracker (ViTT), which uses a transformer encoder as the backbone and takes images directly as input. Unlike convolutional networks, it models global context at every encoder layer from the first layer onward, which helps address occlusion and complex scenes. Through multi-task learning, the model simultaneously outputs object locations and the corresponding appearance embeddings from a single shared network. Our work demonstrates the effectiveness of transformer-based networks in complex computer vision tasks and paves the way for applying a pure transformer to MOT. We evaluated the proposed model on the MOT16 dataset, achieving 65.7% MOTA, a competitive result compared with other typical multi-object trackers.
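As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a transformer-encoder backbone whose patch-token features feed shared detection and appearance-embedding heads. This is not the authors' code: it assumes PyTorch, and the patch size, feature dimensions, and head layouts are placeholder choices for illustration only.

    # Illustrative sketch (not the authors' implementation) of a ViT-style encoder
    # backbone with joint detection and appearance-embedding heads.
    import torch
    import torch.nn as nn


    class ViTTSketch(nn.Module):
        def __init__(self, img_size=256, patch_size=16, dim=256, depth=6,
                     heads=8, embed_dim=128):
            super().__init__()
            num_patches = (img_size // patch_size) ** 2
            # Patch embedding: split the image into patches and project each to `dim`.
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
            # Transformer encoder backbone: global self-attention at every layer.
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            # Multi-task heads sharing the same backbone features:
            self.box_head = nn.Linear(dim, 4)           # object location (x, y, w, h) per token
            self.cls_head = nn.Linear(dim, 1)           # objectness / detection score per token
            self.reid_head = nn.Linear(dim, embed_dim)  # appearance embedding per token

        def forward(self, images):
            # images: (B, 3, H, W) -> patch tokens: (B, N, dim)
            tokens = self.patch_embed(images).flatten(2).transpose(1, 2)
            tokens = tokens + self.pos_embed
            feats = self.encoder(tokens)
            boxes = self.box_head(feats)    # (B, N, 4)
            scores = self.cls_head(feats)   # (B, N, 1)
            embeds = self.reid_head(feats)  # (B, N, embed_dim)
            return boxes, scores, embeds


    if __name__ == "__main__":
        model = ViTTSketch()
        boxes, scores, embeds = model(torch.randn(1, 3, 256, 256))
        print(boxes.shape, scores.shape, embeds.shape)

In the joint-learning spirit described in the abstract, the detection and embedding heads share one backbone, so a single forward pass yields both object locations and appearance features.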

Highlights

  • The rapid development of deep neural networks (DNN) [1] has increased interest in applying deep learning to computer vision [2,3,4]

  • Our work demonstrates that a transformer can replace convolutional neural networks (CNNs) as the backbone network for complex computer vision tasks, paving the way for complex vision models based on a pure transformer

  • We evaluated our baseline model on MOT16 and compared it with other typical trackers: DeepSORT [49], RAR16 [50], TAP [9], CNNMTT [10], and POI [11]


Summary

Introduction

The rapid development of deep neural networks (DNNs) [1] has increased interest in applying deep learning to computer vision [2,3,4]. MOT models can be divided into separate detection and embedding (SDE) and joint detection and embedding (JDE) approaches. In the SDE paradigm, a detector first locates objects and a separate Re-ID model extracts an appearance embedding for each detection; an association module then links each object to one of the existing tracks through similarity metrics and complex association strategies [9], or creates a new track. This approach is accurate and improves with progress in object detection and Re-ID, but using two separate models to detect objects and extract embeddings ignores the structure that could be shared between them. Some recent work [16,17] combines object detection and Re-ID into a single network through multi-task learning, which greatly reduces the computational cost of MOT.
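To make the association step concrete, the following sketch links detections to existing tracks using cosine distance between appearance embeddings and Hungarian matching, with unmatched detections spawning new tracks. It is a generic illustration, not the specific association strategy of [9]; the cost definition, threshold, and function names are assumptions.

    # Generic association sketch: appearance-similarity cost + Hungarian matching.
    import numpy as np
    from scipy.optimize import linear_sum_assignment


    def associate(track_embeds, det_embeds, max_cost=0.4):
        """track_embeds: (T, D), det_embeds: (N, D) -> (matches, unmatched detections)."""
        # Cosine distance between normalized embeddings as the association cost.
        t = track_embeds / np.linalg.norm(track_embeds, axis=1, keepdims=True)
        d = det_embeds / np.linalg.norm(det_embeds, axis=1, keepdims=True)
        cost = 1.0 - t @ d.T                      # (T, N)

        rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
        matches, unmatched_dets = [], set(range(len(det_embeds)))
        for r, c in zip(rows, cols):
            if cost[r, c] <= max_cost:            # accept only sufficiently similar pairs
                matches.append((r, c))
                unmatched_dets.discard(c)
        # Detections left unmatched would start new tracks, as described above.
        return matches, sorted(unmatched_dets)


    if __name__ == "__main__":
        tracks = np.random.randn(3, 128)
        dets = np.random.randn(4, 128)
        print(associate(tracks, dets))

In practice, trackers often mix motion or IoU terms into the cost as well; the sketch keeps only the appearance term for brevity.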

Methods
Results
Discussion
Conclusion