Abstract

Transformers have demonstrated impressive expressiveness and transfer capability in computer vision. Dense prediction is a fundamental computer vision problem that is more challenging than general image-level prediction tasks. The inherent properties of transformers allow them to process feature representations at a stable, relatively high resolution, which precisely matches the demand of dense prediction tasks for finer-grained and more globally coherent predictions. Moreover, compared with convolutional networks, transformers require minimal inductive bias and permit long-range information interaction. These strengths have driven exciting advances in dense prediction tasks built on transformer networks. This survey provides a comprehensive overview of transformer models with a specific focus on dense prediction, presenting a well-rounded view of state-of-the-art transformer-based approaches with explicit emphasis on pixel-level prediction tasks. We consider transformer variants primarily from the network-architecture perspective and propose a novel taxonomy that organizes these models according to their constructions. We then examine specific optimization strategies that tackle key bottlenecks in dense prediction tasks, explore the commonalities and differences among these works, and provide multiple horizontal comparisons from an experimental point of view. Finally, we summarize several stubborn problems that continue to affect visual transformers and outline possible directions for future development.
