Recently, the Vision Transformer (ViT), which applies the Transformer architecture to various visual detection tasks, has outperformed convolutional neural networks (CNNs). Nonetheless, because the Transformer lacks the ability to represent multiple scales, extracting the local features of a scene and effectively aggregating them into a global descriptor remains a challenging problem. In this paper, we propose a Cross-Spatial Pyramid Transformer (CSPFormer) that learns discriminative global descriptors from multi-scale visual features for efficient visual place recognition. Specifically, we first develop a pyramid CNN module that extracts multi-scale visual feature representations. The extracted multi-scale representations are then fed into multiple connected spatial pyramid Transformer modules that adaptively learn the spatial relationships among descriptors at different scales, where multiple self-attention layers aggregate discriminative local descriptors into a global descriptor. The CNN pyramid features and the Transformer multi-scale features are mutually weighted to form a cross-spatial feature representation. The multiple self-attention layers strengthen long-range dependencies among multi-scale visual descriptors while reducing computational cost. To obtain the final place-matching result accurately, the cosine function is used to compute the spatial similarity between two scenes. Experimental results on public place recognition datasets show that the proposed method achieves state-of-the-art performance on large-scale visual place recognition tasks, reaching 94.7%, 92.8%, 91.3%, and 95.7% average recall at the top 1% of candidates on the KITTI, Nordland, VPRICE, and EuRoC datasets, respectively.
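To make the described pipeline concrete, the sketch below illustrates one plausible reading of the abstract: a CNN pyramid producing multi-scale feature maps, a per-scale Transformer encoder over flattened spatial tokens, a cross-spatial weighting between CNN and Transformer features, and cosine-similarity matching of the pooled global descriptors. This is a minimal illustrative sketch, not the authors' released implementation; all module names, layer counts, and dimensions (e.g. `PyramidCNN`, `dim=128`, a 3-level pyramid) are assumptions made for the example.

```python
# Illustrative CSPFormer-style pipeline (hypothetical; not the paper's official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidCNN(nn.Module):
    """Extracts multi-scale feature maps (assumed 3-level pyramid)."""
    def __init__(self, dim=128):
        super().__init__()
        self.stem = nn.Conv2d(3, dim, 7, stride=2, padding=3)
        self.down1 = nn.Conv2d(dim, dim, 3, stride=2, padding=1)
        self.down2 = nn.Conv2d(dim, dim, 3, stride=2, padding=1)

    def forward(self, x):
        f1 = F.relu(self.stem(x))    # 1/2 resolution
        f2 = F.relu(self.down1(f1))  # 1/4 resolution
        f3 = F.relu(self.down2(f2))  # 1/8 resolution
        return [f1, f2, f3]


class ScaleTransformer(nn.Module):
    """Self-attention over the flattened spatial tokens of one scale."""
    def __init__(self, dim=128, heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, fmap):
        tokens = fmap.flatten(2).transpose(1, 2)  # (B, H*W, C)
        return self.encoder(tokens)


class CSPFormerSketch(nn.Module):
    """Cross-weights CNN pyramid features with Transformer features,
    then pools a single global place descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.pyramid = PyramidCNN(dim)
        self.transformers = nn.ModuleList(ScaleTransformer(dim) for _ in range(3))
        self.gates = nn.ModuleList(nn.Linear(dim, dim) for _ in range(3))

    def forward(self, image):
        descriptors = []
        for fmap, tr, gate in zip(self.pyramid(image), self.transformers, self.gates):
            cnn_tokens = fmap.flatten(2).transpose(1, 2)            # local CNN descriptors
            attn_tokens = tr(fmap)                                  # Transformer descriptors
            fused = torch.sigmoid(gate(cnn_tokens)) * attn_tokens   # cross-spatial weighting
            descriptors.append(fused.mean(dim=1))                   # pool this scale
        return F.normalize(torch.cat(descriptors, dim=-1), dim=-1)  # global descriptor


def place_similarity(desc_a, desc_b):
    """Cosine similarity between two scene descriptors (the matching step)."""
    return F.cosine_similarity(desc_a, desc_b, dim=-1)
```

As a usage example, `place_similarity(model(query_image), model(reference_image))` would score how likely two images depict the same place; ranking references by this score and keeping the top 1% of candidates mirrors the evaluation protocol quoted in the abstract.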