Abstract

Most cross-view image matching algorithms focus on designing high-performing network architectures while ignoring the content of the images themselves. At the same time, ground-view and aerial-view images contain non-fixed targets such as cars, ships, and pedestrians, and differences in viewpoint, orientation, and scale seriously interfere with cross-view matching. This paper proposes a cross-view image matching method with feature enhancement. The method first transforms the aerial image to generate an image aligned with the ground-view domain, establishing a preliminary geometric correspondence between the ground and aerial images. It then combines the rich feature information of a deep network with the edge information of a cross-convolution layer to establish feature correspondences between the ground and aerial images. A feature fusion module increases the model's tolerance to scale differences and reduces the interference that transient, non-fixed targets cause during matching. Finally, max pooling and a feature aggregation strategy combine highly discriminative local features into global descriptors, completing accurate matching between ground and aerial images. Experiments show that the proposed method achieves high accuracy on the widely used public CVUSA dataset, reaching 92.23%, 98.47%, and 99.74% on the top-1, top-5, and top-10 metrics, respectively, and outperforms the original method on this dataset under limited field-of-view and image-centre conditions, better completing the cross-view image matching task.
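To make the pipeline described above concrete, the sketch below illustrates two of its generic building blocks: warping a square aerial image into a polar, panorama-like layout to roughly align it with the ground view, and evaluating retrieval by top-k recall over global descriptors. This is a minimal illustration under assumptions, not the paper's implementation: the exact transform, the network that produces the descriptors, and the function names polar_transform and recall_at_k are placeholders introduced here for clarity.

```python
import numpy as np


def polar_transform(aerial, height=128, width=512):
    """Warp a square aerial image into a polar (panorama-like) layout.

    A generic polar resampling (assumption, not the paper's exact transform):
    output rows sweep from the image edge (top row) toward the centre
    (bottom row), and output columns sweep the full 360 degrees.
    """
    s = aerial.shape[0]                      # assume a square S x S input
    i = np.arange(height).reshape(-1, 1)     # output rows
    j = np.arange(width).reshape(1, -1)      # output columns
    radius = (s / 2.0) * (height - i) / height
    theta = 2.0 * np.pi * j / width
    x = s / 2.0 + radius * np.sin(theta)     # source column
    y = s / 2.0 - radius * np.cos(theta)     # source row
    x = np.clip(np.round(x).astype(int), 0, s - 1)
    y = np.clip(np.round(y).astype(int), 0, s - 1)
    return aerial[y, x]                      # nearest-neighbour sampling


def recall_at_k(ground_desc, aerial_desc, k):
    """Top-k recall: ground descriptor i should retrieve aerial descriptor i."""
    # L2-normalise so the dot product equals cosine similarity
    g = ground_desc / np.linalg.norm(ground_desc, axis=1, keepdims=True)
    a = aerial_desc / np.linalg.norm(aerial_desc, axis=1, keepdims=True)
    sims = g @ a.T
    top_k = np.argsort(-sims, axis=1)[:, :k]
    hits = (top_k == np.arange(len(g)).reshape(-1, 1)).any(axis=1)
    return hits.mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    aerial_img = rng.random((256, 256, 3))
    panorama_like = polar_transform(aerial_img)       # shape (128, 512, 3)

    # Stand-in global descriptors; in the real pipeline these would come
    # from the feature-enhanced network after max pooling and aggregation.
    ground = rng.random((100, 512))
    aerial = ground + 0.1 * rng.random((100, 512))    # noisy matching pairs
    print("top-1 recall:", recall_at_k(ground, aerial, 1))
```

In practice, the learned global descriptors replace the random stand-ins, and recall is reported at k = 1, 5, and 10 as in the results quoted above.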
