Abstract

Recently, there have been rapid advances in high-resolution remote sensing image retrieval, which plays an important role in remote sensing data management and utilization. For content-based remote sensing image retrieval, low-dimensional, representative, and discriminative features are essential to ensure good retrieval accuracy and speed. Dimensionality reduction is one of the important means of improving feature quality in image retrieval, and LargeVis is an effective algorithm specifically designed for big data visualization. Here, an extended LargeVis (E-LargeVis) dimensionality reduction method for high-resolution remote sensing image retrieval is proposed. E-LargeVis reduces the dimensionality of a single high-dimensional sample by modeling the implicit mapping between LargeVis high-dimensional inputs and their low-dimensional embeddings with support vector regression. On this basis, an effective high-resolution remote sensing image retrieval method is proposed to obtain more representative and discriminative deep features. First, fully connected layer features are extracted using a channel attention-based ResNet50 as the backbone network. Then, E-LargeVis reduces the dimensionality of the fully connected features to obtain a low-dimensional discriminative representation. Finally, the L2 distance is computed as the similarity measure to retrieve high-resolution remote sensing images. Experimental results on four high-resolution remote sensing image datasets (UCM, RS19, RSSCN7, and AID) show that, across various convolutional neural network architectures, the proposed E-LargeVis effectively improves retrieval performance, far exceeding other dimensionality reduction methods.
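The out-of-sample projection idea behind E-LargeVis can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature values are random stand-ins, the dimensions are arbitrary, scikit-learn's SVR is assumed as the regressor, and the low-dimensional targets would in practice come from running LargeVis on the gallery features.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Stand-in data: 100 gallery images with 2048-D features (e.g. ResNet50
# fc-layer outputs) and a precomputed 8-D LargeVis embedding of them.
X_high = rng.normal(size=(100, 2048))
Y_low = rng.normal(size=(100, 8))  # would come from LargeVis in practice

# E-LargeVis idea: learn the implicit high-to-low mapping with SVR so a
# single new sample can be projected without rerunning LargeVis.
mapper = MultiOutputRegressor(SVR(kernel="rbf"))
mapper.fit(X_high, Y_low)

# Project an unseen query feature and retrieve by L2 distance.
query = rng.normal(size=(1, 2048))
q_low = mapper.predict(query)            # shape (1, 8)
dists = np.linalg.norm(Y_low - q_low, axis=1)
ranking = np.argsort(dists)              # nearest gallery images first
```

The key property this sketch captures is that LargeVis itself has no natural out-of-sample extension, whereas the fitted regressor can embed a single query in one forward pass.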

Highlights

  • With the rapid development of high-resolution remote sensing and ground observation technology over recent years, the quantity of remote sensing imagery data has increased exponentially

  • Content-based image retrieval (CBIR) [1] as a mainstream retrieval solution was proposed in the 1990s and has gradually developed; this has been widely applied in high-resolution remote sensing image retrieval (RSIR)

  • An extended LargeVis (E-LargeVis) dimensionality reduction method was proposed for high-resolution RSIR


Introduction

With the rapid development of high-resolution remote sensing and Earth observation technology in recent years, the quantity of remote sensing imagery has grown exponentially. Content-based image retrieval (CBIR) [1], proposed in the 1990s as a mainstream retrieval solution, has developed steadily and is now widely applied to high-resolution remote sensing image retrieval (RSIR). CBIR comprises two essential components, feature extraction and similarity measurement: image content is represented as image features, and retrieval results are obtained by measuring feature similarity. Given the diversity and complexity of image content, the extracted features are usually high-dimensional so that they can describe the content effectively. If high-dimensional data are applied directly to image retrieval, the "curse of dimensionality" arises, making practical application difficult. The cause is that real data distributions become extremely sparse in high-dimensional spaces, which introduces large deviations when measuring feature similarity and makes feature processing inefficient. A good dimensionality reduction method can effectively remove redundancy from the data while ensuring that retrieval performance degrades little, or even improves.
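The distance-concentration effect behind the "curse of dimensionality" can be illustrated with a small experiment. The uniform random data and the specific dimensions below are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def distance_contrast(dim, n=2000):
    """Spread of distances (max - min) from a random query to n uniform
    points in [0, 1]^dim, relative to the mean distance. A small value
    means all points look almost equally far away."""
    points = rng.random((n, dim))
    query = rng.random(dim)
    d = np.linalg.norm(points - query, axis=1)
    return (d.max() - d.min()) / d.mean()

for dim in (2, 32, 512):
    print(dim, round(distance_contrast(dim), 3))
```

As the dimension grows, the relative contrast between the nearest and farthest points shrinks, so L2-based similarity ranking becomes unreliable; this is exactly what dimensionality reduction aims to mitigate before retrieval.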
