Abstract

Object detection in optical remote sensing images remains a challenging task because of the complexity of the images. Two problems persist: the diversity and complexity of geospatial object appearance, and the insufficient understanding of geospatial object spatial structure information. In this paper, we propose a novel multi-model decision fusion framework that takes contextual information and multi-region features into account to address these problems. First, a contextual information fusion sub-network is designed to fuse both local contextual features and object-object relationship contextual features, handling the diversity and complexity of geospatial object appearance. Second, a part-based multi-region fusion sub-network is constructed to merge multiple parts of an object, obtaining more spatial structure information about the object and addressing the insufficient understanding of that structure. Finally, a decision fusion is made over all sub-networks to improve the stability and robustness of the model and achieve better detection performance. Experimental results on a publicly available ten-class data set show that the proposed method is effective for geospatial object detection.
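The paper does not include code; as a rough illustration of what the final decision-fusion step can look like, the following minimal Python sketch pools detections from the three sub-networks and reduces them with greedy non-maximum suppression, so the highest-confidence box in each overlapping group survives. The fusion rule, function names, and thresholds here are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of decision-level fusion over three detector outputs.
# Hypothetical rule: pool all sub-network detections, then apply greedy NMS.
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def decision_fusion(detections_per_model, iou_thresh=0.5):
    """Pool (box, score) detections from all sub-networks and keep the
    highest-scoring box in each overlapping group (greedy NMS)."""
    pooled = sorted((d for dets in detections_per_model for d in dets),
                    key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pooled:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

# Example: outputs of the baseline, contextual, and part-based sub-networks.
model_outputs = [
    [([10, 10, 50, 50], 0.9)],
    [([12, 11, 52, 49], 0.8)],   # overlaps the first box, gets suppressed
    [([100, 100, 140, 150], 0.7)],
]
print(decision_fusion(model_outputs))
```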

Highlights

  • Optical remote sensing images with high spatial resolution are now obtained conveniently thanks to significant progress in remote sensing technology, enabling a wide range of applications such as land planning, disaster control, urban monitoring, and traffic planning [1,2,3,4]

  • We propose a sub-network, based on a gated recurrent unit (GRU), that fuses local contextual information with object-object relationship contextual information to form a discriminative feature representation, which can effectively recognize objects and reduce false detections between different types of objects with similar appearance (a sketch follows this list)

  • We believe that local contextual information and object-object relationship context are very useful for object detection in optical remote sensing images
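The GRU-based fusion in the first highlight can be pictured with a short PyTorch sketch: treat the object feature, the local-context feature, and the relationship feature as a three-step sequence and read off the GRU's final hidden state as the fused representation. The sequence ordering, feature dimensions, and module name below are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch (not the paper's exact design): fuse an object's RoI
# feature with its local-context and object-object relationship features
# by feeding them as a short sequence through a GRU.
import torch
import torch.nn as nn

class ContextFusionGRU(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=1024):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, obj_feat, local_ctx_feat, relation_feat):
        # Stack the three feature vectors into a length-3 sequence:
        # shape (batch, seq_len=3, feat_dim).
        seq = torch.stack([obj_feat, local_ctx_feat, relation_feat], dim=1)
        _, h_n = self.gru(seq)   # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)    # fused feature: (batch, hidden_dim)

# Toy usage with random RoI features for a batch of 4 proposals.
fuser = ContextFusionGRU()
obj = torch.randn(4, 1024)
ctx = torch.randn(4, 1024)
rel = torch.randn(4, 1024)
print(fuser(obj, ctx, rel).shape)  # torch.Size([4, 1024])
```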


Summary

Introduction

Optical remote sensing images with high spatial resolution are obtained conveniently due to the significant progress in remote sensing technology, which leads to a wide range of applications such as land planning, disaster control, urban monitoring, and traffic planning [1,2,3,4]. We select a CNN-based approach to extract features for object detection in optical remote sensing images. Prior work introduced the geospatial deformable part-based model (GDPBM), which divides a geospatial object with arbitrary orientation into several parts to achieve good detection performance in optical remote sensing images. Focusing on the insufficient understanding of geospatial object spatial structure information, we construct a part-based multi-region feature fusion sub-network. Our contributions are threefold. (1) We propose a local contextual information and object-object relationship contextual information fusion sub-network based on a gated recurrent unit (GRU) to form a discriminative feature representation, which can effectively recognize objects and reduce false detections between different types of objects with similar appearance. (2) We construct a part-based multi-region fusion sub-network that merges multiple parts of an object to obtain more spatial structure information about the object. (3) We propose a multi-model decision fusion strategy to fuse the detection results of the three sub-networks, which can improve the stability and robustness of the model and obtain better detection performance. The last section concludes this paper with a discussion of the results.
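As a rough illustration of the part-based multi-region idea, the sketch below splits a proposal box into a grid of part regions, pools each part with torchvision's RoIAlign (the section outline below also lists an RoIAlign layer), and merges the part features element-wise. The 2x2 grid and the max-merge rule are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of part-based multi-region fusion: split a proposal box
# into a grid of parts, pool each part with RoIAlign, merge by max.
import torch
from torchvision.ops import roi_align

def split_into_parts(box, grid=(2, 2)):
    """Split one [x1, y1, x2, y2] box into grid[0] x grid[1] sub-boxes."""
    x1, y1, x2, y2 = box
    w, h = (x2 - x1) / grid[1], (y2 - y1) / grid[0]
    return [[x1 + j * w, y1 + i * h, x1 + (j + 1) * w, y1 + (i + 1) * h]
            for i in range(grid[0]) for j in range(grid[1])]

feature_map = torch.randn(1, 256, 50, 50)  # backbone features (stride 1 here)
box = [10.0, 10.0, 40.0, 40.0]

# Leading 0 marks which image in the batch each part box belongs to.
parts = torch.tensor([[0.0] + p for p in split_into_parts(box)])
part_feats = roi_align(feature_map, parts, output_size=(7, 7))  # (4, 256, 7, 7)

fused = part_feats.max(dim=0).values  # merged part feature: (256, 7, 7)
print(fused.shape)
```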

Geospatial Object Detection
Contextual Information Fusion
The RoIAlign Layer
Proposed Framework
Baseline Sub-Network
Part-Based Multi-Region Fusion Sub-Network
Multi-Model Decision Fusion Strategy
Experiments and Results
Data Set
Evaluation Metrics
Implementation Details and Parameter Settings
Evaluation of Part-Based Multi-Region Fusion Network
Evaluation of Multi-Model Decision Fusion Strategy
Comparisons with Other Detection Methods
Conclusions
