Abstract

Detecting small, hard-to-distinguish objects in large-scale remote sensing images is of great significance in military and civilian applications, yet it remains a challenging task. In this letter, we propose a specially optimized one-stage network (SOON) that extracts spatial information from high-resolution images by jointly analyzing the feature and semantic information of small objects. The SOON model consists of three parts: feature enhancement, multiscale detection, and feature fusion. The first part is implemented by constructing a receptive field enhancement (RFE) module and incorporating it into the specific parts of the network where the information of small objects mainly resides. The second part is achieved by four detectors with different sensitivities, which access the fused and enhanced features so that the network can make full use of features at different scales. The third part consolidates high-level and low-level features through upsampling, concatenation, and convolution operations to build a feature pyramid structure, which yields strong feature representations and semantic information. In addition, we introduce soft non-maximum suppression (soft-NMS) in the postprocessing stage to preserve accurate bounding boxes for densely arranged objects. A split-and-merge strategy and a multiscale training strategy are also employed. Extensive experiments and thorough analysis are performed on the Northwestern Polytechnical University Very-High-Resolution (NWPU VHR)-10-v2 data set and the airplane, car, and ship (ACS) data set, comparing SOON with several state-of-the-art methods. The satisfactory experimental performance verifies the effectiveness of the design and optimization.
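The soft-NMS postprocessing mentioned above can be sketched as follows. Rather than discarding every box that overlaps a higher-scoring one (as hard NMS does), soft-NMS decays the scores of overlapping boxes, which helps retain valid detections in dense arrangements. The Gaussian decay form, function names, and thresholds below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay overlapping scores instead of suppressing boxes.

    Returns the kept indices (in selection order) and the decayed scores.
    `sigma` and `score_thresh` are assumed hyperparameters.
    """
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        best = np.argmax(scores[idxs])   # highest remaining score
        i = idxs[best]
        keep.append(int(i))
        idxs = np.delete(idxs, best)
        if len(idxs) == 0:
            break
        ious = iou(boxes[i], boxes[idxs])
        # Gaussian decay: heavily overlapping boxes lose score but survive
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)
        idxs = idxs[scores[idxs] > score_thresh]  # drop near-zero scores
    return keep, scores
```

With two heavily overlapping boxes and one distant box, hard NMS would delete the second box outright, whereas this sketch keeps it with a reduced score, so densely packed objects (e.g., parked airplanes or cars) are less likely to be missed.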
