Abstract

Salient object detection in optical remote-sensing images (RSIs) has recently attracted increasing attention. To tackle the challenges posed by RSIs, including large scale variation of objects, cluttered backgrounds, irregular object shapes, and large illumination differences, cutting-edge convolutional neural network (CNN)-based models have been proposed and have achieved encouraging performance. However, the performance of top-level models usually depends on a large model size and high computational cost, which limits their practical application. To remedy this issue, we introduce a fully squeezed multiscale (FSM) module that equips the entire network. Specifically, the FSM module squeezes feature maps from a high dimension to a low dimension and adopts a multiscale strategy to endow the feature characterization with different receptive fields and different contexts. Based on the FSM module, we build the FSM inference network (FSMI-Net) to pop out salient objects from optical RSIs with fewer parameters and a fast inference speed. In particular, the proposed FSMI-Net contains only 3.6M parameters, and its GPU running speed is about 28 fps for <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$384 \times 384$ </tex-math></inline-formula> inputs, which is superior to existing saliency models targeting optical RSIs. Extensive comparisons on two public optical RSI datasets show that FSMI-Net achieves detection accuracy comparable to state-of-the-art models, striking a balance between computational cost and detection performance.
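The abstract describes the FSM module only at a high level: squeeze features to a low channel dimension, then apply a multiscale strategy for different receptive fields. The block below is a minimal, hypothetical PyTorch sketch of that idea, assuming a 1&times;1 convolution for the channel squeeze and parallel dilated 3&times;3 convolutions for the multiscale branches; the paper's actual architecture, channel counts, and fusion scheme may differ.

```python
# Hypothetical sketch of a "fully squeezed multiscale" (FSM)-style block.
# Assumptions (not from the abstract): 1x1 conv squeeze, parallel dilated
# 3x3 convs with dilations (1, 2, 4), channel-wise concatenation as fusion.
import torch
import torch.nn as nn


class FSMBlock(nn.Module):
    def __init__(self, in_ch, squeeze_ch=16, dilations=(1, 2, 4)):
        super().__init__()
        # Squeeze: project high-dimensional features to a low dimension,
        # which keeps the parameter count and FLOPs of the branches small.
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        # Multiscale: dilated convolutions with increasing dilation rates
        # capture different receptive fields / contexts at the same cost.
        self.branches = nn.ModuleList([
            nn.Conv2d(squeeze_ch, squeeze_ch, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        ])
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        outs = [self.relu(branch(s)) for branch in self.branches]
        return torch.cat(outs, dim=1)  # fuse the multiscale contexts


# Example: 64-channel features at the paper's 384x384 input resolution.
x = torch.randn(1, 64, 384, 384)
y = FSMBlock(64)(x)
print(tuple(y.shape))  # (1, 48, 384, 384)
```

With `padding=d` matching `dilation=d`, each dilated branch preserves spatial size, so the branches can be concatenated directly; this is one common way such multiscale modules stay lightweight.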
