Abstract

Due to the continuous progress of urbanization in China, large areas of natural surface have become impervious. Automatic extraction of impervious surface (IS) from high-resolution remote sensing images is important for urban planning and environmental management. Manual identification of IS is time-consuming and laborious, so it is valuable to develop more intelligent recognition methods. In recent years, semantic segmentation models based on convolutional neural networks (CNNs) have made great progress in extracting IS from remote sensing images. However, most existing models focus on improving accuracy and rarely consider computational efficiency. To balance computing resource consumption, computing speed, and segmentation accuracy, we propose a lightweight CNN-based semantic segmentation model, which we name LWIBNet. LWIBNet uses an efficient encoder-decoder structure as its skeleton and connects the encoding and decoding parts through the Skip Layer. Moreover, to reduce the number of parameters and speed up computation, we introduce an improved Squeeze-and-Excitation (SE) module, inverted residuals, and depthwise separable convolution to form the Inv-Bottleneck (IB) module, which serves as the core building block of LWIBNet. In terms of computational complexity, LWIBNet and LWIBNet-TTA have the lowest FLOPs (14.14 G); SegNet has the second lowest, yet its FLOPs are 3.2 times those of LWIBNet (45.05 G vs. 14.14 G). Both the LWIBNet model and classic models are tested and compared on the same dataset. The results show that LWIBNet achieves slightly higher segmentation accuracy at a lower computational cost and with faster computation.
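To illustrate how the components named in the abstract typically fit together, the following is a minimal PyTorch sketch of an inverted-bottleneck block that combines pointwise expansion, depthwise separable convolution, an SE gate, and a residual connection. It is an assumption-based illustration, not the authors' exact IB module: the expansion ratio, SE reduction factor, and layer ordering are chosen for demonstration only.

```python
# Illustrative sketch only (not the LWIBNet implementation): an inverted
# residual block with depthwise separable convolution and an SE module.
# Expansion ratio, SE reduction, and activations are assumptions.
import torch
import torch.nn as nn


class SqueezeExcite(nn.Module):
    """Channel attention: global average pool -> bottleneck FC -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class InvBottleneck(nn.Module):
    """Inverted residual: pointwise expand -> depthwise conv -> SE -> project."""
    def __init__(self, in_ch: int, out_ch: int, expand: int = 4, stride: int = 1):
        super().__init__()
        mid = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, kernel_size=1, bias=False),   # pointwise expansion
            nn.BatchNorm2d(mid),
            nn.ReLU6(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, stride=stride,
                      padding=1, groups=mid, bias=False),        # depthwise convolution
            nn.BatchNorm2d(mid),
            nn.ReLU6(inplace=True),
            SqueezeExcite(mid),                                   # channel attention
            nn.Conv2d(mid, out_ch, kernel_size=1, bias=False),    # linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out


if __name__ == "__main__":
    x = torch.randn(1, 32, 128, 128)
    print(InvBottleneck(32, 32)(x).shape)  # torch.Size([1, 32, 128, 128])
```

In such a design, the depthwise and pointwise convolutions keep the parameter count and FLOPs low, while the SE gate adds channel-wise attention at negligible cost, which is consistent with the efficiency goals described above.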
