When deep learning semantic segmentation models are used to extract water bodies from high-resolution remote sensing images, the perception and extraction of multiscale features are critical factors affecting classification accuracy. Training at a single scale yields one-sided extraction results, leading to "reverse" errors and poorly expressed detail. Fusing multiscale features for pixel-level classification is therefore key to accurate segmentation. Based on this idea, this paper proposes a deep learning scheme for the fine extraction of water bodies from imagery. The workflow consists of multiscale splitting of the images, a restructured deep learning network, multiscale joint prediction, and postprocessing optimization with a fully connected conditional random field (CRF). Following the scale-space concept in remote sensing, the images are first split hierarchically at multiple scales. The structure of DeepLabV3+, an advanced semantic segmentation model, is then modified so that its feature output layer produces a weighted fusion of multiscale features. At the back end of the network, the fully connected CRF refines the water boundary details. The proposed multiscale training strategy adapts the model to feature extraction from images at different scales, and assigning different weights to the outputs at each scale during fusion controls how strongly each scale influences the water extraction result. Extensive water extraction experiments on GF-1 remote sensing images show that the method significantly improves extraction accuracy and demonstrate its effectiveness.
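To illustrate the weighted multiscale output fusion described above, the following is a minimal PyTorch sketch. It assumes the per-scale network outputs are class score maps that are upsampled to a common resolution, converted to probabilities, and combined with scalar weights; the function name, scales, and weight values are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fuse_multiscale_outputs(score_maps, weights, out_size):
    """Weighted fusion of per-scale segmentation scores (illustrative sketch).

    score_maps: list of tensors shaped (N, C, H_i, W_i), one per scale.
    weights:    list of scalar fusion weights, one per scale.
    out_size:   (H, W) target resolution of the fused prediction.
    """
    fused = 0.0
    for scores, w in zip(score_maps, weights):
        # Resample each scale's class scores to the common output resolution.
        up = F.interpolate(scores, size=out_size, mode="bilinear", align_corners=False)
        # Convert to per-pixel class probabilities before weighting.
        fused = fused + w * torch.softmax(up, dim=1)
    # Final water mask: per-pixel argmax over the weighted, fused probabilities.
    return fused.argmax(dim=1)

# Example with three hypothetical scales and weights (values are placeholders).
outputs = [torch.randn(1, 2, s, s) for s in (64, 128, 256)]  # 2 classes: background, water
mask = fuse_multiscale_outputs(outputs, weights=[0.2, 0.3, 0.5], out_size=(256, 256))
print(mask.shape)  # torch.Size([1, 256, 256])
```

In this sketch, the scalar weights play the role described in the abstract: they control how strongly each scale's prediction influences the final water extraction result before CRF postprocessing.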