Abstract

Black and odorous water (BOW) is a common issue in rapidly urbanizing developing countries. Existing methods for extracting BOW from remote sensing images focus mainly on spectral information and ignore important spatial characteristics such as texture, context, and orientation. Deep learning has emerged as a powerful approach for BOW extraction, but its effectiveness is hindered by the limited amount of labeled data and the small proportion of BOW pixels. In this paper, we propose a fully convolutional adversarial network (FANet) for end-to-end pixel-level semantic segmentation of BOW. FANet combines a fully convolutional network (FCN) with a larger receptive field and a perceptual loss, and employs adversarial learning to enhance training stability in the absence of sufficient data labels. The Normalized Difference BOW Index, which reflects the higher spectral reflectance of BOW in the near-infrared band, is used as an input to FANet together with the RGB bands. In addition, we create a standard BOW dataset containing 5,100 Gaofen-2 image patches of 224 × 224 pixels. Evaluation of FANet on the BOW dataset using intersection over union and F1-score demonstrates its superiority over popular models such as FCN, U-Net, and SegNet. FANet successfully preserves the integrity, continuity, and boundaries of BOW, achieving superior performance in both the quantity and quality of BOW extraction.
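To illustrate the input construction described above, the following is a minimal sketch of stacking RGB bands with a normalized-difference water index into a 4-channel patch. The abstract does not give the exact band combination of the Normalized Difference BOW Index, so the NIR/green pairing below is an assumption, and the function names (`normalized_difference_index`, `build_fanet_input`) are hypothetical helpers, not the authors' code.

```python
import numpy as np


def normalized_difference_index(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Generic normalized-difference index: (band_a - band_b) / (band_a + band_b)."""
    num = band_a.astype(np.float32) - band_b.astype(np.float32)
    den = band_a.astype(np.float32) + band_b.astype(np.float32)
    # Guard against division by zero where both bands are zero.
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)


def build_fanet_input(red: np.ndarray, green: np.ndarray, blue: np.ndarray,
                      nir: np.ndarray) -> np.ndarray:
    """Stack RGB with a BOW-sensitive index into one 4-channel input patch.

    ASSUMPTION: the index is computed here from the NIR and green bands in the
    usual normalized-difference form; the paper's actual band pair may differ.
    """
    ndbwi = normalized_difference_index(nir, green)
    rgb = np.stack([red, green, blue], axis=-1).astype(np.float32) / 255.0
    return np.concatenate([rgb, ndbwi[..., None]], axis=-1)  # shape (H, W, 4)


if __name__ == "__main__":
    # Example with a 224 x 224 patch, matching the dataset tile size.
    h = w = 224
    bands = [np.random.randint(0, 256, (h, w), dtype=np.uint8) for _ in range(4)]
    x = build_fanet_input(*bands)
    print(x.shape)  # (224, 224, 4)
```

The 4-channel array would then be fed to the segmentation network in place of a plain RGB input.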

