Abstract

Efficient and timely identification of oil spill areas is crucial for ocean environmental protection. Synthetic aperture radar (SAR) is widely used in oil spill detection due to its all-weather monitoring capability, and existing deep-learning-based oil spill detection methods, which mainly rely on the classical U-Net framework, have achieved impressive results. However, SAR images exhibit high noise, blurry boundaries, and irregularly shaped target areas, as well as speckle and shadows, which degrade the performance of existing algorithms. In this paper, we propose a novel network architecture that achieves more precise segmentation of oil spill areas by reintroducing rich semantic contextual information before producing the final segmentation mask. Specifically, the proposed architecture re-fuses feature maps from different levels at the decoder end. We design a multi-convolutional layer (MCL) module to extract basic feature information from SAR images, and a feature extraction module (FEM) that further extracts and fuses the feature maps generated by the U-Net decoder at different levels. Through these operations, the network learns rich global and local contextual information, enables sufficient interaction of feature information across stages, and improves its ability to recognize complex textures and blurry boundaries, thereby increasing segmentation accuracy on SAR images. Compared with many U-Net-based segmentation networks, our method shows promising results and achieves state-of-the-art performance on multiple evaluation metrics.
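The abstract does not include implementation details, so the following PyTorch sketch only illustrates the general idea of re-fusing multi-level decoder feature maps before the segmentation head. All module definitions, channel counts, and tensor shapes here are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of multi-level decoder feature fusion (FEM-like idea)
# and a stacked-convolution block (MCL-like idea). Names and shapes are
# assumptions; the paper does not publish an implementation here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiConvLayer(nn.Module):
    """Assumed MCL: stacked 3x3 convolutions extracting basic features."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


class FeatureFusion(nn.Module):
    """Assumed FEM: upsample decoder maps from several levels to a common
    resolution, concatenate them, and fuse with a 1x1 convolution before
    the final segmentation head."""
    def __init__(self, level_channels: list[int], out_ch: int):
        super().__init__()
        self.fuse = nn.Conv2d(sum(level_channels), out_ch, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        target = feats[0].shape[-2:]  # finest decoder level sets the size
        ups = [feats[0]] + [
            F.interpolate(f, size=target, mode="bilinear", align_corners=False)
            for f in feats[1:]
        ]
        return self.fuse(torch.cat(ups, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 1, 128, 128)    # single-channel SAR image (assumed)
    f1 = MultiConvLayer(1, 64)(x)      # basic features at full resolution
    f2 = torch.randn(1, 128, 64, 64)   # stand-ins for deeper decoder outputs
    f3 = torch.randn(1, 256, 32, 32)
    fem = FeatureFusion([64, 128, 256], out_ch=64)
    head = nn.Conv2d(64, 1, kernel_size=1)  # binary oil-spill mask logits
    mask_logits = head(fem([f1, f2, f3]))
    print(mask_logits.shape)           # torch.Size([1, 1, 128, 128])
```

The design choice illustrated is that the deeper, semantically richer decoder maps are brought back to the finest resolution and combined with the shallow, detail-preserving map, so the final mask prediction sees both global context and local boundary detail.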
