Abstract

Sea fog is a weather hazard along coasts and over the ocean that seriously threatens maritime activities. Among deep learning approaches, convolutional neural networks (CNNs) struggle to fully capture global context information in sea fog research because of their limited receptive fields, so recognized sea fog edges tend to be blurred. To address these problems, this paper proposes an ECA-TransUnet model for daytime sea fog recognition that combines a CNN with a transformer. By designing a two-branch feed-forward network (FFN) module and introducing an efficient channel attention (ECA) module, the model can effectively account for long-range pixel interactions and feature-channel information, capturing the global contextual information of sea fog data. Meanwhile, to address the shortage of existing sea fog detection datasets, we investigated sea fog events over the Yellow Sea, the Bohai Sea, and their territorial waters, extracted remote sensing images from Moderate Resolution Imaging Spectroradiometer (MODIS) data at the corresponding times, and combined Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) data, cloud and sea fog texture features, and waveband feature information to produce a manually annotated sea fog dataset. Our experiments show that the proposed model achieves 94.5% accuracy and an 85.8% F1 score. Compared with existing models that rely solely on CNNs, such as UNet, FCN8s, and DeepLabV3+, it achieves state-of-the-art performance in sea fog recognition.
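The ECA module mentioned above rescales feature channels using a lightweight gate: global average pooling per channel, a 1-D convolution across channels (avoiding dimensionality reduction), and a sigmoid. The following NumPy sketch illustrates that data flow only; the convolution weights here are fixed for illustration, whereas in the actual model they are learned, and the kernel size `k` is a hyperparameter not specified in the abstract.

```python
import numpy as np

def eca_attention(x, k=3):
    """Sketch of Efficient Channel Attention (ECA) for one feature map.

    x: feature map of shape (C, H, W).
    k: kernel size of the 1-D convolution over the channel dimension.
    """
    C, H, W = x.shape
    # 1. Global average pooling: one scalar descriptor per channel -> (C,)
    y = x.mean(axis=(1, 2))
    # 2. 1-D convolution across neighboring channels, padded so the
    #    output stays length C (weights are learned in the real module;
    #    a fixed averaging kernel is used here purely for illustration).
    pad = k // 2
    y_padded = np.pad(y, pad, mode="edge")
    w = np.full(k, 1.0 / k)
    attn = np.array([np.dot(y_padded[i:i + k], w) for i in range(C)])
    # 3. Sigmoid gate in (0, 1), broadcast back over H and W to rescale
    #    each input channel.
    gate = 1.0 / (1.0 + np.exp(-attn))
    return x * gate[:, None, None]

# Usage: gate an 8-channel feature map; shape is preserved.
feat = np.random.rand(8, 16, 16)
out = eca_attention(feat, k=3)
print(out.shape)  # (8, 16, 16)
```

Because the gate involves no fully connected layers, ECA adds only a handful of parameters per block, which is why it can be inserted into a CNN-transformer hybrid at little cost.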
