Abstract

The Suez Canal, strategically located as the shortest international sea route between Asia and Europe, plays a crucial role in facilitating the transportation of goods. However, traffic disruptions within the Canal pose a serious threat to global trade, as evidenced by the container ship Ever Given, which ran aground in the Suez Canal on March 23, 2021. This event led to a complete blockade of the Canal that lasted for six days, leaving a backlog of ships waiting to transit. It highlights the need to monitor the Canal to prevent similar disturbances in the future. In this paper, we propose a CNN-based attention-guided self-learning framework for ship detection from 3 m high-resolution COSMO-SkyMed SAR imagery acquired over the Egyptian Suez Canal in April 2021. We introduce a self-learning augmented segmentation (SLAS) technique that augments the dataset with new ship samples by pseudo-labeling an unlabeled dataset. We also present the Attention-guided Feature Refinement (AFR) module to extract more significant semantic features and contextual information, especially for ships of varying sizes in SAR images. Finally, the output of the AFR module is fed into a Region Proposal Network (RPN) to generate a set of proposal anchors, which are then used in a Deep Detection Network (DDN) for ship classification and localization. Our experimental results demonstrate that the proposed method outperforms current state-of-the-art detection models in terms of detection accuracy, particularly in complex coastal scenes, achieving up to 87% mean average precision (mAP).
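The self-learning idea described above (pseudo-labeling an unlabeled dataset to augment training data) can be sketched as a confidence-thresholded selection loop. This is a minimal illustration, not the authors' SLAS implementation; the function names, data layout, and the 0.9 threshold are all assumptions for clarity.

```python
# Hypothetical sketch of the SLAS-style self-learning step: a trained
# detector scores chips from an unlabeled SAR dataset, and only detections
# above a confidence threshold are promoted to pseudo-labels that augment
# the labeled training set for the next training round.

def select_pseudo_labels(predictions, threshold=0.9):
    """Keep only high-confidence detections as pseudo-labels.

    predictions: list of (image_id, box, score) tuples from the detector,
                 where box is (x1, y1, x2, y2) in pixel coordinates.
    Returns a list of (image_id, box) pseudo-annotations.
    """
    return [(img, box) for img, box, score in predictions if score >= threshold]

def self_learning_round(labeled, unlabeled_predictions, threshold=0.9):
    """One self-learning round: merge pseudo-labels into the labeled set."""
    pseudo = select_pseudo_labels(unlabeled_predictions, threshold)
    return labeled + pseudo

# Example: two confident detections become pseudo-labels; the low-score
# detection is discarded rather than risk polluting the training set.
labeled = [("img_000", (10, 10, 40, 25))]
preds = [
    ("img_101", (5, 8, 30, 20), 0.97),
    ("img_102", (12, 40, 50, 60), 0.58),   # below threshold -> dropped
    ("img_103", (3, 3, 22, 14), 0.93),
]
augmented = self_learning_round(labeled, preds, threshold=0.9)
print(len(augmented))  # 3: one original label + two pseudo-labels
```

In practice the threshold trades off label noise against dataset growth: a higher cutoff yields fewer but cleaner pseudo-labels, which matters in cluttered coastal SAR scenes where false alarms are common.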
