Abstract

Sea ice monitoring plays a critical role in an icebreaker's voyage, where standard procedures are in place to document and report sea ice types and concentration. In this paper, we propose semantic segmentation for the automated detection and classification of sea ice types using camera feeds onboard an icebreaker. For this purpose, we evaluate the SegNet and PSPNet101 neural network architectures, which have proven successful in navigation and mapping applications such as self-driving cars, remote sensing, and medical imagery. The networks are used to segment images from two custom datasets: a sea ice detection dataset with four classes (ice, ocean, vessel, and sky) and a sea ice classification dataset with eight classes (ocean, vessel, sky, lens artifacts, first-year ice, new ice, grey ice, and multiyear ice). Both datasets were created from two months of footage recorded onboard the icebreaker Nathaniel B. Palmer during an Antarctic expedition. For sea ice detection, a subset of the imagery was labeled to produce a 240-image training set, and the resulting model achieved 98% classification accuracy on a 26-image test set. The sea ice classification dataset consists of 1,090 labeled images, with accuracies of 98.3% or greater achieved for all ice types on a 104-image test set. These results validate the applicability of deep learning methods for sea ice detection and classification using images captured onboard an icebreaker, an approach that can be further enhanced by incorporating additional ice types and operational data to support marine navigation and mapping applications.
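
As a rough, hypothetical sketch (not code from the paper), the four-class sea ice detection task could be framed as pixel-wise semantic segmentation with an off-the-shelf backbone in PyTorch. DeepLabv3 stands in here for the SegNet/PSPNet101 architectures the paper evaluates, and the class list, hyperparameters, and dummy data are illustrative assumptions rather than values from the study:

    # Hypothetical sketch: segment icebreaker camera frames into four classes
    # (ice, ocean, vessel, sky). DeepLabv3 is a stand-in for SegNet/PSPNet101;
    # all hyperparameters are illustrative, not taken from the paper.
    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    NUM_CLASSES = 4  # ice, ocean, vessel, sky

    model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, masks):
        # images: float tensor (B, 3, H, W), normalized camera frames
        # masks:  long tensor  (B, H, W), per-pixel class indices in [0, 3]
        model.train()
        optimizer.zero_grad()
        logits = model(images)["out"]      # (B, NUM_CLASSES, H, W)
        loss = criterion(logits, masks)
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        # Random tensors stand in for labeled frames from the detection dataset.
        dummy_images = torch.rand(2, 3, 256, 256)
        dummy_masks = torch.randint(0, NUM_CLASSES, (2, 256, 256))
        print("loss:", train_step(dummy_images, dummy_masks))

The eight-class classification dataset would follow the same pattern with NUM_CLASSES set to 8 and per-ice-type labels in the masks.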
