Abstract

Accurate ship tracking is essential to the safety of maritime activities, especially given the growing requirements of autonomous navigation applications, e.g., autonomous surface vehicles (ASVs). While deep-learning-based object-tracking methods prevail in autonomous driving thanks to their environmental robustness and high tracking accuracy, few deep tracking models exist for maritime ships. The main reason is the lack of qualified ship datasets, especially datasets with ship-based perspectives. Therefore, this paper provides LMD-TShip (Large Maritime Dataset), a large-scale, high-definition dataset for ship tracking. The dataset includes five types of ships: cargo ships, fishing ships, passenger ships, speed boats, and unmanned ships. Specifically, LMD-TShip consists of 40,240 frames in 191 videos, each carefully and manually annotated with bounding boxes in YOLO format. Moreover, 13 attributes are used to label the videos, e.g., scale variation (SV) and occlusion (OCC), essentially covering the challenges of maritime ship tracking. A detailed analysis is then carried out to demonstrate the characteristics of LMD-TShip. Finally, experiments with five baseline short-term tracking models, e.g., ECO and SiamRPN++, are performed on the dataset; the results demonstrate its good evaluation ability, providing an effective means for training and testing tracking models related to maritime ships.
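The abstract notes that each frame is annotated with bounding boxes in YOLO format. As a minimal sketch of how such annotations are consumed, the snippet below converts one YOLO-style annotation line to pixel coordinates, assuming the standard YOLO text convention (one `class_id x_center y_center width height` line per object, coordinates normalized to [0, 1] relative to image size); the function name and example values are illustrative, not part of the dataset's tooling.

```python
def yolo_to_pixel(line: str, img_w: int, img_h: int):
    """Convert one YOLO-format annotation line to a pixel bounding box.

    Assumed YOLO convention: "<class_id> <x_center> <y_center> <width> <height>",
    with all coordinates normalized to [0, 1] relative to the image size.
    Returns (class_id, x_min, y_min, x_max, y_max) in pixels.
    """
    cls, xc, yc, w, h = line.split()
    # Scale normalized values back to pixel units.
    xc_px, yc_px = float(xc) * img_w, float(yc) * img_h
    w_px, h_px = float(w) * img_w, float(h) * img_h
    # Shift from center-based to corner-based coordinates.
    x_min = round(xc_px - w_px / 2)
    y_min = round(yc_px - h_px / 2)
    return int(cls), x_min, y_min, x_min + round(w_px), y_min + round(h_px)

# Example: a ship centered in a 1920x1080 frame, half the frame in size
print(yolo_to_pixel("0 0.5 0.5 0.5 0.5", 1920, 1080))
# (0, 480, 270, 1440, 810)
```

This center-plus-size, resolution-independent encoding is what lets the same annotation file serve frames rescaled to different resolutions during training.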

Highlights

  • As a part of autonomous driving, autonomous surface vehicles (ASVs) are arising as a new research field

  • To standardize the dataset, inspired by [16], [43], 13 attributes were deployed to describe the videos based on the characteristics of maritime ships, including scale variation (SV), aspect-ratio change (ARC), fast and irregular motion (FIM), low resolution (LR), out-of-view (OV), illumination variation (IV), image quality (IQ), max scale variation (SVM), camera motion (CM), background clutter (BC), similar object (SOB), occlusion (OCC), and viewpoint change (VC)

Summary

INTRODUCTION

As a part of autonomous driving, ASVs are emerging as a new research field. Achieving autonomous navigation requires ASVs to accurately sense the environment and track surrounding objects, especially ships, to ensure sailing safety. The images in Seagull were captured from a bird's-eye view, which makes it difficult to train deep trackers for applications with totally different perspectives, such as ASVs. To develop a dataset with ship-based perspectives, the Singapore Maritime Dataset (SMD) [12], which collected ship images on-shore and on-board, was considered. Nevertheless, a large-scale maritime ship dataset with ship-based perspectives that can be used for training and testing deep trackers is still urgently needed. Although several traditional tracking methods exist, such as hidden Markov models [39], Kalman filters [10], [40], and optical flow [41], state-of-the-art deep-learning-based trackers specially designed for ship tracking remain rare due to the lack of an available dataset.

DATA ACQUISITION
EXPERIMENT
RESULTS AND ANALYSIS
Findings
CONCLUSION