Abstract

The identification and classification of obstacles in navigable and non-navigable regions, together with the measurement of distances, are crucial topics of investigation in autonomous navigation for unmanned surface vehicles (USVs). Currently, USVs rely mostly on LiDAR and ultrasound to detect obstacles on the water surface. However, these approaches cannot accurately discern the precise nature or class of those obstacles. Moreover, the limited optical range of unmanned vessels prevents them from comprehensively perceiving the surrounding environment. A cooperative USV-UAV system is therefore proposed to ensure the visual perception capability of USVs. Multi-object recognition, semantic segmentation, and obstacle ranging from both USV and unmanned aerial vehicle (UAV) perspectives are selected to validate the performance of the cooperative system. The You Only Look Once-X (YOLOX) model, the Proportional-Integral-Derivative Network (PIDNet) model, and monocular-camera-based distance measurement are employed to address these tasks. The results indicate that, by integrating the viewpoints of USVs and UAVs, the collaborative USV-UAV system can successfully detect and classify different objects around the USV, differentiate between navigable and non-navigable regions through visual recognition, and accurately determine the distance between the USV and obstacles.
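The abstract does not give the exact ranging formula used in the paper; a common minimal approach for monocular-camera distance measurement is the pinhole camera model, in which distance is recovered from the focal length (in pixels), the known real-world height of the detected object, and the height of its bounding box in the image. The function name and example values below are illustrative assumptions, not details from the paper.

```python
def monocular_distance(focal_length_px: float,
                       real_height_m: float,
                       bbox_height_px: float) -> float:
    """Estimate distance to an object via the pinhole camera model:
    distance = (focal length [px] * real object height [m]) / bounding-box height [px].
    Assumes the object's true height is known (e.g., a buoy of standard size)."""
    if bbox_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    return focal_length_px * real_height_m / bbox_height_px

# Hypothetical example: a 0.8 m buoy spanning 40 px, focal length 1000 px
print(monocular_distance(1000.0, 0.8, 40.0))  # 20.0 (metres)
```

In a pipeline such as the one described, the bounding-box height would typically come from the detector's (e.g., YOLOX) output for each recognized obstacle class whose physical size is known a priori.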
