Abstract

Water-obstacle detection based on semantic segmentation is an essential part of the autonomous navigation of unmanned surface vehicles (USVs). However, existing methods struggle to deliver both real-time obstacle recognition and high detection accuracy. To address this issue, we propose a novel network architecture, the lightweight water-obstacle detection network (LWDNet). LWDNet adopts a novel backbone, a bottleneck structure with an attention block: the bottleneck reduces the model size, while the attention block captures richer semantic information. In addition, dilated convolution is applied within the depthwise separable (DW) convolutions to strengthen feature extraction. Furthermore, an improved focal loss, a weighted combination of the main and auxiliary focal losses, increases detection accuracy. To evaluate the real-time performance and detection accuracy of LWDNet, we conduct experiments on the challenging Multi-modal Marine Obstacle Detection Dataset 2 (MODD2). The results show that, compared with state-of-the-art methods such as WaSR and ShorelineNet, LWDNet runs much faster (62 frames per second on an NVIDIA RTX 2080Ti) while achieving comparable detection performance (87.5% F-measure, 77.8% IoU). LWDNet is therefore an efficient water-obstacle detection network that makes autonomous USV navigation more reliable.
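
As a sketch of the two mechanisms named in the abstract, the PyTorch snippet below shows a dilated depthwise separable convolution block and a weighted main/auxiliary focal loss. This is a minimal illustration under standard formulations only; the module name DilatedDWConv, the channel counts, the dilation rate, gamma, and the 0.7/0.3 loss weights are assumptions and do not come from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedDWConv(nn.Module):
    # Depthwise separable convolution with a dilated depthwise stage.
    # Channel counts and the dilation rate are illustrative assumptions.
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        # Dilated 3x3 depthwise convolution; padding preserves spatial size.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        # 1x1 pointwise convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

def focal_loss(logits, target, gamma=2.0):
    # Per-pixel multi-class focal loss: (1 - p_t)^gamma * CE, averaged
    # over all pixels. gamma=2.0 is the common default, assumed here.
    ce = F.cross_entropy(logits, target, reduction="none")  # shape [B, H, W]
    pt = torch.exp(-ce)  # probability assigned to the true class
    return ((1.0 - pt) ** gamma * ce).mean()

# Weighted combination of main and auxiliary focal losses; the 0.7/0.3
# weights and the single auxiliary head are assumptions for illustration:
#   loss = 0.7 * focal_loss(main_logits, labels) + 0.3 * focal_loss(aux_logits, labels)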
