Abstract

As the cost of underwater sensor network nodes falls and the demand for underwater detection and monitoring grows, near-shore areas, shallow waters, lakes and rivers are increasingly covered by densely deployed sensor nodes. To achieve real-time monitoring, many of these nodes now carry visual sensors rather than acoustic sensors to collect and analyze optical images, since cameras can be more advantageous in dense underwater sensor networks. In this article, image enhancement, saliency detection, calibration and refraction-model calculation are applied to the video streams collected by multiple optical cameras to obtain the track of a dynamic target. This study not only adapts the AOD-Net (All-in-One Dehazing Network) image defogging algorithm to underwater image enhancement, but also builds on the BASNet (Boundary-Aware Salient object detection Network) architecture, adding frame-difference results to the input to reduce interference from static targets. Based on these techniques, the paper designs a dynamic target tracking system centered on video-stream processing in dense underwater networks, in which most nodes carry underwater cameras. When a dynamic target is captured by at least two nodes in the network at the same time, its position can be calculated and tracked.
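
The frame-difference input described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the authors' implementation) of how a per-pixel difference between consecutive frames could be computed and concatenated with the current frame as an extra input channel for a BASNet-style saliency network; the function name, the four-channel input convention and the Gaussian smoothing step are assumptions made for illustration.

    import cv2
    import numpy as np

    def build_saliency_input(prev_frame, curr_frame):
        """Concatenate the current frame with a grayscale frame difference
        so a saliency network can favour moving (dynamic) targets.

        prev_frame, curr_frame: HxWx3 uint8 BGR images from one camera node.
        Returns an HxWx4 float32 array in [0, 1] (RGB + motion channel).
        """
        # Absolute per-pixel difference highlights regions that changed
        # between consecutive frames, i.e. candidate dynamic targets.
        diff = cv2.absdiff(curr_frame, prev_frame)
        motion = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

        # Light smoothing so sensor noise is not mistaken for motion.
        motion = cv2.GaussianBlur(motion, (5, 5), 0)

        rgb = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        motion = motion.astype(np.float32)[..., None] / 255.0

        # Stack as a 4-channel input; a BASNet-style encoder would need its
        # first convolution widened to accept the extra channel.
        return np.concatenate([rgb, motion], axis=-1)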

Highlights

  • Advances in underwater sensor networks have facilitated a wide variety of exciting scientific applications [1,2,3], including automated surveys of underwater environments and underwater object detection [4,5,6]

  • There are a large number of node positioning algorithms for underwater wireless sensor networks

  • Most recent successful target detection methods are based on convolutional neural networks (CNNs), and many different architectures have been designed on this basis

Introduction

Advances in underwater sensor networks have facilitated a wide variety of exciting scientific applications [1,2,3], including automated surveys of underwater environments and underwater object detection [4,5,6]. Localization schemes aim to achieve large coverage, low communication overhead, high accuracy, low deployment cost and good scalability [8,9,10]. Most existing positioning methods for underwater acoustic networks rely on acoustic communication, optical communication or ultra-short baseline, which require strong network connectivity and time synchronization between nodes, yet yield only the location and volume information of target objects. The model proposed here detects high-saliency regions in the optical video and, by modifying the network model, suppresses interference from static but highly salient objects in the scene, so as to obtain accurate target position information that can assist AUV navigation and positioning or simplify the target detection task. This paper also constructs a mathematical model that converts the target's pixel coordinates, as observed by multiple source nodes, into world coordinates, completing the calculation and detection of the target's position information.
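
The pixel-to-world conversion itself is not reproduced in this excerpt, but the two-camera step it relies on can be sketched as follows. Assuming each node's camera has been calibrated (intrinsics K, rotation R, translation t) and that refraction has already been compensated in the pixel coordinates, the target's world position can be estimated by linear (DLT) triangulation from two synchronized detections. The function and variable names below are illustrative, not the authors' code.

    import numpy as np

    def triangulate(K1, R1, t1, K2, R2, t2, px1, px2):
        """Linear (DLT) triangulation of one target seen by two camera nodes.

        K*: 3x3 intrinsics, R*: 3x3 rotation, t*: length-3 translation
        (world -> camera), px*: (u, v) pixel coordinates of the same target,
        assumed already corrected for underwater refraction.
        Returns the estimated 3D world coordinates of the target.
        """
        P1 = K1 @ np.hstack([R1, t1.reshape(3, 1)])   # 3x4 projection, node 1
        P2 = K2 @ np.hstack([R2, t2.reshape(3, 1)])   # 3x4 projection, node 2

        u1, v1 = px1
        u2, v2 = px2
        # Each observation gives two linear constraints on the homogeneous
        # world point X, e.g. u * (P[2] @ X) - (P[0] @ X) = 0.
        A = np.vstack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        # Least-squares solution: right singular vector with the smallest
        # singular value, then dehomogenize.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]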
