Abstract

During underwater intervention and monitoring operations, large amounts of imagery and sensory data are produced and stored. These data have the potential to help automate future operations. In this article, we propose a method for producing segmented data that can be used to train neural networks for performing desired control tasks. By combining unlabeled images from previous operations with a paired 3-D drawing of the same monitored object, we can train a generative adversarial network to learn the adaptation between the two domains and produce synthetic images with a high resemblance to the original footage. The clear advantage is that the enhanced synthetic images contain the segmentation of the object without the expensive cost of manually segmenting the images. The enhanced segmented data are used to train an object detector that predicts bounding boxes locating the segmented object. This is used in two ways: 1) to analyze the quality of the segmentation and 2) to command a control task for a remotely operated vehicle relative to the object of interest. The main control task is yaw control to keep the object centered in the camera frame. Additionally, we explore how overtraining the domain adaptation network degrades the accuracy of the object detector, by comparing the accuracy of object detectors trained on datasets produced by different epochs of the domain adaptation network.
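The yaw-control task described above can be sketched as a simple proportional law on the horizontal offset between the detected bounding box and the image center. This is a minimal illustration, not the article's controller; the function names, the bounding-box format `(x_min, y_min, x_max, y_max)`, and the gain `k_p` are assumptions for the sketch.

```python
def yaw_error(bbox, frame_width):
    """Horizontal offset of the bounding-box center from the image center,
    normalized to [-1, 1]; positive means the object lies right of center.

    bbox is assumed to be (x_min, y_min, x_max, y_max) in pixels.
    """
    x_min, _, x_max, _ = bbox
    box_center = 0.5 * (x_min + x_max)
    half_width = frame_width / 2.0
    return (box_center - half_width) / half_width


def yaw_rate_command(bbox, frame_width, k_p=0.5):
    """Proportional yaw-rate command (hypothetical gain k_p) that turns the
    vehicle toward the object so it stays centered in the camera frame."""
    return k_p * yaw_error(bbox, frame_width)
```

For example, with a 640-pixel-wide frame, a box centered at pixel 320 yields a zero command, while a box in the right half of the frame yields a positive yaw rate.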
