Abstract

The report presents the stages of developing a new method for automatic underwater manipulation of objects known in advance. Based on the proposed method, a system uses point clouds obtained from the onboard multibeam sonar of an autonomous uninhabited underwater robot to identify an underwater object of known shape. To determine the location and spatial orientation of this object, a pre-built three-dimensional model of it is converted to a point cloud and subjected to additional processing. The desired trajectory of the manipulator’s tool is then projected onto the object surface obtained by triangulating the cloud points belonging to the object, and by evaluating the relative positions of the real and desired trajectories, the system checks how accurately the point cloud of the constructed model has been registered with the point cloud of the real scanned object. Based on this assessment, a decision is made on whether the manipulator should execute the trajectory built on the object’s surface. Even if the robot has only a low-bandwidth communication channel with the operator, the desired and real trajectories can be sent to the operator’s console for additional confirmation that the given manipulation operation can be performed. The proposed method is based on algorithms for point cloud registration, triangulation, and trajectory projection. The developed system is implemented in the C++ programming language using the open-source PCL and Eigen libraries. Numerical simulation of the system was carried out in the V-REP environment using a multibeam sonar model and scanned scenes that include the objects of work. The project is built with the CMake system and uses "OpenGL Core" for visualization.
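The abstract does not specify which registration or triangulation algorithms are used, only that the system is built in C++ on PCL and Eigen. The following is a minimal sketch, under assumed parameters and hypothetical input file names ("object_model.pcd", "sonar_scan.pcd"), of how the two named steps could look with standard PCL components: ICP registration of the model cloud to the sonar scan, followed by greedy projection triangulation of the aligned cloud to obtain the surface onto which a tool trajectory could be projected. It is illustrative, not the authors' implementation.

```cpp
// Sketch: register a pre-built model cloud to a sonar scan, then triangulate.
// ICP, thresholds, and file names are assumptions for illustration only.
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/common/io.h>
#include <pcl/registration/icp.h>
#include <pcl/features/normal_3d.h>
#include <pcl/surface/gp3.h>
#include <pcl/search/kdtree.h>
#include <iostream>

int main()
{
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;
  Cloud::Ptr model(new Cloud), scan(new Cloud);
  // Hypothetical inputs: the 3D model converted to a point cloud and the sonar scan.
  pcl::io::loadPCDFile("object_model.pcd", *model);
  pcl::io::loadPCDFile("sonar_scan.pcd", *scan);

  // Register the model cloud to the scanned cloud to estimate the object pose.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(model);
  icp.setInputTarget(scan);
  icp.setMaximumIterations(50);
  icp.setMaxCorrespondenceDistance(0.1);   // metres, assumed scene scale
  Cloud::Ptr aligned(new Cloud);
  icp.align(*aligned);
  if (!icp.hasConverged()) {
    std::cerr << "Registration failed\n";
    return 1;
  }
  Eigen::Matrix4f pose = icp.getFinalTransformation();  // object location and orientation
  std::cout << "Fitness: " << icp.getFitnessScore() << "\nPose:\n" << pose << "\n";

  // Estimate normals, then build a triangle mesh of the object surface,
  // onto which the desired tool trajectory can later be projected.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  ne.setInputCloud(aligned);
  ne.setSearchMethod(tree);
  ne.setKSearch(20);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  pcl::PointCloud<pcl::PointNormal>::Ptr with_normals(new pcl::PointCloud<pcl::PointNormal>);
  pcl::concatenateFields(*aligned, *normals, *with_normals);

  pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
  pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
  tree2->setInputCloud(with_normals);
  gp3.setSearchRadius(0.05);               // assumed average point spacing
  gp3.setMu(2.5);
  gp3.setMaximumNearestNeighbors(100);
  gp3.setSearchMethod(tree2);
  gp3.setInputCloud(with_normals);
  pcl::PolygonMesh mesh;
  gp3.reconstruct(mesh);
  std::cout << "Triangles: " << mesh.polygons.size() << "\n";
  return 0;
}
```

The ICP fitness score is one possible proxy for the registration-accuracy check described above; the report's own criterion compares the real and desired trajectories rather than raw point residuals.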
