Abstract
This paper addresses the problem of object recognition from colorless 3D point clouds in underwater environments. It presents a performance comparison of state-of-the-art global descriptors, which are readily available as open source code. The studied methods are intended to assist Autonomous Underwater Vehicles (AUVs) in performing autonomous interventions in underwater Inspection, Maintenance and Repair (IMR) applications. A set of test objects was chosen as representative of IMR applications, in which object shapes are typically known a priori. As such, CAD models were used to create virtual views of the objects under realistic conditions of added noise and varying resolution. Extensive experiments were conducted on both virtual scans and real data collected with an AUV equipped with a fast laser sensor developed in our research centre. The underwater testing was conducted from a moving platform, which can create deformations in the perceived shape of the objects. These effects are considerably more difficult to correct than their above-water counterparts, and may therefore affect descriptor performance. Among other conclusions, our testing illustrated the importance of matching the resolution of the database scans and test scans, as this significantly impacted the performance of all descriptors except one. This paper contributes to the state of the art as the first comparison and performance evaluation of object recognition methods for underwater environments. It is also the first such comparison on data acquired with a free-floating underwater platform.
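To make the evaluated pipeline concrete, the following is a minimal sketch of how a global descriptor can be computed on a segmented object scan, assuming the Point Cloud Library (PCL) as the open-source implementation and using VFH purely as a representative example; the file name, voxel leaf size and search radius are illustrative values, not settings taken from the paper.

// Minimal sketch, assuming PCL; VFH stands in for any global descriptor.
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/search/kdtree.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("object_scan.pcd", *cloud);   // hypothetical segmented object scan

  // Voxel-grid downsampling so the test scan matches the resolution of the
  // database (CAD-derived) views, the factor the abstract highlights as
  // significantly impacting most descriptors.
  pcl::PointCloud<pcl::PointXYZ>::Ptr down(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(cloud);
  grid.setLeafSize(0.01f, 0.01f, 0.01f);             // 1 cm leaf size (illustrative)
  grid.filter(*down);

  // Surface normals, required by most global descriptors.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(down);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.03);                          // 3 cm radius (illustrative)
  ne.compute(*normals);

  // One 308-bin VFH signature describing the whole object cloud; recognition
  // then reduces to matching this signature against database signatures.
  pcl::PointCloud<pcl::VFHSignature308>::Ptr signature(new pcl::PointCloud<pcl::VFHSignature308>);
  pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
  vfh.setInputCloud(down);
  vfh.setInputNormals(normals);
  vfh.setSearchMethod(tree);
  vfh.compute(*signature);
  return 0;
}

The voxel-grid step is one simple way to bring a test scan and a database view to a common resolution before describing them; other resampling strategies would serve the same purpose.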
Highlights
The last few years have seen a multitude of object detection and recognition approaches appear in the literature
This paper addresses the problem of 3D object recognition in underwater environments
We provide a brief review of several local and, in particular, global point-cloud descriptors related to our work
Summary
The last few years have seen a multitude of object detection and recognition approaches appear in the literature. This development effort has been driven by the growing need for autonomous systems that can interact with poorly structured, poorly organized and dynamic real-world situations. Significant progress has been made in object recognition for mobile robots over the last decade. An application scenario that achieved a promising degree of performance is the use of robots in kitchen environments [1,2,3]. Robots are able to identify everyday objects such as bowls, plates and cups using color and depth cameras, in order to locate and grasp them in an automated way. Stereo vision systems have been used for identifying and grasping objects [5,6,7], where the robots aimed to accurately localize parts of the object from images and determine the correct grasping points.