Abstract

The goal of the DARPA Robotics Challenge (DRC) is the development of ground robots capable of executing complex tasks in disaster-relief environments. The Virtual Robotics Challenge (VRC) was the first phase of the competition, in which teams developed their own software to maneuver a simulated robot through a virtual obstacle course and perform a set of complex tasks. Because these scenarios require grasping and manipulation, this paper describes the visual 3-Dimensional (3D) perception functionality used to grasp a hose lying on a table, one of the tasks of the challenge. The sensor head of the Atlas robot used in the competition is equipped with a laser range scanner and a stereo camera, which provide the 3D point cloud to be processed. The presented approach processes the point cloud as follows: a plane detector segments the scene, and a tabletop assumption is then applied to detect the objects of interest. The hose connector is recognized using color-based region-growing segmentation, and its pose is then estimated by fitting a cylinder model. The experimental results demonstrate the efficiency of the proposed 3D perception pipeline in the scenario under study.

Keywords: Point Cloud, Humanoid Robot, Stereo Camera, Manipulation Task, Sensor Head
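The first stage of the pipeline described above, plane detection followed by the tabletop assumption, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a self-contained RANSAC plane fit on a synthetic point cloud (a flat "table" plus a raised cluster of "object" points), where everything not belonging to the dominant plane is kept as a candidate object, mirroring the tabletop assumption. All names, point counts, and thresholds here are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane to a point cloud with RANSAC.

    Returns ((normal, d), inlier_mask), where the plane satisfies
    normal . p + d = 0 for points p on the plane.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and build the plane they span.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count points within `threshold` of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic scene: 500 table points at z = 0, 100 object points ~0.1 m above.
rng = np.random.default_rng(42)
table = np.column_stack([rng.uniform(0, 1, 500),
                         rng.uniform(0, 1, 500),
                         np.zeros(500)])
obj = rng.uniform(0.4, 0.6, (100, 3)) + np.array([0.0, 0.0, 0.1])
cloud = np.vstack([table, obj])

model, inliers = ransac_plane(cloud)
# Tabletop assumption: everything not on the plane is an object candidate.
objects = cloud[~inliers]
```

In a real pipeline the remaining object points would then be clustered, the hose connector selected by color-based region growing, and its pose recovered by fitting a cylinder model to the selected cluster (e.g., again with RANSAC); those later stages are omitted here for brevity.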
