Abstract

Robots operating in household environments should detect articulated objects and estimate their properties to efficiently perform tasks given by human operators. This paper presents the design and implementation of a system that estimates a point-cloud-based model of a scene, enhanced with information about articulated objects, from a single RGB-D image. It describes the neural methods used to detect handles, detect and extract fronts, detect rotational joints, and build a point cloud model. It compares various neural network architectures for detecting handles and the fronts of drawers and cabinets, and for estimating rotational joints. Finally, the results are merged to build a 3D model of the articulated objects in the environment.
