Abstract

Recent advancements in sensors and deep learning techniques have improved the reliability of robotic perception systems, but current systems are not robust enough for real-world challenges such as occlusions and sensing uncertainties in cluttered scenes. To overcome these issues, active or interactive perception actions are often necessary, such as repositioning a sensor or manipulating an object to reveal more information about the scene. Existing perception systems lack a comprehensive approach that incorporates both active and interactive action spaces, thereby limiting the robot's perception capabilities. Moreover, these systems focus on exploring a single object or scene, without utilizing object information to guide the exploration of multiple objects. In this work, we propose an object-aware hybrid perception system that selects the next best action by considering both active and interactive action spaces and uses object-level information to guide a cognitive robot operating in tabletop scenarios. Novel volumetric utility metrics are used to evaluate actions that include positioning sensors from a heterogeneous set or manipulating objects to gain a better perspective of the scene. The proposed system maintains a volumetric representation of the scene enriched with semantic object information, enabling it to exploit object information, associate occlusions with the corresponding objects, and make informed decisions about object manipulation. We evaluate the performance of our system in both simulated and real-world experiments using a Baxter robotic platform equipped with two arms and RGB and depth cameras. Our experimental results show that the proposed system outperforms the compared state-of-the-art methods in the given scenarios, achieving an 11.2% performance increase.
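
The abstract describes scoring candidate active (sensor repositioning) and interactive (object manipulation) actions with a volumetric utility over a semantic scene map. The sketch below illustrates that selection loop under stated assumptions; the class and function names (Action, SemanticVolumetricMap, select_next_best_action) and the gain-minus-cost utility form are hypothetical placeholders, not the authors' implementation.

```python
# Minimal, hypothetical sketch of next-best-action selection over a semantic
# volumetric map. All identifiers and the utility form are illustrative.

from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)
class Action:
    kind: str      # "active" = reposition a sensor, "interactive" = manipulate an object
    target: str    # hypothetical sensor pose id or object id
    cost: float    # expected execution effort


class SemanticVolumetricMap:
    """Toy stand-in for a voxel map carrying semantic object labels."""

    def __init__(self, expected_gain: Dict[Action, float]):
        # Precomputed expected number of unknown voxels each action would
        # reveal. In practice this would come from ray casting candidate
        # viewpoints or predicting the volume freed by moving an occluder,
        # not from a lookup table.
        self._expected_gain = expected_gain

    def expected_information_gain(self, action: Action) -> float:
        return self._expected_gain.get(action, 0.0)


def select_next_best_action(scene: SemanticVolumetricMap,
                            candidates: List[Action],
                            cost_weight: float = 0.1) -> Action:
    """Return the candidate maximizing volumetric gain minus weighted cost."""
    return max(candidates,
               key=lambda a: scene.expected_information_gain(a) - cost_weight * a.cost)


if __name__ == "__main__":
    move_camera = Action("active", "viewpoint_3", cost=2.0)
    push_box = Action("interactive", "box_1", cost=5.0)
    scene = SemanticVolumetricMap({move_camera: 120.0, push_box: 340.0})
    print(select_next_best_action(scene, [move_camera, push_box]))  # -> push_box
```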
