Abstract

This letter proposes a framework for an autonomous humanoid robot that searches for a target object in an unknown environment using 3D simultaneous localization and mapping (SLAM). While walking, the robot determines the next viewpoint from the environment map and aggregated object recognition results, and automatically finds and grasps the target object. Whereas most robot exploration studies require a static map, hints about the object's position, limits on the area size, or offline viewpoint planning time for each observation, our system can globally find an occluded object in an unknown environment based only on the 3D target model. The main novelty of this research is that the framework always runs its viewpoint planner in the background and immediately updates the destination whenever the camera acquires new environment or object information. To follow such goal changes quickly, the humanoid robot re-plans its footstep trajectory without stopping, using foot landing estimation based on 3D-SLAM localization. Notably, our robot can predict an unobserved area and actively reveal it while avoiding obstacles. We validated the efficacy of this method through real experiments with an “HRP2-KAI” in several environments, achieving fully automated searching and grasping.
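The core loop the abstract describes — picking the next viewpoint from a partially known map and re-picking whenever new observations arrive — can be illustrated with a minimal next-best-view sketch. This is not the paper's algorithm; all names (`Viewpoint`, `next_best_view`), the square sensor footprint, and the Manhattan travel cost are simplifying assumptions for a 2D occupancy grid.

```python
# Hypothetical next-best-view sketch (not the paper's implementation):
# score candidate viewpoints by how much unknown space they would reveal,
# discounted by travel cost, and re-pick whenever the map updates.
# Grid cell values: 0 = free, 1 = obstacle, -1 = unknown.

from dataclasses import dataclass

@dataclass(frozen=True)
class Viewpoint:
    x: int
    y: int
    radius: int  # assumed sensor range, in grid cells

def visible_unknown(grid, vp):
    """Count unknown cells inside the viewpoint's square sensor footprint."""
    h, w = len(grid), len(grid[0])
    count = 0
    for i in range(max(0, vp.y - vp.radius), min(h, vp.y + vp.radius + 1)):
        for j in range(max(0, vp.x - vp.radius), min(w, vp.x + vp.radius + 1)):
            if grid[i][j] == -1:
                count += 1
    return count

def next_best_view(grid, robot_xy, candidates, travel_weight=0.5):
    """Pick the candidate maximizing information gain minus travel cost."""
    rx, ry = robot_xy
    def score(vp):
        travel = abs(vp.x - rx) + abs(vp.y - ry)  # Manhattan distance
        return visible_unknown(grid, vp) - travel_weight * travel
    return max(candidates, key=score)
```

In an online system, `next_best_view` would be re-evaluated continuously as SLAM updates the grid, so the walking destination can change mid-trajectory, mirroring the background planner described above.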
