Abstract

Humanoid robots often struggle with tasks such as lifting objects due to the complexity of identifying contact points, applying the correct force, and tracking task progress. We propose an integrated solution that leverages the dual-arm capability of humanoids and fuses measurements from vision and force sensors. Our system employs a computer vision algorithm to detect and characterize object properties (shape, size, position, orientation) and to differentiate between parallel and non-parallel bi-manipulation tasks. The controller then identifies optimal contact points for the end effectors and generates trajectories that are fed into a closed-loop controller driven by force feedback. For parallel bi-manipulation, momentum cancellation is achieved through sensor fusion. For non-parallel surfaces, a reinforcement learning algorithm determines the lifting force needed to prevent slippage using only two contact points. Experimental validation on a real humanoid platform demonstrates the effectiveness of our approach in autonomously lifting objects, regardless of contact surface configuration. This advancement enhances the reliability and versatility of humanoid robots in complex manipulation tasks, contributing to their practical deployment in human-oriented environments.
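
To make the closed-loop, force-feedback idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a squeeze-force regulator for a two-contact lift: it raises the commanded normal force whenever the worst-case friction-cone margin at either contact drops below a target, which is one common way to prevent slippage with only two contact points. The sensor interface, friction coefficient, gains, and force limits are all hypothetical placeholders.

```python
# Hypothetical sketch of a slip-aware squeeze-force regulator for a
# dual-arm (two-contact) lift. Not the paper's controller; all names,
# thresholds, and gains are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class ContactReading:
    normal_force: float      # measured squeeze force at the contact (N)
    tangential_force: float  # measured shear force along gravity (N)


def slip_margin(reading: ContactReading, mu: float) -> float:
    """Friction-cone margin: positive means the contact is safely inside
    the cone, negative means the object is about to slip."""
    return mu * reading.normal_force - abs(reading.tangential_force)


def update_squeeze_command(command: float,
                           left: ContactReading,
                           right: ContactReading,
                           mu: float = 0.4,
                           margin_target: float = 1.0,
                           gain: float = 0.5,
                           f_max: float = 40.0) -> float:
    """One step of a proportional squeeze-force update.

    If the worst-case friction margin over the two contacts falls below
    the target, the commanded squeeze force is raised; if the margin is
    comfortably above the target, the command relaxes toward lower effort.
    """
    worst_margin = min(slip_margin(left, mu), slip_margin(right, mu))
    error = margin_target - worst_margin
    command = command + gain * error
    return max(0.0, min(command, f_max))  # saturate to actuator limits


if __name__ == "__main__":
    # Toy example: holding a roughly 1 kg object between two end effectors.
    command = 5.0
    for step in range(5):
        # In a real system these readings would come from wrist F/T sensors.
        left = ContactReading(normal_force=command, tangential_force=9.81)
        right = ContactReading(normal_force=command, tangential_force=9.81)
        command = update_squeeze_command(command, left, right)
        print(f"step {step}: commanded squeeze force = {command:.2f} N")
```

In this sketch the commanded force converges to the value at which the friction margin equals the target; in the paper's non-parallel-surface case, a learned policy would play the role of this hand-tuned update rule.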
