Abstract

Assistive Robotic Manipulators (ARMs) play an important role for people with upper-limb disabilities and the elderly by helping them complete Activities of Daily Living (ADLs). However, because the objects handled in ADLs differ in size, shape, and manipulation constraints, many two- or three-fingered end-effectors of ARMs have difficulty interacting robustly with these objects. In this paper, we propose vision-based control of a five-fingered manipulator (Schunk SVH) that automatically changes its approach based on object classification using computer vision combined with deep learning. The control method is tested in a simulated environment and achieves a more robust grasp with the properly shaped five-fingered hand than with a comparable three-fingered gripper (Barrett Hand) using the same control sequence. In addition, the final optimal grasp pose (x, y, and θ) is learned through a deep regressor in the penultimate stage of the grasp. This method correctly identifies the optimal grasp pose in 78.35% of cases when all three parameters are considered, for objects included in the training set but presented in a setting different from that of the training set.
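The grasp-pose regression described above can be sketched as a network head that maps image features to the three pose parameters (x, y, θ). The layer sizes, feature dimension, and initialization below are illustrative assumptions, not the paper's actual model, which would sit on top of a convolutional feature extractor:

```python
import numpy as np

def init_regressor(in_dim=512, hidden=128, seed=0):
    # Hypothetical two-layer MLP head: image features -> (x, y, theta).
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0.0, 0.01, (in_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.01, (hidden, 3)),  # 3 outputs: x, y, theta
        "b2": np.zeros(3),
    }

def predict_grasp_pose(params, features):
    # ReLU hidden layer followed by a linear output layer.
    h = np.maximum(0.0, features @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

params = init_regressor()
feats = np.ones(512)  # placeholder for CNN features of one camera image
pose = predict_grasp_pose(params, feats)
print(pose.shape)  # one (x, y, theta) prediction
```

In a full pipeline, such a head would be trained with a regression loss (e.g. mean squared error) against labeled grasp poses, after the classification stage has selected the approach strategy.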

