Abstract
In this paper, the hybrid force/position control concept proposed by Raibert et al. is adapted to incorporate a changeable constraint coordinate frame in which the controller is implemented. An online method for generating this constraint frame by fusing visual and force sensor data is proposed. The basic idea is to complement a partially reconstructed trajectory (2D information) drawn on an object surface of unknown shape, observed by a CCD camera mounted on the end-effector, with the tactile information provided by a force sensor mounted on the same end-effector. This is achieved by identifying the normal and tangential directions of the constraint surface from the force sensor data, estimating the constraint surface at the next desired point, and finally projecting the desired point, obtained from the 2D image grabbed by the CCD camera, back onto this surface. The method is experimentally verified on a 6-DOF direct-drive (DD) robot.
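The following is a minimal illustrative sketch, not the authors' implementation, of the two steps named in the abstract: building a constraint frame from a wrist force reading and projecting a camera-derived desired point back onto the locally estimated constraint surface. It assumes frictionless contact (so the measured reaction force lies along the surface normal, up to sign convention), a calibrated pinhole camera rigidly mounted on the end-effector, and a locally planar surface patch around the current contact point; all function and variable names are hypothetical.

```python
import numpy as np


def estimate_constraint_frame(f_measured: np.ndarray) -> np.ndarray:
    """Build an orthonormal constraint frame [t1 t2 n] from a force reading.

    Under the frictionless-contact assumption, the measured reaction force
    points along the surface normal; the two tangential directions are any
    orthonormal completion of that normal.
    """
    n = f_measured / np.linalg.norm(f_measured)      # estimated surface normal
    helper = np.array([1.0, 0.0, 0.0])
    if abs(helper @ n) > 0.9:                        # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, helper)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    return np.column_stack((t1, t2, n))              # rotation: constraint frame -> base


def project_image_point_onto_surface(pixel, K, R_cam, p_cam, contact_point, normal):
    """Back-project a 2D image point and intersect the viewing ray with the
    local tangent plane of the constraint surface (locally planar model).

    K: 3x3 camera intrinsics; (R_cam, p_cam): camera pose in the base frame;
    contact_point, normal: current contact point and estimated normal.
    """
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray = R_cam @ (ray_cam / np.linalg.norm(ray_cam))        # ray direction in base frame
    # Ray/plane intersection: find s so that p_cam + s*ray lies on the tangent plane
    s = (normal @ (contact_point - p_cam)) / (normal @ ray)
    return p_cam + s * ray                                   # 3D desired point on the surface


if __name__ == "__main__":
    # Toy usage: a downward reaction force implies a horizontal tangent plane.
    frame = estimate_constraint_frame(np.array([0.0, 0.0, 5.0]))
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    p_des = project_image_point_onto_surface(
        pixel=(320.0, 240.0), K=K,
        R_cam=np.eye(3), p_cam=np.array([0.0, 0.0, 0.3]),
        contact_point=np.zeros(3), normal=frame[:, 2])
    print(frame, p_des)
```

In this sketch the hybrid controller would then command force along the frame's normal axis and position along the two tangential axes; how the paper actually parameterizes and updates the frame online is described in the body of the paper, not here.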