Abstract

In this paper, the hybrid position/force control concept proposed by Raibert et al. is adapted to incorporate a changeable constraint coordinate frame in which the controller is implemented. An online method for generating this constraint frame based on the fusion of visual and force sensor data is proposed. The basic idea is to complement a partially reconstructed trajectory (2D information) drawn on the surface of an object of unknown shape, observed by a CCD camera mounted on the end-effector, with the tactile information provided by a force sensor mounted on the same end-effector. This is achieved by identifying the normal and tangential directions of the constraint surface from the force sensor data, estimating the constraint surface at the next desired point, and finally projecting the desired point extracted from the 2D image grabbed by the CCD camera back onto this surface. The method is experimentally verified on a 6-DOF direct-drive (DD) robot.
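
As a rough illustration of the frame-construction and projection steps summarized above, the sketch below (Python with NumPy; all function names, variable names, and numerical values are hypothetical assumptions, not taken from the paper) estimates a surface normal from a measured contact force, builds an orthonormal tangential basis, and intersects a camera ray with the resulting tangent plane to recover a 3D desired point.

```python
import numpy as np

# Hypothetical illustration of the constraint-frame construction and
# image-point projection described in the abstract; the paper's actual
# formulation may differ.

def estimate_constraint_frame(force):
    """Estimate the surface normal and two tangential directions from a
    measured contact force (quasi-static contact, friction neglected)."""
    n = force / np.linalg.norm(force)  # normal taken along the reaction force
    # Pick any vector not parallel to n to build an orthonormal tangent basis.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    return n, t1, t2

def project_image_point_to_plane(ray_dir, cam_origin, contact_point, n):
    """Intersect the camera ray of a 2D image point with the local tangent
    plane (through `contact_point`, with normal `n`) to recover the 3D
    desired point on the estimated constraint surface."""
    denom = ray_dir @ n
    if abs(denom) < 1e-9:
        raise ValueError("Ray is parallel to the estimated surface plane")
    s = ((contact_point - cam_origin) @ n) / denom
    return cam_origin + s * ray_dir

if __name__ == "__main__":
    f_meas = np.array([0.5, -0.2, 9.0])            # measured contact force (N), illustrative
    n, t1, t2 = estimate_constraint_frame(f_meas)
    ray = np.array([0.0, 0.1, -1.0])
    p_next = project_image_point_to_plane(
        ray_dir=ray / np.linalg.norm(ray),         # ray of the next 2D trajectory point
        cam_origin=np.array([0.0, 0.0, 0.4]),      # assumed end-effector camera position
        contact_point=np.zeros(3),                 # current contact point
        n=n,
    )
    print("constraint normal:", n)
    print("projected desired point:", p_next)
```

In this sketch the tangent plane at the current contact point stands in for the locally estimated constraint surface; the desired position command would then be expressed in the (n, t1, t2) frame for the hybrid position/force controller.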
