Abstract

We present a technique to localize polyhedral objects by integrating visual and tactile data. This technique is useful in tasks such as localizing an object in a robot hand. It is assumed that visual data are provided by a monocular visual sensor, while tactile data are provided by a planar-array tactile sensor in contact with the object. Visual data are used to generate a set of hypotheses about the object's 3D pose, while tactile data are used to help verify the visually generated pose hypotheses. We specifically focus on using tactile data in hypothesis verification. A set of indexed bounds on the object's six transformation parameters is constructed from the tactile data. These indexed bounds are constructed off-line by expressing them with respect to a tactile-array frame. At run-time, each visually generated hypothesis is efficiently compared with the touch-based bounds to determine whether to eliminate the hypothesis or to consider it for further verification. The proposed technique is tested using simulated and real data.
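
The run-time pruning step described above can be illustrated with a minimal sketch. Assuming the touch-based bounds take the form of one interval per transformation parameter (three rotations and three translations expressed in the tactile-array frame) and that a hypothesis survives only if every parameter falls inside its interval, the check might look as follows; all names, the bound layout, and the numeric values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): prune visually generated pose
# hypotheses against touch-derived interval bounds on the six transformation
# parameters, expressed in the tactile-array frame.

from dataclasses import dataclass
from typing import List, Tuple

Interval = Tuple[float, float]  # (lower, upper) bound on one parameter


@dataclass
class PoseHypothesis:
    # Six transformation parameters: three rotation angles (radians)
    # followed by three translation components (assumed layout).
    params: Tuple[float, float, float, float, float, float]


def within_bounds(hypothesis: PoseHypothesis,
                  bounds: List[Interval]) -> bool:
    """Return True if every parameter lies inside its touch-based bound.

    Hypotheses that violate any bound are eliminated; the remainder are
    passed on for further (more expensive) verification.
    """
    return all(lo <= p <= hi
               for p, (lo, hi) in zip(hypothesis.params, bounds))


# Bounds constructed off-line from the tactile data (placeholder values).
touch_bounds = [(-0.2, 0.2), (-0.1, 0.3), (-3.2, 3.2),   # rotations
                (-5.0, 5.0), (-5.0, 5.0), (0.0, 10.0)]   # translations

candidates = [PoseHypothesis((0.05, 0.1, 1.0, 1.2, -0.4, 3.3)),
              PoseHypothesis((0.9, 0.0, 0.0, 0.0, 0.0, 0.0))]

surviving = [h for h in candidates if within_bounds(h, touch_bounds)]
print(f"{len(surviving)} of {len(candidates)} hypotheses retained")
```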
