Abstract

We present a novel technique for localizing a polyhedral object in a robot hand by integrating visual and tactile data. Localization is performed by matching a hybrid set of visual and tactile features with corresponding model features. The matching process first determines a subset of the object's six degrees of freedom (DOFs) using the tactile feature. The remaining DOFs, which cannot be determined from the tactile feature, are then obtained by matching the visual feature. Two filtering techniques, one touch-based and one combining vision and touch, are developed to reduce the number of model feature sets that are actually matched against a given scene set. We demonstrate the performance of the technique using simulated and real data. In particular, we show its superiority over vision-based localization in the following aspects: (1) capability of determining the object pose under heavy occlusion, (2) number of generated pose hypotheses, and (3) accuracy of estimating the object depth.
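To illustrate the two-stage idea, the sketch below shows how a single planar tactile contact can constrain part of the pose before any visual matching: if a model face is known to touch a sensed contact plane, the face normal must align with the contact normal and the face must lie in that plane, which fixes three of the six DOFs (two rotations plus translation along the normal); the remaining in-plane translation and the spin about the normal are left for the vision stage. This is a hypothetical illustration of the general principle, not the paper's actual algorithm; all function names and the contact model are assumptions.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix mapping unit vector a onto unit vector b (Rodrigues formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = np.dot(a, b)
    if np.isclose(c, -1.0):
        # a and b are opposite: rotate by pi about any axis orthogonal to a.
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def tactile_pose_constraint(model_face_normal, model_face_point,
                            contact_normal, contact_point):
    """Constrain 3 of the 6 pose DOFs from one planar touch.

    Aligns the model face with the sensed contact plane, returning a
    rotation R (determined only up to a free spin about contact_normal)
    and the translation component along the contact normal.  The two
    in-plane translation DOFs and the spin remain free, to be resolved
    by matching a visual feature.
    """
    R = rotation_aligning(model_face_normal, contact_normal)
    t_normal = np.dot(contact_point - R @ model_face_point, contact_normal)
    return R, t_normal
```

A vision stage would then fix the remaining three DOFs, for example by matching one visible model vertex to its image location under the camera model, with the tactile stage having already pruned most candidate poses.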
