Abstract

We present a novel technique for localizing a polyhedral object in a robot hand by integrating visual and tactile data. Localization is performed by matching a hybrid set of visual and tactile features against corresponding model features. The matching process first determines a subset of the object's six degrees of freedom (DOFs) from the tactile feature; the remaining DOFs, which the tactile feature cannot determine, are then obtained by matching the visual feature. Touch-based and vision/touch-based filtering techniques are developed to reduce the number of model feature sets actually matched against a given scene set. We demonstrate the performance of the technique on simulated and real data. In particular, we show its superiority over vision-based localization in three respects: (1) ability to determine the object pose under heavy occlusion, (2) number of generated pose hypotheses, and (3) accuracy of the estimated object depth.
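To make the two-stage idea concrete, the following is a minimal sketch, not the paper's algorithm: it assumes an illustrative planar-contact tactile feature (a matched model face touching a sensed plane, which fixes two rotational DOFs and the translation along the contact normal) and an illustrative visual feature of two matched 3-D points (which fixes the remaining rotation about the normal and the in-plane translation). All function and parameter names here are invented for the example.

```python
import numpy as np

def rotation_aligning(a, b):
    """Smallest rotation taking unit vector a onto unit vector b (Rodrigues formula)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, 1.0):                     # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):                    # opposite: 180 deg about any perpendicular axis
        axis = np.cross(a, [1., 0., 0.])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0., 1., 0.])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0., -v[2],  v[1]],
                  [v[2],  0., -v[0]],
                  [-v[1], v[0],  0.]])          # skew-symmetric cross-product matrix
    return np.eye(3) + K + K @ K * ((1.0 - c) / (v @ v))

def localize(face_normal_m, face_point_m, plane_d, vis_pts_m, vis_pts_s):
    """Hypothetical two-stage pose estimate (R, t) with scene = R @ model + t.

    Tactile stage: align the matched model-face normal with the sensed
    contact plane z = plane_d (sensor normal taken as +z here), fixing
    2 rotational DOFs and the translation along the normal.
    Visual stage: two matched 3-D point features fix the remaining
    rotation about the normal and the in-plane translation.
    """
    n = np.array([0., 0., 1.])
    R1 = rotation_aligning(face_normal_m, n)       # tactile: 2 rotational DOFs
    tz = plane_d - (R1 @ face_point_m)[2]          # tactile: translation along normal

    u1, u2 = (R1 @ p for p in vis_pts_m)           # model points after tactile alignment
    s1, s2 = vis_pts_s
    dm, ds = (u2 - u1)[:2], (s2 - s1)[:2]          # in-plane difference vectors
    theta = np.arctan2(ds[1], ds[0]) - np.arctan2(dm[1], dm[0])
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])                  # vision: rotation about contact normal
    t = np.array([*(s1[:2] - (Rz @ u1)[:2]), tz])  # vision: in-plane translation
    return Rz @ R1, t
```

In this toy decomposition, the tactile contact alone pins 3 of the 6 DOFs, so the visual match only has to resolve a 3-DOF residual, which is one intuition behind the filtering the abstract describes.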
