Abstract

To help robots understand and perceive an object's properties during noncontact robot-object interaction, this article proposes a deeply supervised subspace learning method. In contrast to previous work, it takes advantage of the low noise and fast response of noncontact sensors and extracts novel contactless feature information for cross-modal retrieval, so as to estimate and infer the material properties of both known and unknown objects. Specifically, a deeply supervised subspace cross-modal material retrieval model is trained to learn a common low-dimensional feature representation that captures the clustering structure among the different modal features of the same class of objects. Meanwhile, unknown objects are accurately recognized by an energy-based model, which forces an unlabeled novel object's features to be mapped outside the common low-dimensional subspace. The experimental results show that our approach is effective in comparison with other state-of-the-art methods.
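To make the described pipeline concrete, the following is a minimal conceptual sketch (assuming PyTorch) of the kind of model the abstract outlines: two modality encoders project features into a shared low-dimensional subspace, a classification head supplies the label supervision that shapes the clustering structure, and an energy score over the logits is used to flag unknown objects. All class names, dimensions, and loss choices here are illustrative assumptions, not the authors' exact formulation.

```python
# Conceptual sketch only (assumption: PyTorch). Encoder architectures,
# dimensions, and the energy definition are illustrative, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceEncoder(nn.Module):
    """Maps one modality's features into a shared low-dimensional subspace."""
    def __init__(self, in_dim, subspace_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, subspace_dim),
        )

    def forward(self, x):
        # Unit-normalized embedding in the common subspace.
        return F.normalize(self.net(x), dim=-1)

class CrossModalMaterialModel(nn.Module):
    """Hypothetical cross-modal retrieval model with an energy-based novelty score."""
    def __init__(self, dim_a, dim_b, subspace_dim=32, num_classes=10):
        super().__init__()
        self.enc_a = SubspaceEncoder(dim_a, subspace_dim)  # e.g., visual features
        self.enc_b = SubspaceEncoder(dim_b, subspace_dim)  # e.g., noncontact sensor features
        self.classifier = nn.Linear(subspace_dim, num_classes)  # supervision head

    def forward(self, xa, xb, labels=None):
        za, zb = self.enc_a(xa), self.enc_b(xb)
        logits = self.classifier(za)
        # Energy score: low for embeddings near known-class clusters,
        # high for novel (unknown) objects.
        energy = -torch.logsumexp(logits, dim=-1)
        if labels is None:
            return za, zb, energy
        # Alignment loss pulls same-class cross-modal embeddings together;
        # classification loss supervises the clustering structure in the subspace.
        align_loss = 1.0 - F.cosine_similarity(za, zb).mean()
        cls_loss = F.cross_entropy(logits, labels)
        return cls_loss + align_loss, energy
```

In this sketch, retrieval would compare embeddings across modalities in the shared subspace, and an object whose energy exceeds a threshold chosen on validation data would be treated as an unknown material; the actual training objective and energy formulation in the paper may differ.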
