Abstract
To help robots perceive and understand an object's properties during noncontact robot-object interaction, this article proposes a deeply supervised subspace learning method. In contrast to previous work, it exploits the low noise and fast response of noncontact sensors and extracts novel contactless features for cross-modal information retrieval, so as to estimate and infer the material properties of both known and unknown objects. Specifically, a deeply supervised subspace cross-modal material retrieval model is trained to learn a common low-dimensional feature representation that captures the clustering structure shared by the different modal features of the same object class. Meanwhile, unknown objects are accurately detected by an energy-based model, which forces an unlabeled novel object's features to be mapped outside the common low-dimensional feature region. Experimental results show that our approach is effective in comparison with other state-of-the-art methods.
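The two ideas the abstract names, a shared low-dimensional subspace for cross-modal retrieval and an energy-based score for flagging unknown objects, can be illustrated with a minimal sketch. All names, dimensions, and the linear projections below are hypothetical placeholders: the paper learns its projections with deep supervision, whereas here they are random matrices used only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features from two noncontact modalities (dimensions are
# arbitrary choices for illustration, not from the paper).
vis = rng.normal(size=(100, 32))   # e.g. visual features
aud = rng.normal(size=(100, 16))   # e.g. auditory features

# Hypothetical linear maps into a shared 8-D subspace; the paper
# would learn these end to end with deep supervision.
W_vis = rng.normal(size=(32, 8))
W_aud = rng.normal(size=(16, 8))

z_vis = vis @ W_vis
z_aud = aud @ W_aud

def retrieve(query, gallery):
    """Cross-modal retrieval: nearest gallery item per query
    by cosine similarity in the shared subspace."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argmax(q @ g.T, axis=1)

matches = retrieve(z_vis, z_aud)   # index of best auditory match

def energy_score(logits, T=1.0):
    """Free-energy score E(x) = -T * logsumexp(f(x)/T); known
    objects tend to get low energy, unknown ones high energy."""
    return -T * np.log(np.sum(np.exp(logits / T), axis=1))

# Placeholder classifier logits over 5 known material classes.
logits = rng.normal(size=(100, 5))
scores = energy_score(logits)
# Thresholding is a free design choice; the 95th percentile here
# simply flags the highest-energy 5% as candidate novel objects.
is_novel = scores > np.quantile(scores, 0.95)
```

The energy function follows the standard logsumexp form used in energy-based out-of-distribution detection; how the paper couples it to the subspace mapping is not specified in the abstract, so this sketch treats the two stages independently.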