Abstract

Our work focuses on robots to be deployed in human environments. These robots will need specialized object manipulation skills, and they should leverage end-users to efficiently learn the affordances of objects in their environment. This approach is promising because people naturally focus on showing salient aspects of objects [1]. We replicate prior results and build on them to create a combination of self-exploration and supervised learning. We present experimental results with a robot learning 5 affordances on 4 objects over 1219 interactions. We compare three conditions: (1) learning through self-exploration, (2) learning from supervised examples provided by 10 naive users, and (3) self-exploration biased by the user input. Our results characterize the benefits of self-exploration and supervised affordance learning, and show that the combined approach is the most efficient and successful.
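To make the three conditions concrete, the following is a minimal sketch, not taken from the paper: the discrete action space, the simulated Bernoulli outcomes, and the smoothing weights are all illustrative assumptions. It models an affordance as an unknown per-action success probability and contrasts uniform self-exploration (condition 1) with exploration biased toward a handful of user-demonstrated actions (conditions 2 and 3 combined).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): a discrete set of action
# parameterizations, each with an unknown probability of producing the
# target effect on an object (e.g., "the object rolls").
N_ACTIONS = 20
true_p = rng.uniform(size=N_ACTIONS)  # ground truth, hidden from the learner


def interact(a):
    """Simulate one robot-object interaction; True if the effect occurred."""
    return rng.random() < true_p[a]


def explore(n_trials, weights=None):
    """Estimate per-action success rates from n_trials interactions.

    weights=None  -> uniform self-exploration (condition 1).
    weights given -> exploration biased toward user-shown actions (condition 3).
    """
    p = np.full(N_ACTIONS, 1 / N_ACTIONS) if weights is None else weights / weights.sum()
    successes = np.zeros(N_ACTIONS)
    attempts = np.zeros(N_ACTIONS)
    for a in rng.choice(N_ACTIONS, size=n_trials, p=p):
        attempts[a] += 1
        successes[a] += interact(a)
    return np.divide(successes, attempts,
                     out=np.zeros(N_ACTIONS), where=attempts > 0)


# Stand-in for the supervised examples of condition 2: a handful of user
# demonstrations concentrated on salient actions. Additive smoothing keeps
# every action reachable during biased exploration.
demos = np.array([3, 3, 7, 7, 12])
weights = np.bincount(demos, minlength=N_ACTIONS) + 0.5

est_self = explore(200)
est_biased = explore(200, weights=weights)

# Biased exploration concentrates trials where users pointed, so its
# estimates for the demonstrated actions converge with fewer interactions.
print("error on demoed actions (self):  ",
      np.abs(est_self[demos] - true_p[demos]).mean().round(3))
print("error on demoed actions (biased):",
      np.abs(est_biased[demos] - true_p[demos]).mean().round(3))

Running the sketch shows the biased learner reaching lower estimation error on the demonstrated actions for the same interaction budget, which is the intuition behind combining user input with self-exploration.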
