Abstract
In shared autonomy, a robot and a human user share control in order to achieve a common goal. Choosing how to balance control between the user and the robot is challenging because users differ in their preferences and in their skill at operating the robot. We propose a novel formulation of a Partially Observable Markov Decision Process (POMDP) that models the user's expertise in controlling the robot. The POMDP uses observations of the user's actions and of the environment to update its belief about the user's skill and to choose a level of control to share between the robot and the user. Each level of control is encapsulated in a macro-action controller. We ran a user study to evaluate our formulation: users drove a simulated robot through an obstacle-filled map while the POMDP model selected macro-action controllers based on its belief about the user's skill level. The results show that our model can capture user skill, and that the controller with greater robot autonomy helped low-skill users avoid obstacles more than it helped high-skill users.
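The core loop described above (observe the user's actions, update a belief over their skill, then select a controller with more or less robot autonomy) can be illustrated with a minimal sketch. The two skill levels, the observation model, the threshold, and the controller names below are hypothetical assumptions for illustration, not the paper's actual POMDP formulation.

```python
# Minimal sketch only: the state space (two skill levels), observation model,
# and controllers are assumed for illustration; they are not the paper's model.

SKILLS = ("novice", "expert")

# Assumed P(observation | skill): probability that the user's action looks
# "safe" (e.g., steers away from a nearby obstacle) under each skill level.
P_SAFE_GIVEN_SKILL = {"novice": 0.4, "expert": 0.9}


def update_belief(belief, observed_safe_action):
    """Bayesian belief update over the user's skill from one observation."""
    posterior = {}
    for skill in SKILLS:
        likelihood = (P_SAFE_GIVEN_SKILL[skill] if observed_safe_action
                      else 1.0 - P_SAFE_GIVEN_SKILL[skill])
        posterior[skill] = likelihood * belief[skill]
    total = sum(posterior.values())
    return {skill: p / total for skill, p in posterior.items()}


def select_controller(belief, autonomy_threshold=0.6):
    """Pick a macro-action controller: give the robot more autonomy when the
    belief that the user is a novice exceeds the (assumed) threshold."""
    if belief["novice"] > autonomy_threshold:
        return "high_autonomy_controller"  # robot takes more control
    return "low_autonomy_controller"       # user keeps more control


# Example: start from a uniform belief and process three observed actions.
belief = {"novice": 0.5, "expert": 0.5}
for safe in (False, False, True):
    belief = update_belief(belief, observed_safe_action=safe)
    print(belief, select_controller(belief))
```

In a full POMDP the controller would be chosen by a policy optimized over the belief state rather than by a fixed threshold; the threshold rule here only stands in for that selection step.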