Abstract

A robot intended to monitor human behavior must account for the user's reactions in order to minimize perceived discomfort. Learning the user's interaction preferences and adapting the robot's behavior accordingly can positively affect the perceived quality of the interaction: the robot should approach the user without causing discomfort or interference. In this work, we contribute and implement a novel Reinforcement Learning (RL) approach for robot navigation toward a human user. Our implementation is a proof of concept that uses data gathered from real-world experiments to show that the algorithm works on the kind of data it would encounter in a realistic scenario. To the best of our knowledge, this work is one of the first attempts to provide an adaptive navigation algorithm that uses RL to account for non-deterministic phenomena.

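The abstract does not specify which RL formulation is used, so the sketch below is only a hypothetical illustration of how an approach of this kind could be set up: a tabular Q-learning loop over a discretized approach state, with a reward that trades off closing the distance to the user against an inferred discomfort signal. The action set, hyperparameters, and reward_fn are assumptions for illustration, not the authors' method.

```python
import random
from collections import defaultdict

# Hypothetical discrete action set for approaching the user.
ACTIONS = ["advance", "hold", "retreat"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard tabular Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def reward_fn(distance_to_user, discomfort_signal):
    """Illustrative reward: get closer while penalizing signs of user discomfort."""
    return -0.1 * distance_to_user - 1.0 * discomfort_signal
```

In a real deployment, the state would be derived from perception (e.g., the robot's distance and bearing to the user) and the discomfort signal from the user's observed reactions; those mappings are exactly what the paper's real-world data would inform.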