Abstract
This paper presents an approach to robot-assisted navigation in a mobility assistance context: the system learns from the user while helping them navigate efficiently and safely in complex environments. Assistive robots such as robotic walkers allow users to support their body weight on the upper limbs while walking. However, walkers can add an extra layer of distress due to their specific manipulation constraints. For users of such devices, a lack of dexterous upper-limb control can be a considerable problem, as it may prevent them from operating the device efficiently; users may also have visual impairments that reduce their navigational efficiency. The proposed approach combines a Reinforcement Learning (RL) model with a dynamic window-based local motion planning algorithm. It aims to learn the corrections needed in the motion command based on the surrounding environment and the user's intent. The proposed solution applies these corrections to guide the user through the environment without collisions, learning to assist whenever the user is unable to operate the device efficiently. The RL-based approach was tested in indoor scenarios with a robotic walker platform, showing promising preliminary results.
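The core idea of correcting a user's command while keeping it within kinematically reachable limits can be illustrated with a minimal sketch. All names and numeric limits below are illustrative assumptions, not the paper's actual implementation: a hypothetical RL policy outputs a correction `(dv, dw)` to the user's linear/angular velocity command, and the result is clamped to a dynamic window around the current velocity, in the spirit of dynamic window-based local planning.

```python
import math

def dynamic_window(v, w, dt=0.1, a_max=0.5, alpha_max=1.0):
    """Reachable (v, w) intervals within one control step, given
    assumed linear/angular acceleration limits (dynamic window)."""
    return ((v - a_max * dt, v + a_max * dt),
            (w - alpha_max * dt, w + alpha_max * dt))

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def corrected_command(user_v, user_w, correction, cur_v, cur_w):
    """Add a learned correction (hypothetically produced by the RL
    policy) to the user's command, then clamp the corrected command
    to the dynamic window around the current velocity."""
    dv, dw = correction
    (v_lo, v_hi), (w_lo, w_hi) = dynamic_window(cur_v, cur_w)
    return (clamp(user_v + dv, v_lo, v_hi),
            clamp(user_w + dw, w_lo, w_hi))

# Example: the user pushes forward (0.4 m/s) while the policy nudges
# the command; the dynamic window limits how fast velocity may change.
v_cmd, w_cmd = corrected_command(0.4, 0.0, (0.2, 0.1), cur_v=0.3, cur_w=0.0)
```

In this sketch the RL policy's role is reduced to the `correction` tuple; in the paper that correction would depend on the sensed environment and an estimate of the user's intent.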