Abstract
The goal of our work is to acquire an internal model through a robot's own experience. The internal model converts bidirectionally between motor commands and the movement of the body (e.g. the hand) in view. Unlike other works, which assume that the robot's body has already been extracted from its view, we assume that external moving objects also appear in its view. We introduce predictability as a measure to segregate such objects from the robot's body: the robot's body is predictable, whereas external moving objects are not. Prediction is conducted using a neuro-dynamical system called the multiple timescales recurrent neural network (MTRNN). The predicted motion is compared with the actual motion to distinguish the robot's body from other objects. For evaluation, we conducted an experiment in which the robot moved its hand while moving objects were in view. The results showed that the prediction of the robot's hand was, on average, 3.86 times as accurate as that of the other objects. These results demonstrate the effectiveness of using predictability as a measure to acquire an internal model in an environment where both the robot's body and other moving objects are in view.
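The following is a minimal sketch of the predictability criterion described above, not the paper's implementation: it assumes the forward model's predictions (e.g. from an MTRNN) and the observed trajectories are already available as arrays per tracked region, and the function names and data layout are hypothetical. The region whose motion the model predicts best is labelled as the robot's own body.

```python
import numpy as np

def prediction_error(predicted, observed):
    """Mean Euclidean distance between predicted and observed
    2-D positions over a trajectory of shape (T, 2)."""
    return float(np.mean(np.linalg.norm(predicted - observed, axis=1)))

def identify_own_body(predictions, observations):
    """Label tracked regions by predictability: the region with the
    smallest prediction error is taken to be the robot's body.

    predictions, observations: dicts mapping region id -> (T, 2) array.
    Returns the body's region id and the per-region errors.
    """
    errors = {rid: prediction_error(predictions[rid], observations[rid])
              for rid in predictions}
    body_id = min(errors, key=errors.get)  # most predictable region
    return body_id, errors

# Hypothetical usage: two tracked regions, one driven by the motor
# commands (the hand) and one moving independently of them.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)[:, None]
hand_true = np.hstack([t, t])                        # hand follows commands
hand_pred = hand_true + rng.normal(0.0, 0.01, hand_true.shape)
obj_true = np.hstack([np.sin(6 * t), np.cos(6 * t)])  # external object
obj_pred = hand_true                                   # model cannot predict it
body, errs = identify_own_body(
    {"hand": hand_pred, "object": obj_pred},
    {"hand": hand_true, "object": obj_true})
print(body, errs)  # -> "hand", with a much smaller error than "object"
```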