Abstract

Learning behaviors in real environments is an important capability for real robots to acquire. Many researchers in machine learning have studied behavior-based learning tasks, but they usually report computer simulations whose sensory models are too idealized to hold in real robot environments. We describe how a real robot learns to shoot a ball into a goal by vision-based reinforcement learning. A visual sensor is well suited to such a goal-directed task because it can capture images of distant goals. We divide the task into two separate behaviors: ball approaching and shooting. Using only the images captured by the camera mounted on the robot, we construct the state spaces for both behaviors; together with a constructed action space, this allows a reinforcement learning method to be applied. We use Q-learning, a widely used reinforcement learning algorithm. Promising results obtained with the Khepera miniature robot are presented.

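The abstract does not give the exact state and action discretization, so the following is only a minimal sketch of tabular Q-learning of the kind referred to above; the action names, parameter values, and image-derived states are assumptions, not the paper's actual design.

import random
from collections import defaultdict

ALPHA = 0.1      # learning rate (assumed value)
GAMMA = 0.9      # discount factor (assumed value)
EPSILON = 0.1    # exploration rate for epsilon-greedy selection
ACTIONS = ["forward", "left", "right"]   # placeholder action set

# Q-table mapping (state, action) pairs to estimated values, initialized to 0.
Q = defaultdict(float)

def choose_action(state):
    # Epsilon-greedy: explore with probability EPSILON, otherwise exploit.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # One-step Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

In the setting described above, the state would be a discretized description of the ball and goal positions extracted from the camera image, and the reward would be given when the ball enters the goal.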