Abstract

Obstacle avoidance, guiding a mobile robot from a start location to a desired target, is one of the most active research topics in mobile robotics. However, few works to date address mobile robots operating in dynamic, continuously changing environments, so this issue remains a research challenge. Traditional obstacle-avoidance algorithms for dynamic, complex environments have many drawbacks. Q-learning, a type of reinforcement learning, has been applied successfully in computer games, but it is still rarely used in real-world applications. This research presents an effective method for real-time dynamic obstacle avoidance based on Q-learning, demonstrated in the real world with a three-wheeled mobile robot. The positions of the obstacles, both static and dynamic, and of the mobile robot are recognized by a fixed camera installed above the workspace. The input to the robot is the 2D data from the camera; the output is an action for the robot (linear and angular velocities). First, the Q-learning algorithm is trained in simulation; the resulting Q-table is then transferred to the real mobile robot to perform the task in the real scene. The results are compared with an intelligent control method for both static and dynamic obstacle cases. The experiments show that, after training in dynamic environments and testing in a new environment, the mobile robot reaches the target position successfully and performs better than a fuzzy controller.
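The core mechanism the abstract relies on, tabular Q-learning with epsilon-greedy exploration, can be sketched as follows. This is a minimal illustration under assumed simplifications: the state encoding (e.g. discretized robot cell plus nearest-obstacle direction from the overhead camera) and the action set of motion commands are hypothetical placeholders, not the paper's actual discretization.

```python
import random

# Hypothetical discrete action set standing in for (linear, angular) velocity commands.
ACTIONS = ["forward", "turn_left", "turn_right"]

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict mapping (state, action) pairs to values; missing entries are 0.
    """
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_action(Q, state, epsilon=0.1):
    """Epsilon-greedy selection over the Q-table: explore with probability
    epsilon, otherwise take the highest-valued action for this state."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
```

After training in simulation, the learned `Q` dict plays the role of the Q-table transferred to the real robot: at each step the camera-derived state is looked up and the greedy action (epsilon near 0) is executed.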
