Abstract
In recent years, many artificial intelligence applications for smart vehicles have reached practical use, and enabling an unmanned ground vehicle (UGV) to move autonomously has become a critical topic. Hence, this work presents a novel method for UGV path planning and obstacle avoidance based on the deep deterministic policy gradient (DDPG) approach. Specifically, a lidar sensor mounted on the vehicle measures the distances to surrounding obstacles, while the odometer records the distance traveled so that the current position can be estimated. These sensed data serve as the training input for the DDPG procedure, and several experiments are performed in different settings using the Robot Operating System (ROS) and the Gazebo simulator with a real robot model, TurtleBot3, to provide a comprehensive discussion. The simulation results show that, with the presented design and reward architecture, the DDPG method outperforms the classic deep Q-network (DQN) method, e.g., by taking fewer steps to reach the goal and requiring less training time to converge to the smallest number of steps for reaching the goal.
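To make the training setup concrete, the sketch below shows a generic DDPG actor-critic update in PyTorch. It is not the authors' implementation; the state layout (24 lidar ranges plus 4 odometry-derived features such as distance and heading to the goal), the 2-D action (linear and angular velocity), and all hyperparameters are illustrative assumptions.

```python
# Minimal DDPG sketch (illustrative, not the paper's code).
# Assumed state: 24 lidar beams + 4 odometry-derived features.
# Assumed action: [linear velocity, angular velocity] in [-1, 1].
import torch
import torch.nn as nn

STATE_DIM = 24 + 4   # lidar ranges + odometry features (assumption)
ACTION_DIM = 2       # velocity commands (assumption)

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),  # bound actions to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def ddpg_update(batch, actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    """One DDPG update from a replay-buffer batch of transitions."""
    state, action, reward, next_state, done = batch

    # Critic: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        target_q = reward + gamma * (1 - done) * target_critic(
            next_state, target_actor(next_state))
    critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the deterministic policy gradient.
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft-update the target networks.
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```

In a ROS/Gazebo loop, each step would read a lidar scan and odometry, form the state vector, publish the actor's output as a velocity command, and store the resulting transition in a replay buffer before calling an update such as the one above; the reward design (e.g., penalizing proximity to obstacles and rewarding progress toward the goal) is what the paper's architecture specifies.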