Abstract
Motion planning is an essential capability for a mobile robot to move autonomously toward a destination while avoiding obstacles. We have previously proposed motion planners based on deep neural networks (DNNs) for a robot equipped with a 2D LiDAR, using a multilayer perceptron (MLP) as the DNN. In this paper, a convolutional neural network (CNN) is also used as the DNN. The policies of these so-called end-to-end motion planners, represented by the MLP and the CNN, are trained through imitation learning. However, imitation learning can limit the generalization ability of the motion planners, making it difficult for a robot to plan suitable motions in unknown environments. To address this challenge, we introduce an auxiliary task into the output layer in addition to the main task of determining the linear and angular velocities as the motion output. As the auxiliary task, the motion planners estimate the destination angle. Through multi-task learning, the auxiliary task improves the accuracy of the main task. In navigation experiments, we show that the MLP is more effective than the CNN at improving the generalization ability of the motion planners. Finally, a robot driven by the MLP-based motion planner successfully moves toward its destination while avoiding obstacles, even in an unknown environment.
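The abstract describes a multi-task output layer: a shared network predicts the linear and angular velocities (main task) together with the destination angle (auxiliary task). The sketch below illustrates one plausible way to structure such a planner; it is not the paper's implementation, and the input dimensions, layer sizes, loss weighting, and the idea of feeding the relative goal alongside the LiDAR scan are all assumptions made for illustration.

```python
# Minimal sketch (assumed architecture, not the authors' code) of a multi-task
# MLP motion planner: a shared trunk over a 2D LiDAR scan and a relative-goal
# vector, a main head for (linear, angular) velocity, and an auxiliary head
# for the destination angle, trained by imitation on expert labels.
import torch
import torch.nn as nn

class MultiTaskMLPPlanner(nn.Module):
    def __init__(self, n_scan=360, n_goal=2, hidden=256):
        super().__init__()
        # Shared feature extractor over scan + goal features
        self.trunk = nn.Sequential(
            nn.Linear(n_scan + n_goal, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Main task: linear velocity v and angular velocity w
        self.motion_head = nn.Linear(hidden, 2)
        # Auxiliary task: estimated destination angle (scalar)
        self.angle_head = nn.Linear(hidden, 1)

    def forward(self, scan, goal):
        h = self.trunk(torch.cat([scan, goal], dim=-1))
        return self.motion_head(h), self.angle_head(h)

# Imitation-learning style update: regress both heads toward expert labels.
model = MultiTaskMLPPlanner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

scan = torch.rand(8, 360)        # batch of LiDAR range scans (illustrative data)
goal = torch.rand(8, 2)          # relative destination, e.g. (distance, bearing)
expert_vw = torch.rand(8, 2)     # expert linear/angular velocities
expert_angle = torch.rand(8, 1)  # destination angle labels

pred_vw, pred_angle = model(scan, goal)
loss = nn.functional.mse_loss(pred_vw, expert_vw) \
     + 0.5 * nn.functional.mse_loss(pred_angle, expert_angle)  # weight is illustrative
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The auxiliary angle head shares the trunk with the motion head, so its gradient acts as an additional training signal for the shared features, which is the mechanism by which the abstract's multi-task learning is expected to improve the main task.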