Autonomous mobile robot navigation in real, unmodified outdoor areas frequented by people going about their business, children playing, fast-moving bicycles, and even other robots remains a difficult challenge. For eleven years, the Tsukuba Challenge Real World Robot Challenge (RWRC) has brought together robots, researchers, companies, government, and ordinary citizens in the same outdoor space to push forward the limits of autonomous mobile robots. For our participation in Tsukuba Challenge 2017, our team proposed to study the problem of sensors-to-actuators navigation (also called End-to-End navigation), that is, having the robot navigate toward the destination along a complex path, not only moving straight but also turning at intersections. End-to-End (E2E) navigation was implemented using a convolutional neural network (CNN): the robot learns how to go straight, turn left, and turn right from camera images and trajectory data. E2E network training and evaluation were performed at Nagoya University, under outdoor conditions similar to those of Tsukuba Challenge 2017 (TC2017). Even though the E2E network was trained in a different environment and under different conditions, the robot successfully followed the designated trajectory on the TC2017 course. Learning how to follow the road regardless of the environment is one of the key attributes of E2E-based navigation. However, our E2E system does not perform obstacle avoidance and can be affected by illumination and seasonal changes. Therefore, to improve safety and add fault-tolerance measures, we developed an E2E navigation approach with a model-based system as backup. The model-based system is based on our open-source autonomous vehicle software adapted for use on a mobile robot. In this work we describe our approach, implementation, experiences, and main contributions.
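The abstract does not specify the network architecture, so the following is only a minimal illustrative sketch of the kind of E2E mapping described: a small CNN that classifies a forward-facing camera frame into one of three navigation commands (straight, left, right). All layer sizes, input resolution, and names here are assumptions, not the authors' actual model.

```python
# Hypothetical sketch of a three-command E2E classifier (PyTorch).
# Assumed, not from the paper: architecture, layer sizes, 120x160 input.
import torch
import torch.nn as nn

class E2ECommandNet(nn.Module):
    """Maps one RGB camera frame to logits over {straight, left, right}."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size features for any input
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 3),  # logits: 0=straight, 1=left, 2=right
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: one camera frame (batch of 1); training would pair such frames
# with command labels derived from the recorded trajectory data.
frame = torch.randn(1, 3, 120, 160)
command = E2ECommandNet()(frame).argmax(dim=1)  # 0=straight, 1=left, 2=right
```

In such a setup, the command labels for supervised training would come from the recorded trajectories (e.g., labeling frames before an intersection with the turn that was taken), which matches the abstract's description of learning from camera images and trajectory data.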