Abstract. Autonomous navigation for outdoor mobile robots involves four modules: perception, localization, planning, and control. The perception module uses sensors such as cameras and radars, which operate on different physical principles, to analyze the robot's surroundings in real time. The localization module estimates the robot's pose in real time using GPS (Global Positioning System), an IMU (Inertial Measurement Unit), and prior maps. The planning module computes an optimal path based on the outputs of the first two modules, and the control module drives the robot's chassis along that path. For localization, the accuracy of GPS depends heavily on weather conditions and signal reception; even when GPS is fused with IMU measurements, the resulting accuracy still cannot meet the needs of robot navigation. Relocalization against a prior map can compensate for this deficiency, and the planning module likewise requires a prior map for path planning. However, if the prior map occupies a large amount of storage, it becomes difficult to use, maintain, and update. It is therefore important to research methods for building lightweight navigation maps. This paper proposes an automatic method for constructing lightweight navigation maps that combines cameras and LiDAR (Light Detection and Ranging). The maps contain the semantic information needed for outdoor navigation and positioning: pole-like objects and traffic signs for longitudinal positioning of the robot, and lane-line elements for its lateral positioning. The method automatically generates robot navigation maps in Lanelet2 format, providing support for the subsequent localization and path-planning modules.
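To make the output format concrete, the sketch below uses the open-source Lanelet2 Python API to assemble and serialize a toy map containing one lanelet (left and right lane-line boundaries, usable for lateral positioning) and one pole-like landmark (usable for longitudinal positioning). This illustrates only the Lanelet2 map format, not the authors' mapping pipeline; all coordinates, tag values, and the map origin are illustrative assumptions.

```python
# Minimal sketch of a Lanelet2 navigation map with semantic elements:
# lane lines (lateral positioning) and a pole landmark (longitudinal
# positioning). Coordinates and tags are assumptions, not from the paper.
import lanelet2
from lanelet2.core import getId, Lanelet, LaneletMap, LineString3d, Point3d
from lanelet2.io import Origin
from lanelet2.projection import UtmProjector

def make_line(coords, attrs):
    """Build a tagged LineString3d from (x, y, z) tuples in map coordinates."""
    ls = LineString3d(getId(), [Point3d(getId(), x, y, z) for x, y, z in coords])
    for key, value in attrs.items():
        ls.attributes[key] = value
    return ls

# Left and right lane boundaries of a 10 m straight lane segment.
left = make_line([(0, 2, 0), (10, 2, 0)], {"type": "line_thin", "subtype": "dashed"})
right = make_line([(0, -2, 0), (10, -2, 0)], {"type": "line_thin", "subtype": "solid"})
lane = Lanelet(getId(), left, right)

# A pole-like object stored as a short vertical linestring landmark.
pole = make_line([(5, 3, 0), (5, 3, 4)], {"type": "pole"})

lanelet_map = LaneletMap()
lanelet_map.add(lane)
lanelet_map.add(pole)

# Serialize to the OSM-XML file that Lanelet2 maps use, projecting map
# coordinates with a UTM projector around an assumed geographic origin.
projector = UtmProjector(Origin(49.0, 8.4))
lanelet2.io.write("navigation_map.osm", lanelet_map, projector)
```

The resulting `navigation_map.osm` stores only sparse semantic primitives (points, tagged linestrings, and lanelet relations) rather than dense point clouds, which is what keeps such maps lightweight to store, maintain, and update.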