Abstract

Accurate and robust positioning and mapping are core capabilities of autonomous mobile robots, and the ability to analyse and understand scenes is an important measure of their intelligence. In outdoor environments, most robots rely on GPS for positioning; when the signal is weak, positioning errors corrupt the mapping results and make semantic map construction less robust. This research designs a semantic map construction system for large outdoor scenes that does not rely on GPS signals. It proposes a feature extraction scheme based on the sampling characteristics of the Livox‐AVIA solid‐state LiDAR, together with a factor graph optimisation model that combines frame poses with inertial measurement unit (IMU) pre‐integrated poses, using a sliding window to fuse solid‐state LiDAR and IMU data. The laser‐inertial odometry is then fused with camera object detection results, and the point cloud is semantically segmented with reference to closest‐point distance and curvature, realising the construction of a 3D semantic map of outdoor scenes. Experiments verify that the laser‐inertial odometry based on factor graph optimisation achieves better positioning accuracy and lower overall cumulative error at turns, and that the 3D semantic map obtained on this basis performs well.
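The abstract's central mechanism, jointly optimising LiDAR frame poses and IMU pre-integrated poses as factors over a sliding window, can be illustrated with a small factor graph. The following is a minimal sketch, assuming GTSAM's Python bindings; the relative-pose measurements, noise sigmas, and window length are placeholder values, and the simplified between-pose factor stands in for a full IMU pre-integration factor. It is not the paper's implementation.

```python
# Minimal sketch: fusing LiDAR scan-matching poses with IMU-predicted
# relative poses in one factor graph (GTSAM Python bindings assumed).
# All measurements and noise sigmas below are illustrative placeholders.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X  # X(i): pose of LiDAR keyframe i

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Anchor the first keyframe so the problem is well posed.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
initial.insert(X(0), gtsam.Pose3())

# Placeholder relative motions for a short sliding window of keyframes:
# one constraint from LiDAR scan registration, one from IMU pre-integration.
lidar_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))
imu_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.10] * 6))
for i in range(3):
    delta = gtsam.Pose3(gtsam.Rot3.Yaw(0.1), np.array([1.0, 0.0, 0.0]))
    # Each keyframe pair is constrained by both modalities; the optimiser
    # balances them according to their noise models.
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), delta, lidar_noise))
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), delta, imu_noise))
    initial.insert(X(i + 1), gtsam.Pose3())

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(3)))  # optimised pose of the newest keyframe
```

In a sliding-window arrangement such as the one the abstract describes, old keyframes would be marginalised out as new ones arrive, keeping the graph size bounded while the two odometry sources correct each other, which is what reduces the accumulated error at turns.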
