Abstract
Simultaneous Localization and Mapping (SLAM) is the process by which a mobile robot carrying specific sensors builds a map of the environment and simultaneously uses this map to estimate its pose. SLAM has proven its value and remains a hot research topic. However, challenges still exist: when the mobile robot stops during motion while a large number of feature points in the environment move slightly, the feature matching process cannot eliminate the non-stationary feature point pairs. The introduction of a large number of outliers (non-stationary feature point pairs) seriously affects the observation process of SLAM. It directly leads to errors in the estimated pose of the mobile robot and in the estimated positions of the 3D features, and further causes keyframe trajectory drift. If there were a mechanism allowing the robot to accurately detect that it is in a stopped state, the pose and map points could be locked and the state variables optimized so that the system continues operating correctly. Therefore, detecting the stopped state of the mobile robot is significant for SLAM. In this manuscript, an improved phase correlation method is proposed to solve the problem of stop detection for an autonomous driving vehicle in a dynamic street environment. Experiments reveal that stop detection brings a significant performance improvement to a state-of-the-art visual SLAM system, and that the improved phase correlation method achieves higher stop detection accuracy than conventional phase correlation in various scenarios.
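To make the idea concrete, the sketch below shows how conventional phase correlation between two consecutive grayscale frames can be used as a stop test: the normalized cross-power spectrum is inverted, and a sharp correlation peak at (near-)zero shift suggests the camera has not moved. This is a minimal illustration of the baseline technique the abstract refers to, not the paper's improved variant; the function names and thresholds are assumptions chosen for the example.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b, eps=1e-9):
    """Estimate the translational shift between two grayscale frames
    via standard phase correlation (illustrative helper)."""
    h, w = img_a.shape
    # Window the images to reduce boundary effects in the FFT.
    win = np.outer(np.hanning(h), np.hanning(w))
    F_a = np.fft.fft2(img_a * win)
    F_b = np.fft.fft2(img_b * win)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = F_a * np.conj(F_b)
    cross /= (np.abs(cross) + eps)
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into signed shifts (dx, dy).
    dy = peak[0] if peak[0] <= h // 2 else peak[0] - h
    dx = peak[1] if peak[1] <= w // 2 else peak[1] - w
    return dx, dy, corr[peak]

def is_stopped(img_a, img_b, shift_tol=0.5, peak_thresh=0.1):
    """Declare a stop when the estimated shift is ~zero and the peak is sharp.
    The thresholds here are illustrative assumptions, not values from the paper."""
    dx, dy, peak = phase_correlation_shift(img_a, img_b)
    return abs(dx) <= shift_tol and abs(dy) <= shift_tol and peak >= peak_thresh
```

As the abstract notes, this plain formulation breaks down in dynamic street scenes, which is the motivation for the improved method.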
Highlights
Simultaneous Localization and Mapping (SLAM) technology, as the main ego-motion estimation algorithm for autonomous driving, has attracted the attention of major research institutions and industries around the world [1]–[3]
In order to rule out the impact of these dynamic correspondences on state estimation when the mobile robot is in the stopped state, we address this problem from a new perspective
The texture-based method improves accuracy significantly compared to the edge-based method in various scenes, but its computation time is very high and it cannot run in real time
Summary
Simultaneous Localization and Mapping (SLAM) technology, as the main ego-motion estimation algorithm for autonomous driving, has attracted the attention of major research institutions and industries around the world [1]–[3]. The existing phase correlation algorithms cannot be directly used for stop detection. To resolve these problems, we propose using the vanishing point (VP) to segment the image into three sub-images. Different projective transformations are applied to rectify each local sub-image, and the result is three new synthesized images in which the objects in each plane appear with their correct geometric shapes. For vanishing point detection, it is assumed that the moving direction of the vehicle is parallel to the lane markings and road boundaries, and that the optical flow lines converge to the VP. Regardless of the direction of the optical flow vectors, we treat them as line segments that vote for the VP, denoted as {o1, o2, · · · , oM } and shown in Fig. 2 (e). The extracted parts are shown in the green trapezoidal regions of Fig. 3 (b)
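The summary describes estimating the VP from optical flow lines that all converge to it, with each flow vector voting regardless of its direction. A common way to realize such direction-agnostic voting is a least-squares intersection of the flow lines, sketched below. This is a generic illustration of the idea, not necessarily the paper's exact voting scheme; the function name and interface are assumptions made for the example.

```python
import numpy as np

def vanishing_point_from_flow(points, flows, eps=1e-9):
    """Least-squares intersection of optical-flow line segments.

    points : (M, 2) array of feature positions in pixels
    flows  : (M, 2) array of optical-flow vectors at those positions

    Each flow vector defines a line through its feature point; the returned
    point minimizes the sum of squared perpendicular distances to all lines.
    The sign of the flow does not matter, so each segment votes regardless
    of its direction, as described in the summary.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, f in zip(np.asarray(points, float), np.asarray(flows, float)):
        norm = np.linalg.norm(f)
        if norm < eps:
            continue  # near-zero flow defines no usable line
        d = f / norm
        P = np.eye(2) - np.outer(d, d)  # projector onto the line's normal
        A += P
        b += P @ p
    return np.linalg.solve(A, b)  # VP in pixel coordinates
```

With the VP in hand, the image can then be split into sub-images around it and each region rectified with its own projective transformation, as the summary outlines.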