Abstract
The main task when developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve this goal, the ability of the robot to localize itself is crucial. In outdoor, namely agricultural, environments this task becomes a real challenge, because odometry is not always usable and global navigation satellite system (GNSS) signals can be blocked or significantly degraded. To address this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphics processing unit (GPU). This solution, named FAST-FUSION, offers three core contributions. The first is an extension of a state-of-the-art monocular visual odometry method (Libviso2) to work with omnidirectional cameras and a single-axis gyroscope, increasing the accuracy of the system. The second is an algorithm that uses low-cost LIDAR data to estimate the motion scale and overcome the intrinsic limitation of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses the Raspberry Pi GPU to improve the runtime performance of the visual odometry on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in both time performance and motion estimation accuracy.
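The abstract only names the LIDAR-based scale recovery without describing it. As a rough illustration of the underlying idea, and not FAST-FUSION's actual algorithm, the sketch below rescales a monocular VO translation, which is known only up to scale, using a metric displacement assumed to come from matching consecutive LIDAR scans. All names (`recover_scale`, `lidar_disp`) are hypothetical.

```python
import numpy as np

def recover_scale(t_vo, lidar_disp):
    """Return the metric scale factor for an up-to-scale monocular VO translation.

    t_vo       : 3-vector translation from the VO pipeline (arbitrary scale).
    lidar_disp : metric displacement (metres) between the same two poses,
                 assumed here to come from matching consecutive LIDAR scans.
    """
    vo_norm = np.linalg.norm(t_vo)
    if vo_norm < 1e-9:        # robot is (visually) static: keep the previous scale
        return 1.0
    return lidar_disp / vo_norm

# Usage: the LIDAR says the robot advanced 0.42 m, so the unit-norm VO
# translation must be multiplied by 0.42 to become metric.
t_vo = np.array([0.8, 0.0, 0.6])           # ||t_vo|| = 1.0
s = recover_scale(t_vo, lidar_disp=0.42)
t_metric = s * t_vo
print(s, t_metric)                         # 0.42 [0.336 0.    0.252]
```

In practice the LIDAR displacement would itself need outlier rejection and filtering before being trusted as a scale reference; the sketch assumes a clean measurement.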
Highlights
The main task when developing a mobile robot is to achieve safe and robust navigation in a given environment
Although both approaches revealed error, FAST-FUSION estimated a scale factor much closer to the ground truth (Figure 10d), providing a more accurate estimation. We therefore present a version of a state-of-the-art monocular visual odometry (VO) method that works with omnidirectional cameras, is fused with additional sensors, and is optimized for a graphics processing unit (GPU), providing higher accuracy than the original one
We proposed a real-time embedded localization system for ground robots called FAST-FUSION
Summary
The main task when developing a mobile robot is to achieve safe and robust navigation in a given environment. The environment defines the type of navigation, i.e., a mobile robot can navigate indoors, outdoors in a structured environment, or outdoors in an unstructured environment [1]. To achieve such a goal, the ability of the robot to localize itself is crucial. The high density of moving objects, the terrain irregularities, and the illumination conditions present in an outdoor environment make it difficult to estimate the robot's motion and, consequently, to localize it [2]. In such conditions, sensors like inertial measurement units (IMU) or wheel encoders tend to present considerable errors. In confined environments such as tunnels, urban canyons, or steep terrain, GNSS signals are blocked or significantly degraded.
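The abstract states that a single-axis gyroscope is fused with the visual odometry to increase accuracy, but gives no details of the fusion. The sketch below shows one common way such a fusion can be done, a complementary-filter style blend of gyro dead reckoning with the VO yaw increment, purely as an illustration under stated assumptions: the function name, the filter form, and the value of `alpha` are hypothetical and not taken from the paper.

```python
def fuse_heading(prev_yaw, gyro_rate, dt, vo_delta_yaw, alpha=0.98):
    """Blend a single-axis gyroscope with the VO yaw increment (illustrative only).

    prev_yaw     : previously fused heading (rad)
    gyro_rate    : angular rate from the gyro (rad/s); low noise, but drifts
    dt           : time elapsed between the two camera frames (s)
    vo_delta_yaw : yaw change reported by the visual odometry for the same
                   interval (rad); noisier than the gyro over short intervals
    alpha        : weight placed on the gyro relative to the VO rotation
    """
    gyro_yaw = prev_yaw + gyro_rate * dt   # gyro dead reckoning
    vo_yaw = prev_yaw + vo_delta_yaw       # VO dead reckoning
    return alpha * gyro_yaw + (1.0 - alpha) * vo_yaw

# Example: over a 0.1 s interval the gyro reports 0.10 rad/s while the VO
# sees a 0.012 rad turn; the fused heading stays close to the gyro estimate.
yaw = fuse_heading(prev_yaw=0.0, gyro_rate=0.10, dt=0.1, vo_delta_yaw=0.012)
print(f"fused heading: {yaw:.5f} rad")     # 0.01004
```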