Abstract

Navigation has traditionally served to determine one's position, locate destinations, and chart a course toward them, providing accurate information about where specific places or objects are. Despite numerous advances in navigation technology, its potential for autonomy remains under discussion: a scenario in which navigation operates without human intervention, where a device knows its destination and plans the most efficient route to reach it. A central concept in this context is Visual Odometry (VO), which estimates the relative pose between successive image frames; mobile-robot positioning relies on the same principle. A significant challenge, however, is that VO accumulates error over time, known as drift. To counteract this, an Inertial Measurement Unit (IMU), consisting of accelerometers, gyroscopes, and magnetometers, is added; its measurements improve accuracy and help reduce noise. Integrating the IMU with VO yields Visual Inertial Odometry (VIO). Furthermore, fusing VIO with Global Positioning System (GPS) data through an Extended Kalman Filter (EKF) improves localization accuracy both locally and globally. In addition, stereo disparity estimation produces a depth map for obstacle detection, which is then converted into a 2D occupancy grid map. A waypoint follower steers the robot toward its goal, while local path-planning algorithms generate intermediate waypoints to avoid obstacles.
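
As a rough illustration of the GPS/VIO fusion step described above, the sketch below shows a minimal loosely coupled filter that propagates the pose with VIO motion increments and corrects it with absolute GPS fixes. The state layout, noise values, and class name are illustrative assumptions rather than the paper's implementation, and for clarity the motion and measurement models are kept linear, so the filter reduces to a standard Kalman update; a full EKF would linearize nonlinear models with Jacobians.

```python
# Minimal loosely coupled GPS/VIO fusion sketch (illustrative, not the paper's code).
# State: planar position [x, y]. VIO supplies a relative motion increment per frame,
# GPS supplies an occasional absolute position fix. Noise values are placeholders.
import numpy as np

class GpsVioFusion:
    def __init__(self):
        self.x = np.zeros(2)              # estimated position [x, y]
        self.P = np.eye(2) * 1.0          # state covariance
        self.Q = np.eye(2) * 0.05         # VIO (process) noise per step
        self.R = np.eye(2) * 2.0          # GPS (measurement) noise

    def predict(self, vio_delta):
        """Propagate the state with the relative motion reported by VIO."""
        self.x = self.x + vio_delta       # x_k = x_{k-1} + delta
        self.P = self.P + self.Q          # uncertainty grows with each step (drift)

    def update(self, gps_xy):
        """Correct the drifting VIO estimate with an absolute GPS fix."""
        H = np.eye(2)                        # GPS observes position directly
        y = gps_xy - H @ self.x              # innovation
        S = H @ self.P @ H.T + self.R        # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

# Example: VIO drifts with a small bias; periodic GPS fixes pull the estimate back.
ekf = GpsVioFusion()
for k in range(1, 11):
    ekf.predict(vio_delta=np.array([1.02, 0.01]))   # slightly biased odometry step
    if k % 5 == 0:                                  # GPS arrives less frequently
        ekf.update(gps_xy=np.array([float(k), 0.0]))
print("fused position:", ekf.x)
```

In this loosely coupled scheme the VIO output is treated as a black-box odometry source, which keeps the filter small; tightly coupled designs instead fuse raw visual and inertial measurements inside the filter at the cost of a larger state.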
