Abstract
Unmanned aerial vehicles (UAVs) have been successfully employed in a wide variety of applications, such as road surveying, precision agriculture, landslide monitoring, cultural heritage mapping, and pipeline monitoring. All of these applications require an accurate and stable navigation system. Most commercially available UAVs rely on the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS) to estimate position, velocity, and attitude. The small form factor of modern UAVs allows them to operate in challenging environments such as urban and natural canyons, where GNSS availability cannot be guaranteed; during GNSS signal outages, the navigation solution deteriorates rapidly because of the drift exhibited by the inertial navigation solution. Additional aiding sensors are therefore crucial to bound the errors that accumulate in the INS solution. Onboard cameras can provide useful cues to support the navigation solution during GNSS outage periods, and a variety of monocular visual odometry (VO) techniques based on photogrammetric and Structure from Motion (SfM) approaches have been proposed to assist the navigation estimation process. The main limitation of monocular VO is the loss of scale when neither an external measurement nor a priori knowledge of the surrounding environment is available; moreover, the camera pose estimated by VO is prone to drift over time. This paper introduces a novel approach for estimating the navigation states of a UAV by integrating the visual information obtained from a monocular camera with Inertial Measurement Unit (IMU) observations via an Extended Kalman Filter (EKF). Most current monocular VO algorithms rely on a calibrated camera model and apply conventional photogrammetric and SfM approaches; while these can estimate the relative rotation and translation by tracking image features and applying geometric constraints, they cannot recover the motion scale from the image features alone.
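The abstract does not spell out the filter design, so the following Python sketch is only a minimal 1-D illustration of the general idea it describes: a loosely coupled EKF in which metric IMU accelerations drive the prediction and an up-to-scale VO displacement serves as the measurement, making the unknown scale factor observable under motion. The state layout [p, v, lam], the noise values, and the helper names (predict, update_vo) are assumptions for illustration, not the paper's implementation.

    import numpy as np

    # State: [p, v, lam] = metric position, velocity, and the unknown VO
    # scale factor (metric displacement = lam * VO displacement).
    # All noise values below are illustrative assumptions.
    dt = 0.01                                # IMU sample interval (s)
    Q = np.diag([1e-6, 1e-4, 1e-8])          # process noise (assumed)
    R = np.array([[0.02 ** 2]])              # VO displacement noise (assumed)

    def predict(x, P, accel):
        """Propagate [p, v, lam] with one IMU acceleration sample."""
        F = np.array([[1.0, dt, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        x = F @ x + np.array([0.5 * dt**2, dt, 0.0]) * accel
        P = F @ P @ F.T + Q
        return x, P

    def update_vo(x, P, p_prev, d_vo):
        """EKF update with an up-to-scale VO displacement.

        Measurement model: d_vo = (p - p_prev) / lam + noise, linearized
        about the current estimate. p_prev is treated as known here for
        brevity; a full filter would clone the past pose into the state.
        """
        p, _, lam = x
        h = (p - p_prev) / lam                           # predicted measurement
        H = np.array([[1.0 / lam, 0.0, -(p - p_prev) / lam**2]])
        y = np.array([d_vo - h])                         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(3) - K @ H) @ P
        return x, P

    # Toy run: constant acceleration, true scale 2.0; lam converges toward it.
    rng = np.random.default_rng(0)
    x = np.array([0.0, 0.0, 1.0])                        # initial scale guess = 1
    P = np.diag([0.01, 0.01, 1.0])
    p_true, v_true, lam_true = 0.0, 0.0, 2.0
    p_prev_est, p_prev_true = x[0], p_true
    for k in range(2000):
        a = 1.0                                          # true metric acceleration
        p_true += v_true * dt + 0.5 * a * dt**2
        v_true += a * dt
        x, P = predict(x, P, a + rng.normal(0.0, 0.01))
        if (k + 1) % 10 == 0:                            # VO at 1/10 the IMU rate
            d_vo = (p_true - p_prev_true) / lam_true + rng.normal(0.0, 0.02)
            x, P = update_vo(x, P, p_prev_est, d_vo)
            p_prev_est, p_prev_true = x[0], p_true
    print("estimated scale:", x[2])

The sketch captures why monocular VO alone cannot fix the scale: the measurement only constrains the ratio of displacement to lam, and it is the metric acceleration from the IMU that anchors the displacement and renders lam observable, provided the trajectory contains sufficient acceleration excitation.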