Localization is a critical component of a drone navigation system. The Global Positioning System (GPS), or more generally a Global Navigation Satellite System (GNSS), is the primary positioning sensor used on drones. However, under certain conditions, such as signal jamming or enclosed environments, GNSS may not function reliably. This paper presents an approach that addresses this issue by combining GNSS data with Visual Odometry (VO) through Machine Learning (ML) methods. The process consists of three main stages. First, speed and orientation are estimated using VO. Second, image features are separated into left and right groups to produce a more stable and robust estimate of speed and rotation. Third, the speed and orientation estimates are refined by integrating GNSS data through ML-based data fusion. The proposed method aims to improve drone localization accuracy despite disrupted or unavailable GNSS signals. The results indicate that the proposed method significantly reduces the Absolute Translation Error (ATE) compared to using VO or GNSS alone, achieving an average ATE of 4.38 m and an average orientation error of 8.26°. This shows that the data-fusion approach provides a significant improvement in drone localization accuracy, making it reliable in operational scenarios with limited GNSS signals.
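The evaluation metric reported above, Absolute Translation Error (ATE), is commonly computed as the RMSE of per-pose translation errors between an estimated trajectory and ground truth. The paper's exact evaluation protocol is not given in the abstract, so the following is a minimal sketch of that standard metric (without trajectory alignment); the trajectories used here are illustrative, not from the paper:

```python
import numpy as np

def absolute_translation_error(est, gt):
    """Minimal ATE: RMSE of per-pose translation errors.

    est, gt: (N, D) arrays of estimated and ground-truth positions.
    Note: a full evaluation would usually align the trajectories
    (e.g. with a rigid-body fit) before computing this error.
    """
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)  # per-pose distance
    return float(np.sqrt(np.mean(errors ** 2)))

# Illustrative example: a straight ground-truth path and a noisy estimate.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
est = np.array([[0.0, 0.1], [1.0, -0.1], [2.0, 0.1], [3.0, -0.1]])
print(absolute_translation_error(est, gt))
```

A lower ATE indicates a trajectory estimate closer to ground truth, which is how the fused GNSS+VO result is compared against VO-only and GNSS-only baselines.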