Abstract
An integrated vehicle navigation system based on the 3D Reduced Inertial Sensor System (3D-RISS) and Machine Learning Enhanced Visual Data (MLEVD) is proposed in this paper. This work enables smooth vehicle navigation in demanding conditions such as outdoor satellite-signal interference and indoor environments. First, a landmark is set up and both its size and position are accurately measured. Second, images containing the landmark are rapidly captured using machine learning. Third, template matching and an Extended Kalman Filter (EKF) are used to correct the errors of the Inertial Navigation System (INS), which employs the 3D-RISS to reduce overall cost while maintaining vehicular positioning accuracy. Finally, both outdoor and indoor experiments are conducted to verify the performance of the 3D-RISS/MLEVD integrated navigation technology. Results reveal that the proposed method effectively reduces the accumulated error of the INS over time while keeping the positioning error within a few meters.
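The template-matching step in the abstract locates the measured landmark within the captured image. The paper does not specify the exact matcher, so the following is a minimal sketch using zero-normalized cross-correlation (ZNCC); the function name and image sizes are illustrative assumptions.

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` via zero-normalized cross-correlation.

    This is an illustrative sketch of the template-matching step, not the
    paper's exact implementation. Returns the (row, col) of the
    best-matching top-left corner and its correlation score in [-1, 1].
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    # Slide the template over every valid position (brute-force search).
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Once the landmark's pixel location is known, its accurately measured size and position allow the vehicle's position to be inferred and fed to the EKF as a measurement.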
Highlights
Navigation technology is becoming increasingly important in many aspects of daily life. Positioning and navigation technologies for outdoor environments have developed dramatically; a typical example is the Global Positioning System (GPS) [1,2]
The outdoor experiment involves the integration of the inertial navigation with Machine Learning Enhanced Visual Data (MLEVD)
Because of the algorithm of the 3D Reduced Inertial Sensor System (3D-RISS), only 1 gyroscope and 2 accelerometers of the FFG-16 were used; the GNSS solution employs a SoC developed by Unicore Communications
Summary
Navigation technology is becoming increasingly important in many aspects of daily life. Several works serve robot navigation based on deep learning [17,18,19]; they apply neural networks to process raw sensor data for navigation and obstacle-avoidance tasks. Tai et al. [18] proposed an indoor obstacle-avoidance solution based on a deep network and sensor data; their robot's decisions show a high similarity with human decisions. Zhu et al. [19] presented an indoor navigation method based on visual input; their robot can successfully search for a given target and learn the relationships between actions and the environment. The EKF method is utilized to fuse the location signals from the neural-network model and the inertial sensors to calculate the current position of the vehicle. Results show that the proposed approach effectively reduces the accumulated error of the INS over time while keeping the positioning error within a few meters for outdoor experiments and less than 1 m for indoor experiments
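The EKF fusion described above can be sketched as a predict/update cycle: the INS motion model propagates the state, and a landmark-derived visual position fix corrects it. With a linear constant-velocity model and a linear position measurement, the EKF reduces to the standard Kalman equations; the state layout, matrices, and noise values below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state x and covariance P with the (INS-driven) motion model F."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Fuse a visual position fix z (from the landmark) with the prediction."""
    y = z - H @ x                   # innovation: measurement minus prediction
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Illustrative setup: state [x, y, vx, vy], constant-velocity model, dt = 1 s.
F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
Q = np.eye(4) * 0.01                # assumed process noise
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])    # visual fix observes position only
R = np.eye(2) * 0.04                # assumed measurement noise
```

Each visual fix pulls the INS-predicted position toward the landmark-derived measurement and shrinks the position covariance, which is how the accumulated INS drift is bounded.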