Abstract

The navigation systems of autonomous aircraft rely on the readings provided by a suite of onboard sensors to estimate the aircraft state. In the case of fixed-wing vehicles, the sensor suite is usually composed of triads of accelerometers, gyroscopes, and magnetometers, a Global Navigation Satellite System (GNSS) receiver, and an air data system (Pitot tube, air vanes, thermometer, and barometer), and it is often complemented by one or more digital cameras. An accurate representation of the behavior and error sources of each of these sensors, together with the images generated by the cameras, is indispensable for the design, development, and testing of inertial, visual, or visual–inertial navigation algorithms. This article presents realistic and customizable models for each of these sensors; a ready-to-use C++ implementation is released as open-source code so non-experts in the field can easily generate realistic results. The pseudo-random models provide a time-stamped series of the errors generated by each sensor based on performance values and operating frequencies obtainable from the sensor's data sheets. If, in addition, the simulated true pose (position plus attitude) of the aircraft is provided, the camera model generates realistic images of the Earth's surface that resemble those taken with a real camera from the same pose.
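As a rough illustration of the kind of pseudo-random error model the abstract describes (a time-stamped error series driven by data-sheet performance values and the sensor operating frequency), the following C++ sketch models a single gyroscope axis with a run-to-run bias, an in-run bias drift, and white noise scaled by the angle random walk density. The class, parameter names, and numerical values are hypothetical and chosen only for illustration; they are not the paper's released API, which may be structured differently.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Hypothetical single-axis gyroscope error model (illustrative sketch only).
class GyroErrorModel {
public:
    GyroErrorModel(double freq_hz, double bias_sigma, double arw, double bias_rw)
        : freq_hz_(freq_hz), arw_(arw), bias_rw_(bias_rw),
          rng_(std::random_device{}()), gauss_(0.0, 1.0) {
        // Run-to-run bias: fixed for the whole run, redrawn each execution.
        bias_ = bias_sigma * gauss_(rng_);
    }

    // Error added to the true angular rate at one measurement epoch [rad/s].
    double sample() {
        bias_ += bias_rw_ * gauss_(rng_);                          // slow in-run drift
        double white = arw_ * std::sqrt(freq_hz_) * gauss_(rng_);  // white measurement noise
        return bias_ + white;
    }

    double period() const { return 1.0 / freq_hz_; }

private:
    double freq_hz_, arw_, bias_rw_, bias_;
    std::mt19937 rng_;
    std::normal_distribution<double> gauss_;
};

int main() {
    // Hypothetical data-sheet values: 100 Hz rate, 3e-4 rad/s run-to-run bias,
    // 5e-5 rad/s/sqrt(Hz) angle random walk, 1e-7 rad/s in-run bias step.
    GyroErrorModel gyro(100.0, 3e-4, 5e-5, 1e-7);
    for (int k = 0; k < 5; ++k) {
        std::printf("t = %.3f s  gyro error = %+.6e rad/s\n",
                    k * gyro.period(), gyro.sample());
    }
    return 0;
}
```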
