Abstract
An autonomous navigation scheme based on sequential images is presented for planetary landing in unknown environments. The lander is assumed to be equipped with only an inertial measurement unit (IMU) and a monocular camera. The emphasis of the paper is on the proposed method's ability to estimate the lander's states without any a priori knowledge of the environment or additional sensors. Assuming that the landing surface lies in the local level plane, an implicit measurement model is derived from observations of features with unknown three-dimensional positions tracked across sequential images. This measurement model is fused with IMU measurements using an extended Kalman filter (EKF). Finally, an observability analysis of the proposed navigation system yields a closed-form expression for the unobservable directions. Simulation results verify the observability analysis and show that all lander states can be estimated except the horizontal position and the global rotation about the gravity direction.
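The abstract describes fusing an implicit feature-based measurement model with IMU data in an extended Kalman filter. The following is a minimal, generic sketch of one EKF update step for an implicit constraint of the form h(x, z) = 0, where the residual is taken as 0 minus the constraint value at the current estimate. It is an illustration only, not the paper's actual formulation: the function name, state layout, and toy constraint are assumptions, and the real model would use the Jacobian of the feature-based constraint derived in the paper.

```python
import numpy as np

def ekf_implicit_update(x, P, h_val, H, R):
    """One EKF measurement update for an implicit constraint h(x, z) = 0.

    x     : (n,) state estimate
    P     : (n, n) state covariance
    h_val : (m,) constraint value evaluated at the current estimate
    H     : (m, n) Jacobian of h with respect to the state
    R     : (m, m) measurement noise covariance (projected through h)
    """
    # The constraint should equal zero, so the innovation is 0 - h(x).
    y = -h_val
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y                     # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x_new, P_new
```

As a toy check, applying this update with a two-dimensional state, the constraint h(x) = x[0] (so the update pulls the first state component toward zero), and a small noise covariance reduces the trace of P, as any consistent EKF update should.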