Abstract

While the Global Positioning System (GPS) is the most widely used sensor modality for aircraft navigation, the desire to operate in GPS-denied environments has motivated researchers to investigate other navigational sensor modalities. Owing to advances in computer vision and control theory, monocular camera systems have received growing interest as an alternative or collaborative sensor to GPS. Cameras can act as navigational sensors by detecting and tracking feature points in an image. One limiting factor of this method is the current inability to relate feature points as they enter and leave the camera field of view. This paper continues research efforts to provide a vision-based position estimation method for aircraft guidance. A recently developed estimation method is integrated with a new, nonlinear flight model of an aircraft. The vision-based estimation scheme provides input directly to the vehicle guidance system and autopilot.
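To make the feature-point mechanism mentioned above concrete, the following is a minimal sketch (not the paper's implementation) of detecting and tracking feature points across monocular video frames using OpenCV's corner detection and pyramidal Lucas-Kanade optical flow; the input file name is a hypothetical placeholder.

```python
# Minimal sketch, assuming OpenCV is available: detect and track feature
# points across frames, the kind of measurement a monocular camera can
# supply for vision-based navigation. Not the estimation method of the paper.
# "flight_video.mp4" is a hypothetical placeholder input.
import cv2

cap = cv2.VideoCapture("flight_video.mp4")
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Detect corner-like feature points in the first frame.
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the points into the new frame with pyramidal Lucas-Kanade flow.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                   prev_pts, None)
    tracked = next_pts[status.ravel() == 1]

    # Points leaving the field of view are simply lost here; relating new
    # points to old ones is the limitation the abstract highlights.
    prev_gray, prev_pts = gray, tracked.reshape(-1, 1, 2)

cap.release()
```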
