Abstract

Pose estimation for small unmanned aerial vehicles has improved considerably in recent years, leading to vehicles that use a suite of sensors to navigate and explore various environments. Cameras in particular have become popular due to their low weight and power consumption, as well as the large amount of data they capture. However, processing this data to extract useful information has proved challenging, as the pose estimation problem is inherently nonlinear and, depending on the cameras' field of view, potentially ill-posed. Results from the field of multi-camera egomotion estimation show that these issues can be reduced or eliminated by using multiple, appropriately positioned cameras. In this work, we build on these insights to develop a multi-camera visual pose estimator using ultra-wide-angle fisheye cameras, leading to a system with many advantages over traditional visual pose estimators. The system is tested in a variety of configurations and flight scenarios on an unprepared urban rooftop, including landings and takeoffs. To our knowledge, this is the first time a visual pose estimator has been shown to continuously track the pose of a small aerial vehicle throughout landing and subsequent takeoff maneuvers.
