Abstract

Current autonomous unmanned aerial systems (UASs) commonly use vision-based landing solutions that depend on fiducial markers to localize a static or mobile landing target relative to the UAS. This paper develops and demonstrates an alternative to fiducial markers that combines neural-network-based object detection with camera intrinsic properties to localize an unmanned ground vehicle (UGV) and enable autonomous landing. Implementing this visual approach is challenging given the limited compute power on board the UAS, but it is relevant for autonomous landings on targets to which a fiducial marker cannot practically be affixed a priori. The position estimate of the UGV is used to formulate a landing trajectory that is then input to the flight controller. The algorithms are tailored to low size, weight, and power constraints: all compute and sensing components together weigh less than 100 g. Landings were demonstrated both in simulation and in experiments on a UGV traveling in a straight line and while turning. Simulated landings succeeded at UGV speeds of up to 3.0 m/s, and experimental landings at speeds of up to 1.0 m/s.
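To illustrate the core idea of localizing a target from a detection and camera intrinsics alone, the sketch below back-projects the pixel center of a detected bounding box through the intrinsic matrix and intersects the resulting ray with the ground plane. This is a minimal illustration, not the paper's implementation: the intrinsic values, the nadir-pointing camera assumption, the known-altitude assumption, and the `localize_target` function name are all hypothetical.

```python
import numpy as np

# Hypothetical intrinsic matrix (focal lengths fx, fy and principal point
# cx, cy in pixels); in practice these come from camera calibration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def localize_target(u, v, altitude, K):
    """Back-project detected pixel (u, v) through the intrinsics K and
    intersect the ray with the ground plane, assuming a downward-facing
    camera at a known altitude above the target."""
    # Ray direction in the camera frame (normalized image coordinates).
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Scale the ray so its z-component equals the camera-to-target distance.
    return ray * (altitude / ray[2])

# Example: bounding-box center detected at pixel (400, 300), UAS at 5 m
# altitude; the result is the target's (x, y, z) offset in the camera
# frame, which a planner could use to build the landing trajectory.
offset = localize_target(400.0, 300.0, 5.0, K)
print(offset)
```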
