Abstract

A field-programmable gate array (FPGA) system implementation capable of being mounted onboard a micro aerial vehicle (MAV) (less than 5 pounds) that can perform the processing tasks necessary to identify and track a marked target landing site in real time is presented. This implementation was designed as an image processing subsystem, mounted on a MAV, that assists an autopilot system with vision-related tasks. This paper describes the FPGA vision system architecture and the algorithms implemented to segment and locate a colored cloth target that specifies the exact landing location. Once the target landing site is identified, its exact location is transmitted to the autopilot, which then makes the trajectory adjustments required to autonomously land the MAV on the target. Results of two flight test situations are presented. In the first situation, the MAV lands on a static target. The second situation involves a moving target, which in our tests was the back of a moving vehicle. This FPGA system is an application-specific configuration of the Helios robotic vision platform developed at Brigham Young University.
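The color-segmentation step mentioned above can be illustrated with a minimal software sketch: threshold pixels near a target color and report the centroid of the matching pixels as the landing-site location. The paper's actual pipeline runs in FPGA hardware; the RGB color model, tolerance value, and function below are illustrative assumptions, not the system's algorithm.

```python
# Hypothetical sketch of colored-target segmentation and localization.
# The real system implements an equivalent pipeline in FPGA hardware;
# the target color and tolerance here are made-up example values.

def segment_centroid(image, target=(255, 0, 0), tol=60):
    """Return the (row, col) centroid of pixels within `tol` of the
    `target` RGB color, or None if no pixel matches."""
    row_sum = col_sum = count = 0
    tr, tg, tb = target
    for r, line in enumerate(image):
        for c, (pr, pg, pb) in enumerate(line):
            # Simple per-channel box threshold around the target color.
            if abs(pr - tr) <= tol and abs(pg - tg) <= tol and abs(pb - tb) <= tol:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return row_sum / count, col_sum / count

# Tiny synthetic frame: two red pixels on a black background.
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
frame[1][2] = (250, 10, 10)
frame[2][2] = (240, 20, 5)
print(segment_centroid(frame))
```

In the flight system, a location like this centroid (converted to world coordinates) is what would be handed to the autopilot for trajectory adjustment.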
