Abstract
This paper presents a visual-based approach that allows an Unmanned Aerial Vehicle (UAV) to detect and track a cooperative flying vehicle autonomously using a monocular camera. The algorithms are based on template matching and morphological filtering, thus being able to operate within a wide range of relative distances (i.e., from a few meters up to several tens of meters), while ensuring robustness against variations of illumination conditions, target scale and background. Furthermore, the image processing chain takes full advantage of navigation hints (i.e., relative positioning and own-ship attitude estimates) to improve the computational efficiency and optimize the trade-off between correct detections, false alarms and missed detections. Clearly, the required exchange of information is enabled by the cooperative nature of the formation through a reliable inter-vehicle data-link. Performance assessment is carried out by exploiting flight data collected during an ad hoc experimental campaign. The proposed approach is a key building block of cooperative architectures designed to improve UAV navigation performance either under nominal GNSS coverage or in GNSS-challenging environments.
Highlights
Machine vision systems and algorithms represent an essential tool for several applications involving the use of Unmanned Aerial Vehicles (UAVs) [1,2]
R_c^n = R_b^n R_c^b, where R_b^n is the rotation matrix representing the attitude of the tracker UAV in the North-East-Down (NED) frame, while R_c^b is the rotation matrix representing the attitude of the Camera Reference Frame (CRF) with respect to the Body Reference Frame (BRF)
An original approach to detect and track a cooperative target UAV using a camera onboard a tracker UAV is presented in this paper
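The camera-to-NED attitude in the highlight above is obtained by chaining the body-to-NED and camera-to-body rotations. A minimal sketch of this composition, assuming a standard aerospace yaw-pitch-roll (ZYX) Euler convention and hypothetical attitude and camera-mounting angles (neither is specified in the paper):

```python
import numpy as np

# Elementary rotations about the x, y, and z axes
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def body_to_ned(roll, pitch, yaw):
    """R_b^n under the standard aerospace ZYX (yaw-pitch-roll) convention."""
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

# Hypothetical tracker attitude (from own-ship navigation) and camera mounting
R_nb = body_to_ned(np.radians(2.0), np.radians(-1.5), np.radians(45.0))
R_bc = rot_y(np.radians(10.0))  # e.g., camera pitched slightly off the body axis
R_nc = R_nb @ R_bc              # camera attitude in NED: R_c^n = R_b^n R_c^b
```

Since both factors are proper rotations, the product R_nc is itself orthonormal with unit determinant, so it can be applied directly to map camera-frame line-of-sight vectors into NED.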
Summary
Machine vision systems and algorithms represent an essential tool for several applications involving the use of Unmanned Aerial Vehicles (UAVs) [1,2]. The concept can be scaled to multiple UAVs; in that case, a centralized networking architecture is needed in which multiple target UAVs transmit their navigation data to the tracker UAV, and the algorithmic architecture presented in this paper must run on board the tracker for each target. Overall, in such scenarios, the main constraints of the proposed approach are the need to (1) ensure pixel-level estimation accuracy of the target line-of-sight, (2) optimize the trade-off between missed detections and false alarms, keeping the latter at a minimum, and (3) operate over a wide range of distances, in which the target UAV can occupy either a few pixels or much larger regions of interest in the focal plane, while remaining robust to abrupt changes in illumination and background conditions.
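As a concrete illustration of the template-matching detection step mentioned in the abstract, the following sketch performs exhaustive normalized cross-correlation over an image; the synthetic frame, the target pattern, and the matching threshold logic are illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation template matching.

    Returns the top-left (row, col) of the best-matching window and its
    NCC score in [-1, 1]. Flat (zero-variance) windows are skipped.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined, skip
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic example: a small patterned target embedded in an empty frame
pattern = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], dtype=float)
img = np.zeros((20, 20))
img[8:11, 5:8] = pattern
pos, score = ncc_match(img, pattern)  # pos == (8, 5), score ~ 1.0
```

In the paper's setting, the navigation hints (relative positioning and own-ship attitude) would restrict the search to a predicted region of interest rather than the full frame, which is what yields the computational savings described above.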