Abstract

In many applications of airborne visual techniques for unmanned aerial vehicles (UAVs), lightweight sensors and efficient visual positioning and tracking algorithms are essential in GNSS-denied environments. Meanwhile, many tasks require the ability to recognize, localize, avoid, or fly through dynamic obstacles. In this paper, for a small UAV equipped with a lightweight monocular sensor, a single-frame parallel-features positioning method (SPPM) is proposed and verified for a real-time dynamic target tracking and ingressing problem. The solution features systematic modeling of the geometric characteristics of moving targets and the introduction of numeric iteration algorithms to estimate the geometric center of moving targets. The geometric constraint relationships of the target feature points are modeled as nonlinear equations for scale estimation. Experiments show that the root mean square error percentage of static target tracking is less than 1.03% and that the root mean square error of dynamic target tracking is less than 7.92 cm. Comprehensive indoor flight experiments demonstrate the real-time convergence of the algorithm, the effectiveness of the solution in locating and tracking a moving target, and its strong robustness to measurement noise.
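
As a minimal illustrative sketch (not the authors' implementation), the scale-estimation idea can be posed as a nonlinear least-squares problem over per-feature depths: the example below assumes a rectangular target with known side lengths, hypothetical camera intrinsics, and hypothetical corner detections, and uses an iterative SciPy solver to recover the feature depths and the target's geometric center.

import numpy as np
from scipy.optimize import least_squares

# Assumed (hypothetical) target geometry: a rectangle with known side lengths.
w, h = 0.8, 0.5  # metres

def bearings_from_pixels(pixels, K):
    # Back-project pixel coordinates to unit bearing vectors (pinhole model).
    rays = np.linalg.inv(K) @ np.vstack([pixels.T, np.ones(len(pixels))])
    return (rays / np.linalg.norm(rays, axis=0)).T

def residuals(depths, bearings):
    # Distance constraints between adjacent corners and across one diagonal.
    P = depths[:, None] * bearings              # candidate 3D corner positions
    dist = lambda i, j: np.linalg.norm(P[i] - P[j])
    return np.array([dist(0, 1) - w, dist(1, 2) - h, dist(2, 3) - w,
                     dist(3, 0) - h, dist(0, 2) - np.hypot(w, h)])

K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])   # assumed intrinsics
pixels = np.array([[300, 200], [420, 205], [418, 290], [298, 286]], float)  # hypothetical corner detections
bearings = bearings_from_pixels(pixels, K)

# Iteratively solve the nonlinear distance constraints for the per-corner depths (scale).
sol = least_squares(residuals, x0=np.full(4, 2.0), args=(bearings,))
corners_cam = sol.x[:, None] * bearings
center_cam = corners_cam.mean(axis=0)           # estimated geometric centre in the camera frame
print("estimated target centre (camera frame):", center_cam)

With a positive initial depth guess the iteration converges in a handful of steps; the authors' actual constraint formulation and solver may differ from this sketch.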

Highlights

  • With the rapid development of unmanned aerial vehicle (UAV) system technologies and the increasing demand for UAVs to perform various aerial tasks [1,2,3,4], UAVs employ airborne vision sensors to achieve autonomous target tracking and positioning [5,6,7,8]

  • Visual sensor-based methods for UAV target tracking can generally be divided into three types

  • Experimental results show that the single-frame parallel-features positioning method (SPPM) is robust to 2D detection errors and detection noise; based on the SPPM, a 2D feature recognition algorithm for parallel feature extraction was designed (an illustrative sketch of parallel feature extraction is given below)
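
For illustration only, a parallel feature extraction step could be prototyped with off-the-shelf line detection and orientation grouping. The sketch below is hypothetical and is not the authors' 2D recognition algorithm: it uses OpenCV's Canny edge detector and probabilistic Hough transform, then groups segments whose orientations agree within a tolerance.

import cv2
import numpy as np

def extract_parallel_segments(gray, angle_tol_deg=5.0):
    # Detect line segments, then group segments whose orientations agree within
    # a tolerance; each group is treated as a set of (near-)parallel edges.
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=5)
    if segs is None:
        return []
    segs = segs.reshape(-1, 4)
    angles = np.degrees(np.arctan2(segs[:, 3] - segs[:, 1],
                                   segs[:, 2] - segs[:, 0])) % 180.0
    groups, used = [], np.zeros(len(segs), dtype=bool)
    for i in range(len(segs)):
        if used[i]:
            continue
        close = np.abs((angles - angles[i] + 90.0) % 180.0 - 90.0) < angle_tol_deg
        used |= close
        if close.sum() >= 2:                    # keep only groups with parallel partners
            groups.append(segs[close])
    return groups

# Usage with a hypothetical grayscale frame:
# groups = extract_parallel_segments(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))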

Introduction

With the rapid development of UAV system technologies and the increasing demand for UAVs to perform various aerial tasks [1,2,3,4], UAVs employ airborne vision sensors to achieve autonomous target tracking and positioning [5,6,7,8]. Visual sensor-based methods for UAV target tracking can generally be divided into three types. Although the positioning accuracy of the first type may be satisfactory, it depends on a larger sensor load and a high-cost computation unit (to ensure real-time performance) and is therefore not suitable for low-cost small UAVs. The second type uses various visual features of the target image to carry out affine transformation and feature point matching, and to construct motion constraint equations and bundle iterative optimization [19]. It is worth noting that SLAM schemes mainly use static features to localize the moving platform itself, and their tracking iteration speed for dynamic feature points is relatively slow. The world coordinate system is often used as the benchmark coordinate system to describe the spatial position of the tracked target, which is expressed by
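
a chain of rigid transforms. In a standard convention (an assumption here; the paper's exact notation may differ), the target position in the world frame is obtained from the camera-frame measurement and the UAV pose:

$$ \mathbf{p}^{w} = \mathbf{R}^{w}_{b}\left(\mathbf{R}^{b}_{c}\,\mathbf{p}^{c} + \mathbf{t}^{b}_{c}\right) + \mathbf{t}^{w}_{b} $$

where $\mathbf{p}^{c}$ is the target position measured in the camera frame, $(\mathbf{R}^{b}_{c}, \mathbf{t}^{b}_{c})$ are the camera-to-body extrinsic parameters, and $(\mathbf{R}^{w}_{b}, \mathbf{t}^{w}_{b})$ describe the UAV body pose in the world frame.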
