Abstract

Surveillance and ground target tracking using multiple electro-optical and infrared video sensors onboard unmanned aerial vehicles (UAVs) have drawn a great deal of interest in recent years due to the availability of inexpensive video sensors and sensor platforms. In this paper, we compare the convex combination fusion algorithm with the centralized fusion algorithm using a single target and two UAVs. The local tracker for each UAV processes pixel location measurements in the digital image corresponding to the target location on the ground. The video measurement model is based on the perspective transformation and is therefore a nonlinear function of the target position. The measurement model also includes the radial and tangential lens distortions. Each local tracker and the central tracker use an extended Kalman filter with the nearly constant velocity dynamic model. We present numerical results using simulated data from two UAVs with varying levels of process noise power spectral density and pixel location standard deviations. Our results show that both fusion algorithms are unbiased and that the mean square error (MSE) of the convex combination fusion algorithm is close to the MSE of the centralized fusion algorithm. The covariance calculated by the centralized fusion algorithm is close to the MSE and is consistent for most measurement times. However, the covariance calculated by the convex combination fusion algorithm is lower than the MSE, because it neglects the common process noise, and is therefore not consistent with the estimation errors.
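As a rough illustration of the fusion rule discussed above, the following sketch implements the standard convex combination (simple track-to-track) fusion of two local estimates. The function name, dimensions, and numerical values are illustrative, not taken from the paper; the rule shown is the common information-weighted form, which ignores the cross-covariance induced by common process noise, consistent with the optimism in the fused covariance noted in the abstract.

```python
import numpy as np

def convex_combination_fusion(x1, P1, x2, P2):
    """Fuse two local track estimates (x1, P1) and (x2, P2).

    Uses the information-weighted convex combination rule:
        P = (P1^-1 + P2^-1)^-1
        x = P (P1^-1 x1 + P2^-1 x2)
    Note: this rule treats the two estimates as independent; any
    cross-covariance (e.g. from common process noise) is neglected,
    so the reported fused covariance can be optimistic.
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(P1_inv + P2_inv)   # fused covariance
    x = P @ (P1_inv @ x1 + P2_inv @ x2)  # fused state estimate
    return x, P

# Illustrative example: two 2-D position estimates, the second
# four times more accurate than the first.
x1, P1 = np.array([10.0, 5.0]), np.diag([4.0, 4.0])
x2, P2 = np.array([12.0, 6.0]), np.diag([1.0, 1.0])
x, P = convex_combination_fusion(x1, P1, x2, P2)
# Fused estimate is pulled toward the more accurate tracker:
# x == [11.6, 5.8], P == diag(0.8, 0.8)
```

In the centralized architecture, by contrast, both pixel measurement streams would be processed by a single extended Kalman filter, so no such cross-covariance approximation arises.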
