One of the primary goals of an airborne vehicle detection system is to reduce the risk of collisions and to relieve traffic congestion caused by the growing number of vehicles. Unlike stationary systems, which are usually mounted on buildings, airborne systems carried by unmanned aircraft or satellites offer a wider viewing angle and higher mobility. However, detecting vehicles in airborne videos is a challenging task because of scene complexity and platform movement. Directly applying traditional image processing techniques to this problem may yield a low detection rate or fail to meet real-time requirements. To address these problems, this paper proposes a new and efficient method composed of two stages: attention focus extraction and vehicle classification. Our work makes two key contributions. The first is a new attention focus extraction algorithm, which quickly detects candidate vehicle regions so that subsequent processing can focus on much smaller regions for faster computation. The second is a simple and efficient classification process built with the AdaBoost learning algorithm. This classification process, organized as a hierarchical structure, is designed to achieve a lower false alarm rate by searching for vehicles only within the candidate regions. Experimental results demonstrate that, compared with other representative algorithms, our method achieves a higher detection rate and a lower false positive rate while meeting the needs of real-time applications.
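The two-stage structure described above can be illustrated with a minimal sketch. The abstract does not give the actual attention focus extraction algorithm or the cascade details, so the region-scoring heuristic (local intensity variance), the hand-picked patch features, and all parameter values below are hypothetical stand-ins; only the overall pipeline shape (cheap candidate extraction followed by an AdaBoost classifier over candidates) follows the text.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

BLOCK = 8  # hypothetical candidate-region size in pixels

def extract_candidate_regions(frame, top_k=5):
    """Stage 1 (stand-in for the paper's attention focus extraction):
    rank fixed-size blocks by local intensity variance and keep the
    top_k as candidate vehicle regions."""
    h, w = frame.shape
    scored = []
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            patch = frame[y:y + BLOCK, x:x + BLOCK]
            scored.append((patch.var(), (y, x)))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [pos for _, pos in scored[:top_k]]

def patch_features(frame, pos):
    """Toy 4-dim feature vector for one candidate patch (not the
    features used in the paper)."""
    y, x = pos
    p = frame[y:y + BLOCK, x:x + BLOCK].astype(float)
    return np.array([p.mean(), p.var(),
                     np.abs(np.diff(p, axis=0)).mean(),
                     np.abs(np.diff(p, axis=1)).mean()])

# Stage 2: AdaBoost classifier, trained here on synthetic features
# (vehicle-class features drawn with larger means than background).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (200, 4)),   # background
               rng.normal(3.0, 0.5, (200, 4))])  # vehicle
y = np.array([0] * 200 + [1] * 200)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# Synthetic frame with one high-contrast "vehicle" patch.
frame = rng.integers(0, 30, (64, 64)).astype(float)
frame[16:24, 16:20] += 200.0

candidates = extract_candidate_regions(frame)
detections = [pos for pos in candidates
              if clf.predict(patch_features(frame, pos)[None, :])[0] == 1]
```

Restricting the (relatively expensive) classifier to the few candidate regions, rather than sliding it over the whole frame, is what gives the two-stage design its speed advantage.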