Abstract

Detecting small moving targets with competitive accuracy and low computation time remains a difficult problem for infrared (IR) search and tracking systems. A common approach is to enhance the targets and suppress the background clutter, but because the pixel values of the background and of small targets are close to each other, most classic suppression models perform poorly. To address this problem, a novel spatial–temporal vector difference measure is proposed for moving object detection in IR videos. First, to enhance targets, a new local vector dissimilarity measure describes the dissimilarity between a small target and its surrounding background and is used to compute a spatial saliency map. Then, the local means of successive frames are assembled into a temporal vector, and the temporal saliency map is computed from the range of that vector. Next, a fused saliency map is obtained by combining the two feature maps. Finally, small objects are extracted with an adaptive segmentation method. Extensive qualitative and quantitative experiments demonstrate that, compared with state-of-the-art spatial–temporal algorithms on a public dataset, the proposed model is more efficient and achieves competitive accuracy in terms of F-measure.
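The abstract does not give the underlying formulas, so the following minimal Python sketch only illustrates the overall pipeline under assumed definitions: a difference-of-local-means spatial dissimilarity, a temporal range of local means over a short frame window, element-wise fusion, and a mean-plus-k-sigma adaptive threshold. All function names, window sizes, and the constant k are hypothetical, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def spatial_saliency(frame, inner=3, outer=9):
    """Local dissimilarity between a small central region and its surrounding
    background (assumed here: difference of local means at two scales)."""
    center = uniform_filter(frame.astype(np.float64), size=inner)
    surround = uniform_filter(frame.astype(np.float64), size=outer)
    return np.clip(center - surround, 0, None)  # bright small targets stand out


def temporal_saliency(frames, size=3):
    """Range (max minus min) of the local-mean temporal vector built from
    successive frames (assumed interpretation of 'range of the vector')."""
    means = np.stack([uniform_filter(f.astype(np.float64), size=size) for f in frames])
    return means.max(axis=0) - means.min(axis=0)


def detect(frames, k=4.0):
    """Fuse both maps (assumed: element-wise product) and segment with an
    adaptive mean + k*std threshold."""
    s_map = spatial_saliency(frames[-1])
    t_map = temporal_saliency(frames)
    fused = s_map * t_map
    thr = fused.mean() + k * fused.std()
    return fused > thr  # binary mask of candidate small targets


# Example usage on a short window of grayscale frames (2D uint8 arrays):
# mask = detect([frame_t_minus_2, frame_t_minus_1, frame_t])
```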
