Abstract

In previous years, we developed Automatic Target Recognition (ATR) algorithms by combining layers of the open-source You Only Look Once (YOLOv2) detection model with customized Convolutional Neural Network (CNN) feature-extraction layers to recognize targets in Infrared (IR) images. That work showed that IR ATR performed significantly better at night than during the day. In this study, we demonstrate that fusing Electro-Optical (EO) and IR images using pixel-based and decision-based sensor fusion can significantly improve daytime ATR performance. Traditional Automatic Target Detection (ATD) metrics do not account for misclassification, while traditional target classification metrics do not count missed detections. We have therefore developed a novel approach for evaluating ATR performance that bridges traditional target detection and target classification metrics, based on an extended confusion matrix (ECM) that allows us to accurately characterize Probability of Detection (P_d), Probability of False Alarm (P_fa), and the tradeoffs between the two for ATR applications. By running the ATR detector at multiple confidence-score thresholds, we obtain detection performance at different P_fa levels and use Receiver Operating Characteristic (ROC) curves to show the comprehensive relationship between P_d and P_fa. Combining the EO/IR fusion approaches with the ECM, we demonstrate improvements in P_d of 11% with pixel-based fusion and 13-17% with decision-based fusion.
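The threshold sweep described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual ECM implementation: the detection records, target and clutter counts, and the matching that decides whether a detection is a true target are all assumed for demonstration.

```python
# Hypothetical sketch: sweep confidence thresholds over detector outputs
# and tabulate (Pfa, Pd) points for a ROC curve.

# Each detection: (confidence, is_true_target) -- is_true_target marks
# whether the detection matched a ground-truth target (assumed data).
detections = [
    (0.95, True), (0.90, True), (0.80, False), (0.75, True),
    (0.60, False), (0.55, True), (0.40, False), (0.30, False),
]
n_targets = 5    # total ground-truth targets in the imagery (assumed)
n_clutter = 100  # clutter/background opportunities for false alarms (assumed)

def roc_points(detections, n_targets, n_clutter, thresholds):
    """Return one (Pfa, Pd) pair per confidence-score threshold."""
    points = []
    for t in thresholds:
        kept = [d for d in detections if d[0] >= t]
        tp = sum(1 for _, hit in kept if hit)      # correct detections
        fp = sum(1 for _, hit in kept if not hit)  # false alarms
        pd = tp / n_targets    # Probability of Detection
        pfa = fp / n_clutter   # Probability of False Alarm
        points.append((pfa, pd))
    return points

# Lowering the threshold raises Pd at the cost of higher Pfa,
# tracing out the ROC curve.
for pfa, pd in roc_points(detections, n_targets, n_clutter,
                          [0.9, 0.7, 0.5, 0.3]):
    print(f"Pfa={pfa:.3f}  Pd={pd:.2f}")
```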
