Given the recent proliferation of Unmanned Aerial Systems (UASs) and the consequent importance of counter-UAS systems, this project aims to detect and track small non-cooperative UASs using Electro-optical (EO) and Infrared (IR) sensors. Two data integration techniques, at the decision and pixel levels, are compared against the use of each sensor independently to evaluate the system's robustness under different operational conditions. The data are processed by a YOLOv7 detector coupled with a ByteTrack tracker. For training and validation, additional efforts are made to create datasets of spatially and temporally aligned, annotated EO and IR Unmanned Aerial Vehicle (UAV) frames and videos. These efforts consist of acquiring real data from a ground workstation, followed by image calibration, image alignment, the application of bias-removal techniques, and data augmentation methods to artificially generate additional images. Across datasets, the detector achieves an average precision of 88.4%, a recall of 85.4%, and an mAP@0.5 of 88.5%. Tests of the decision-level fusion architecture demonstrate notable gains in precision and recall, although at the expense of lower frame rates. The pixel-level fusion design does not improve precision, recall, or frame rate.
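To make the decision-level fusion idea concrete, the sketch below shows one plausible reading of it: detections are produced independently on the aligned EO and IR frames, then merged by cross-modal IoU matching before being handed to the tracker. This is a minimal illustration, not the paper's implementation; the per-modality detection lists stand in for the YOLOv7 outputs, and the merge rule (keep the higher-confidence box of a matched pair, keep unmatched boxes from either sensor) is an assumption.

```python
# Minimal sketch of decision-level EO/IR fusion (assumed merge rule, not the
# paper's exact method). Detections are (box, confidence) pairs with boxes in
# (x1, y1, x2, y2) pixel coordinates on spatially aligned frames.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_decisions(eo_dets, ir_dets, iou_thr=0.5):
    """Merge per-sensor detections: a matched EO/IR pair keeps the
    higher-confidence box; unmatched detections from either sensor
    are kept as-is, so a target seen by only one modality survives."""
    fused, used_ir = [], set()
    for eo_box, eo_score in eo_dets:
        match = None
        for j, (ir_box, _) in enumerate(ir_dets):
            if j not in used_ir and iou(eo_box, ir_box) >= iou_thr:
                match = j
                break
        if match is not None:
            used_ir.add(match)
            ir_box, ir_score = ir_dets[match]
            fused.append((eo_box, eo_score) if eo_score >= ir_score
                         else (ir_box, ir_score))
        else:
            fused.append((eo_box, eo_score))
    # Append IR-only detections that no EO box matched.
    fused += [d for j, d in enumerate(ir_dets) if j not in used_ir]
    return fused

# Toy usage with hand-made detections; in the real system these lists would
# come from YOLOv7 inference on each modality, and the fused result would be
# passed to ByteTrack for frame-to-frame association.
eo = [((100, 100, 140, 130), 0.82)]
ir = [((102, 98, 142, 131), 0.74), ((300, 200, 320, 215), 0.60)]
print(fuse_decisions(eo, ir))
```

Under this reading, the recall gain reported for decision-level fusion comes from keeping single-modality detections, while the frame-rate cost comes from running the detector twice per frame pair.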