Abstract

With the continuous advancement of drone technology, drones are trending toward autonomy and swarm operation. Detecting airborne objects from a drone's perspective is critical for countering threats posed by aerial targets and for ensuring flight safety. Despite rapid progress in general object detection in recent years, object detection from the unique perspective of a drone remains a formidable challenge. To tackle this issue, we present a novel and efficient adjacent frame fusion mechanism that enhances visual object detection in airborne scenarios. The proposed mechanism consists of two modules: a feature alignment fusion module and a background subtraction module. The feature alignment fusion module fuses features from aligned adjacent frames with those of the key frame according to their similarity weights. The background subtraction module computes the difference between the foreground features extracted from the key frame and the background features obtained from the adjacent frames, which enhances the target features more effectively. Because the method effectively leverages feature information from adjacent frames to deliver a significant performance gain without a substantial increase in parameters or computational cost, we refer to it as an efficient adjacent frame fusion mechanism. Experiments on two challenging datasets demonstrate that the proposed method outperforms existing algorithms.
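
The abstract describes the two modules only at a high level. As a rough illustration of how similarity-weighted fusion followed by background subtraction might be realized, here is a minimal PyTorch sketch written under our own assumptions: the class name AdjacentFrameFusion, the 1x1 projection convolutions, the per-pixel cosine-similarity weighting, and the residual subtraction are all hypothetical choices, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdjacentFrameFusion(nn.Module):
    """Hypothetical sketch: similarity-weighted fusion of aligned
    adjacent-frame features with a key frame, followed by a simple
    background-subtraction step that enhances the key-frame features."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features before computing similarity
        # weights (an assumed design choice, not from the paper).
        self.key_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.adj_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, key_feat: torch.Tensor, adj_feats: torch.Tensor) -> torch.Tensor:
        # key_feat:  (B, C, H, W)    features of the key frame
        # adj_feats: (B, T, C, H, W) features of T aligned adjacent frames
        b, t, c, h, w = adj_feats.shape

        # --- Feature alignment fusion: similarity-weighted average ---
        k = self.key_proj(key_feat)                    # (B, C, H, W)
        a = self.adj_proj(adj_feats.flatten(0, 1))     # (B*T, C, H, W)
        a = a.view(b, t, c, h, w)
        # Per-pixel cosine similarity between the key frame and each
        # adjacent frame, normalized into fusion weights across frames.
        sim = F.cosine_similarity(k.unsqueeze(1), a, dim=2)  # (B, T, H, W)
        weights = sim.softmax(dim=1).unsqueeze(2)            # (B, T, 1, H, W)
        fused = (weights * adj_feats).sum(dim=1)             # (B, C, H, W)

        # --- Background subtraction: treat the fused adjacent features
        # as a background estimate, subtract it from the key-frame
        # (foreground) features, and add the difference back as a
        # residual to emphasize moving targets. One plausible reading
        # of the abstract, not a confirmed formulation.
        return key_feat + (key_feat - fused)


if __name__ == "__main__":
    fusion = AdjacentFrameFusion(channels=256)
    key = torch.randn(2, 256, 32, 32)     # key-frame feature map
    adj = torch.randn(2, 4, 256, 32, 32)  # 4 aligned adjacent frames
    print(fusion(key, adj).shape)         # torch.Size([2, 256, 32, 32])
```

Because the module only adds two 1x1 convolutions and elementwise operations, a design along these lines would keep the parameter and compute overhead small, which is consistent with the efficiency claim in the abstract.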
