Abstract

Multiple sensors mounted on an Unmanned Aerial Vehicle (UAV) enable the acquisition of multi-modal and multi-source remote sensing data. UAV remote sensing often involves real-time or near-real-time tasks in complex and highly dynamic environments, such as disaster monitoring, traffic management, and border patrol. Under these conditions, the image fusion algorithm must be highly efficient, accurate, and reliable. In this paper, we propose an intelligent real-time fusion network for UAV multi-source remote sensing data based on AI brain-like chips, and deploy the algorithm on the UAV platform to achieve efficient online computing. First, we develop a novel infrared and visible image fusion algorithm named SFNet, built on ShuffleNetV2. Then, we apply ZCA and the l1-norm to process the extracted deep features. Weight maps are generated by bicubic interpolation and a soft-max operation. Finally, the fused image is reconstructed by a weighted-average operation. The proposed SFNet is deployed on the Lynxi KA200 brain-inspired computing chip, and a comprehensive inference test is carried out with UAV remote sensing data. Several State-Of-The-Art (SOTA) data fusion algorithms are deployed on the same chip for experimental comparison. The proposed SFNet is shown to achieve faster inference and better feature extraction results on brain-like chips, making it more suitable for real-time UAV remote sensing image fusion tasks.
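
The fusion stage described above (ZCA whitening and l1-norm of deep features, bicubic upsampling, soft-max weighting, weighted-average reconstruction) can be illustrated with the following minimal sketch. This is not the authors' implementation: the feature shapes, function names, and the use of NumPy/SciPy are assumptions for illustration, and the deep features are stand-ins for the output of a ShuffleNetV2-style encoder.

```python
# Hypothetical sketch of the SFNet fusion stage; shapes and names are assumed.
import numpy as np
from scipy.ndimage import zoom


def zca_whiten(feat, eps=1e-5):
    """ZCA-whiten a (C, H, W) deep feature map across channels."""
    c, h, w = feat.shape
    x = feat.reshape(c, -1)
    x = x - x.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    u, s, _ = np.linalg.svd(cov)
    zca = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T
    return (zca @ x).reshape(c, h, w)


def activity_map(feat):
    """l1-norm over channels gives a per-pixel activity level."""
    return np.abs(feat).sum(axis=0)


def fuse(ir_feat, vis_feat, ir_img, vis_img):
    """Fuse the source images with soft-max-normalised, upsampled activity maps."""
    maps = []
    for feat in (ir_feat, vis_feat):
        a = activity_map(zca_whiten(feat))
        # Bicubic (order=3) interpolation back to the full image resolution.
        scale = (ir_img.shape[0] / a.shape[0], ir_img.shape[1] / a.shape[1])
        maps.append(zoom(a, scale, order=3))
    stacked = np.stack(maps)                               # (2, H, W)
    stacked -= stacked.max(axis=0, keepdims=True)          # numerically stable soft-max
    weights = np.exp(stacked) / np.exp(stacked).sum(axis=0, keepdims=True)
    # Weighted-average reconstruction of the fused image.
    return weights[0] * ir_img + weights[1] * vis_img


# Toy usage with random arrays standing in for encoder outputs and source images.
ir_img, vis_img = np.random.rand(256, 256), np.random.rand(256, 256)
ir_feat, vis_feat = np.random.rand(64, 64, 64), np.random.rand(64, 64, 64)
fused = fuse(ir_feat, vis_feat, ir_img, vis_img)
print(fused.shape)  # (256, 256)
```

In this sketch, the encoder features, image sizes, and whitening epsilon are placeholders; only the sequence of operations mirrors the pipeline stated in the abstract.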
