Abstract

Infrared and visible image fusion is a beneficial processing task for Unmanned Aerial Vehicle (UAV) surveillance: it improves visibility by combining the complementary advantages of the infrared camera and the visible-light camera. An embedded onboard solution is necessary for UAV-based surveillance missions because it reduces the amount of data that must be transmitted to the ground. In this paper, we propose an infrared and visible-light image fusion method, implement it on two platforms with hardware accelerators commonly used in embedded vision applications, the ZedBoard (ARM + FPGA) and the NVIDIA TX1 (ARM + GPU), and compare their performance. To verify the usefulness of image fusion, we carry out extensive experiments showing that fusion improves the target-detection ability of a UAV in different scenes; the detection rate reaches 0.926 in our experiments. The frame rates on the ZedBoard and the TX1 are 205.3 FPS and 36.6 FPS, respectively (speedups of 38$\times$ and 6.7$\times$ over an ARM Cortex-A9 processor). Our results also show that the ZedBoard reduces energy per frame by 7.1$\times$ and 18.9$\times$ compared to the TX1 and the ARM CPU, respectively. This work is based on a UAV platform of our own design, and all image sets were captured from real scenes, which demonstrates that the proposed method is viable and reflects the actual needs of real UAV surveillance systems.
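The abstract does not spell out the fusion rule or the energy metric, so the following is a minimal sketch assuming a simple pixel-wise weighted average of co-registered frames; the file names, the 0.5/0.5 weights, and the `energy_per_frame` helper are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch: pixel-wise weighted infrared/visible fusion (assumed fusion rule;
# the paper's actual method is not described in the abstract).
import cv2

# Load co-registered infrared and visible frames as grayscale images
# (hypothetical file names).
ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("vis_frame.png", cv2.IMREAD_GRAYSCALE)

# Weighted-average fusion: each output pixel is 0.5*IR + 0.5*VIS.
fused = cv2.addWeighted(ir, 0.5, vis, 0.5, 0)
cv2.imwrite("fused_frame.png", fused)

# Energy per frame, the metric compared across platforms in the abstract,
# is average power divided by throughput: J/frame = W / FPS.
def energy_per_frame(avg_power_watts: float, fps: float) -> float:
    return avg_power_watts / fps
```

The energy/frame ratios quoted above (7.1$\times$ and 18.9$\times$) follow from this kind of division: a platform with a much higher frame rate can spend less energy per frame even at comparable power draw.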
