Abstract

The purpose of infrared and visible image fusion is to compensate for the incomplete imaging of any single sensor. Existing fusion algorithms, however, tend to overlook the guidance that the infrared and visible images themselves can provide when formulating fusion strategies. Consequently, the resulting fused image lacks a genuine reference, and the designed fusion strategy lacks adaptability. To overcome this challenge, this study proposes a novel fusion framework based on cybernetics, referred to as FusionPID, which uses a proportional-integral-derivative (PID) control system to fuse infrared and visible images. The framework first extracts source image features with an improved Mean Shift feature clustering algorithm from machine vision. This information is then used to construct the transfer function of the control system, enhancing the potential of the fused images for downstream target detection tasks. Second, a comprehensive measurement function is designed to quantify the difference between the fused image and the source images. The target of the fusion process is guided by the source images through this measurement value and the feedback function of the control system. Finally, a PID controller is designed that adaptively adjusts the output according to the difference between the source images and the fusion result. For different fusion tasks, FusionPID relies on the feedback capability of the control system, so that the fused image preserves both the thermal radiation of the infrared image and the texture of the visible image. Experiments show that FusionPID outperforms state-of-the-art methods in preserving salient contrast and rich texture. In addition, the fused images generated by this method can be applied to downstream target detection tasks, improving detection performance.
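
To make the closed-loop idea concrete, the following is a minimal sketch of a discrete PID update that nudges a fused image toward a reference derived from the source images. The reference (an element-wise maximum), the per-pixel error signal, the gains, and the iteration count are illustrative assumptions for demonstration only; they are not FusionPID's actual transfer function, measurement function, or controller design.

```python
import numpy as np

def pid_fusion_sketch(ir, vis, kp=0.5, ki=0.05, kd=0.1, steps=50):
    """Illustrative discrete PID loop driving a fused image toward a
    reference built from the infrared and visible inputs.

    `ir` and `vis` are float arrays in [0, 1] with the same shape.
    All design choices here are placeholders, not the paper's method.
    """
    # Hypothetical reference: element-wise maximum keeps bright thermal
    # targets from `ir` and strong textures from `vis`.
    reference = np.maximum(ir, vis)

    fused = 0.5 * (ir + vis)           # initial fusion result
    integral = np.zeros_like(fused)    # accumulated error (I term)
    prev_error = np.zeros_like(fused)  # previous error (for D term)

    for _ in range(steps):
        error = reference - fused           # deviation from the reference
        integral += error                   # discrete integral of the error
        derivative = error - prev_error     # discrete derivative of the error
        fused += kp * error + ki * integral + kd * derivative
        prev_error = error

    return np.clip(fused, 0.0, 1.0)

# Example usage with random stand-in images.
ir = np.random.rand(256, 256)
vis = np.random.rand(256, 256)
fused = pid_fusion_sketch(ir, vis)
```

In this toy loop, the proportional term corrects the current deviation, the integral term removes residual bias, and the derivative term damps overshoot, which mirrors the adaptive adjustment role the abstract attributes to the PID controller.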
