Infrared and visible sensors produce highly complementary images, so fusing the two can compensate for the shortcomings of either sensor alone. However, previous fusion methods have not fully exploited the collaborative guidance between infrared and visible images during fusion, resulting in poorly correlated extracted features and no guarantee of feature-fusion accuracy. To address these challenges, this paper proposes a collaborative fusion method for infrared and visible images based on pulse-coupled neural networks (PCNN) and proportional-integral-derivative (PID) control, called FusionCPP. First, a pulse-coupled neural network with a dual-pulse structure is designed for feature extraction. Unlike a traditional PCNN, this network contains two pulse generators: one extracts the salient features of each image, while the other extracts the pulse layer common to the source images; multiplying this common pulse layer by the coupling iteration value yields the base layer. Second, a detail layer is obtained by subtracting the base layer from the source image. This decomposition accurately extracts salient and detail features by exploiting the coupling between the source images. Finally, a closed-loop PID control system serves as the fusion strategy: through system feedback, it measures the difference between the fusion result and the source images in real time, and the controller adaptively adjusts the fusion weights based on this difference so that source-image features are accurately fused into the new image. Experimental results indicate that FusionCPP preserves strong contrast and rich texture. In addition, FusionCPP is extended to multi-focus image fusion and target detection tasks to verify its effectiveness.
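To make the base/detail decomposition concrete, the following is a minimal sketch of a PCNN-based separation. It uses a standard single-pulse PCNN rather than the paper's dual-pulse variant; the parameter values (decay factors, linking strength, linking kernel), the element-wise minimum of the two firing maps as the "common pulse layer", and the product of that layer with the source as a stand-in for the abstract's "coupling iteration value" are all illustrative assumptions.

```python
# Minimal sketch of PCNN-based base/detail decomposition.
# Assumes images are floats normalized to [0, 1]; all parameter
# values are illustrative, not taken from the paper.
import numpy as np
from scipy.ndimage import convolve

def pcnn_pulse(img, iters=30, alpha_f=0.1, alpha_l=0.3, alpha_e=0.2,
               beta=0.4, v_f=0.5, v_l=0.5, v_e=20.0):
    """Return the normalized accumulated firing map of a standard PCNN."""
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])            # linking kernel (assumed)
    F = np.zeros_like(img); L = np.zeros_like(img)
    E = np.ones_like(img);  Y = np.zeros_like(img)
    fire = np.zeros_like(img)
    for _ in range(iters):
        link = convolve(Y, w, mode="constant")     # neighbor pulse coupling
        F = np.exp(-alpha_f) * F + v_f * link + img  # feeding input
        L = np.exp(-alpha_l) * L + v_l * link        # linking input
        U = F * (1.0 + beta * L)                     # internal activity
        Y = (U > E).astype(img.dtype)                # pulse output
        E = np.exp(-alpha_e) * E + v_e * Y           # dynamic threshold
        fire += Y
    return fire / iters

def decompose(ir, vis):
    """Base layers from the pulse response common to both sources
    (simplified from the abstract); detail = source - base."""
    common = np.minimum(pcnn_pulse(ir), pcnn_pulse(vis))  # shared pulses (assumed)
    base_ir, base_vis = common * ir, common * vis
    return (base_ir, ir - base_ir), (base_vis, vis - base_vis)
```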
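The fusion strategy can likewise be sketched as a discrete PID loop over a per-pixel weight map. Only the general idea, feeding the difference between the fused result and the source features back into the weight update, follows the abstract; the gains, the reference signal, and the signed error definition below are illustrative assumptions.

```python
# Minimal sketch of a closed-loop PID fusion strategy. The gains,
# reference signal, and error definition are illustrative assumptions,
# not the paper's exact controller.
import numpy as np

def pid_fuse(ir, vis, kp=0.4, ki=0.05, kd=0.1, iters=25):
    """Adjust per-pixel weights so the fused image tracks the more
    salient source feature, using PID feedback on the error."""
    w = np.full_like(ir, 0.5)          # initial fusion weights
    integral = np.zeros_like(ir)
    prev_err = np.zeros_like(ir)
    target = np.maximum(np.abs(ir), np.abs(vis))   # reference signal (assumed)
    direction = np.sign(np.abs(ir) - np.abs(vis))  # which source to favor
    for _ in range(iters):
        fused = w * ir + (1.0 - w) * vis
        err = (target - np.abs(fused)) * direction  # feedback error
        integral += err                             # integral term
        derivative = err - prev_err                 # derivative term
        w = np.clip(w + kp * err + ki * integral + kd * derivative, 0.0, 1.0)
        prev_err = err
    return w * ir + (1.0 - w) * vis
```

In a full pipeline, a controller of this kind would presumably operate on the detail layers produced by the decomposition above, with the base layers fused separately; the closed loop lets the weights keep adapting until the fused result stops drifting from the source features.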