Infrared and visible image fusion overcomes the limitations of single-modality visual sensors and improves target detection performance. However, because traditional fusion strategies lack controllability and a feedback mechanism, the fusion model cannot precisely perceive the relationships among the requirements of the fusion task, the quality of the fused image, and the features of the source images. To this end, this paper establishes a fusion model, called FusionOC, based on an optimally controlled object and control mode. The method builds two types of mathematical models of the controlled object by analyzing the factors and conflicts that affect fused-image quality, and it combines the image fusion model with a quality evaluation function to determine the two control factors separately. Meanwhile, two proportional-integral-derivative (PID) regulation modes based on a backpropagation (BP) neural network are designed according to the characteristics of the control factors. The fusion system can adaptively select a regulation mode to adjust the control factors according to user requirements or the task at hand, enabling the system to perceive the connection between the fusion task and its result. In addition, the fusion model employs the feedback mechanism of the control system to perceive feature differences between the fusion result and the source images, allowing the source-image features to guide the entire fusion process and improving the generalization ability and intelligence of the fusion algorithm across different fusion tasks. Experimental results on multiple public datasets demonstrate the advantages of FusionOC over state-of-the-art methods, and the benefits of our fusion results for object detection tasks are also demonstrated.
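To make the control-theoretic idea concrete, the following is a minimal sketch of a discrete PID update that could regulate a fusion control factor from a quality-evaluation error signal, as the abstract describes. All names and values here (the gains, the control factor, the toy quality model) are illustrative assumptions, not the paper's actual implementation, which additionally tunes the gains with a BP neural network.

```python
class PID:
    """Discrete PID controller (illustrative sketch, not the paper's code)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # running sum of the error (I term)
        self.prev_error = 0.0    # previous error, for the difference (D term)

    def step(self, error):
        """Return the control adjustment for one iteration given the current error."""
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical usage: regulate a fusion control factor so that a measured
# quality score of the fused image approaches a target quality.
pid = PID(kp=0.5, ki=0.05, kd=0.1)
factor = 1.0      # hypothetical fusion control factor
quality = 0.4     # hypothetical measured quality of the current fused image
target = 0.8
for _ in range(10):
    factor += pid.step(target - quality)
    # Toy stand-in for re-running the fusion pipeline and re-evaluating quality:
    quality += 0.1 * (factor - quality)
```

In the actual system, the "plant" in this loop would be the fusion model itself, and the error would come from the quality evaluation function applied to the fused image.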