Differences in the imaging mechanisms of infrared and visible light images lead to differences in how their visually meaningful gradients are formed. Existing fusion methods extract features from the source video frames of both modalities with the same feature extractor, ignoring these cross-modal gradient differences. In this paper, we propose an infrared and visible light video fusion method based on chaos theory and proportional-integral-derivative (PID) control. First, we assign initial values and parameters to the Lorenz chaotic system and iterate it to obtain three scrambling sequences, which scramble the source video frames along the row, column, and diagonal directions, respectively. This eliminates their visually meaningful gradients, so that features extracted at the same layer are of comparable scale and the fusion process can be carried out in a scale-consistent space. Second, we propose a structure-aware relative total variation (saRTV) feature extraction method for the two-scale decomposition of the source video frames, which transfers more of their features to the detail layer. Then, building on our previous work, we introduce PID control to construct a closed-loop control system through transfer function design, controller design, and measurement function design. This control system fuses the detail layer, realizing real-time guidance of the fusion process by the source video frames. Experiments on public datasets demonstrate that our method outperforms several state-of-the-art methods.
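The chaos-based scrambling step can be illustrated with a minimal sketch: iterating the Lorenz system yields chaotic sequences, and sorting each sequence produces a reproducible permutation that scrambles a frame. The initial values, step size, burn-in length, and the restriction to row/column scrambling (the paper also scrambles diagonally) are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def lorenz_sequences(n, x0=0.1, y0=0.2, z0=0.3,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                     dt=0.005, burn_in=1000):
    """Iterate the Lorenz system with forward Euler and return three
    chaotic sequences of length n, discarding a burn-in transient.
    All parameters here are assumed values for illustration."""
    x, y, z = x0, y0, z0
    xs, ys, zs = [], [], []
    for i in range(burn_in + n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:
            xs.append(x); ys.append(y); zs.append(z)
    return np.array(xs), np.array(ys), np.array(zs)

def scramble_frame(frame):
    """Permute the rows and columns of a 2-D frame using Lorenz-derived
    orderings (a stand-in for the paper's row/column/diagonal scrambling)."""
    h, w = frame.shape
    sx, sy, _ = lorenz_sequences(max(h, w))
    # Sorting a chaotic sequence yields a pseudo-random permutation.
    row_perm = np.argsort(sx[:h])
    col_perm = np.argsort(sy[:w])
    return frame[row_perm][:, col_perm], (row_perm, col_perm)

def unscramble_frame(scrambled, perms):
    """Invert the row and column permutations to recover the frame."""
    row_perm, col_perm = perms
    inv_r = np.argsort(row_perm)
    inv_c = np.argsort(col_perm)
    return scrambled[inv_r][:, inv_c]
```

Because the permutations are fully determined by the chaotic system's initial values and parameters, the scrambling is exactly invertible, which is what allows fusion to be performed in the scrambled, scale-consistent space and then undone.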
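The closed-loop fusion idea can be sketched with a discrete per-pixel PID controller: the evolving fused detail layer is treated as the plant output, a reference built from the source detail layers as the setpoint, and the PID update drives the fusion toward that reference at every step. The max-magnitude setpoint, the gains, and the iteration count are illustrative assumptions; the paper's actual transfer, controller, and measurement functions are designed separately.

```python
import numpy as np

class PixelwisePID:
    """Discrete PID controller applied elementwise to image arrays.
    Gains are assumed values for illustration."""
    def __init__(self, kp=0.5, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = None
        self.prev_error = None

    def step(self, setpoint, output):
        error = setpoint - output
        if self.integral is None:                 # first call: initialize state
            self.integral = np.zeros_like(error)
            self.prev_error = error
        self.integral += error                    # accumulate (I term)
        derivative = error - self.prev_error      # difference (D term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def fuse_detail_layers(det_ir, det_vis, n_iter=50):
    """Iteratively drive a fused detail layer toward a reference,
    here the elementwise max-magnitude of the two source detail layers
    (an assumed stand-in for the paper's measurement function)."""
    setpoint = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    fused = 0.5 * (det_ir + det_vis)   # initial guess: plain average
    pid = PixelwisePID()
    for _ in range(n_iter):
        fused = fused + pid.step(setpoint, fused)
    return fused
```

The point of the closed loop is that the source frames keep correcting the fused result at every iteration, rather than being consulted only once, which is what the abstract calls real-time guidance of the fusion process.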