In challenging lighting conditions, infrared detectors have become vital tools for enhancing visual perception, overcoming the limitations of visible cameras. However, inherent imaging principles and manufacturing constraints confine infrared imaging systems to grayscale, significantly limiting their utility. Compared with visible imagery, infrared images lack detailed semantic information and color representation and suffer from reduced contrast. While existing infrared image colorization techniques have made significant progress in improving color quality, challenges such as erroneous semantic color prediction and blurred depiction of fine details persist. Moreover, acquiring paired color images for real-world infrared scenes is substantially difficult, which further complicates cross-domain colorization of infrared images. To address these issues, this paper introduces an approach that uses contrastive learning for unsupervised cross-domain mapping between unpaired infrared and visible color images. Additionally, we introduce a color feature selection attention module that guides plausible colorization of infrared images. The proposed method employs the Residual Fusion Attention Network (RFANet) as a generator, enhancing the encoder's ability to represent color and structural features. Furthermore, to ensure structural content consistency and improve overall color style matching, we design a joint global loss function that integrates both detailed content and color style. Experimental evaluations on publicly available datasets demonstrate that the proposed unsupervised cross-domain colorization method for infrared images outperforms previous approaches.
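The contrastive objective and the joint content/style loss mentioned above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the InfoNCE form, the patch-feature shapes, and the weighting `lam` between the content and style terms are all assumptions made for the example.

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.07):
    """Patch-wise contrastive (InfoNCE) loss for one query patch.

    query, positive: (d,) feature vectors from corresponding patch
    locations in the input infrared image and the colorized output;
    negatives: (n, d) features from other, non-corresponding patches.
    Names and shapes are illustrative, not taken from the paper.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    # Positive pair first, then all negatives, scaled by temperature tau.
    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive is index 0

def joint_loss(content_loss, style_loss, lam=1.0):
    """Hypothetical joint global loss: detailed-content term plus a
    weighted color-style term, as a stand-in for the paper's design."""
    return content_loss + lam * style_loss

# Toy usage: a positive patch nearly identical to the query yields a
# near-zero contrastive loss against random negative patches.
rng = np.random.default_rng(0)
q = rng.standard_normal(64)
contrastive = info_nce(q, q + 0.01 * rng.standard_normal(64),
                       rng.standard_normal((8, 64)))
total = joint_loss(content_loss=contrastive, style_loss=0.5, lam=0.1)
```

In practice the query and positive features would come from the same spatial location in the encoder's feature maps of the infrared input and the generated color image, which is what ties the colorized output's structure back to the source scene.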