Infrared and visible image fusion (IVIF) results often suffer from detail loss, noise, low contrast, and blurred edges. In this paper, a new method is proposed to address the detail-loss, low-contrast, and blurring issues in IVIF. Specifically, visible images are enhanced by guided filtering and high-dynamic-range compression, while infrared images are normalized by a linear transformation. We then use blur/clear discrimination to detect salient pixels between the infrared and visible images: a fully weight-shared multi-path residual neural network is proposed to discriminate between blurred and clear pixels at the same position in the infrared and visible images. Clear pixels are treated as salient pixels, which contribute more to the fused image than blurred pixels. The output of the proposed network is a binary classification map of blurred versus clear pixels, which serves as the fusion weight map in the fusion stage. To address the resulting discontinuities, we compute the distance transforms of the binary classification map and of its complement, and use the two distance-transformed maps as weight maps to fuse the enhanced infrared and visible images. Finally, we apply single-scale retinex (SSR) to further enhance the fused images. Experimental results on public IVIF datasets demonstrate the superior performance of the proposed approach over other state-of-the-art methods in terms of both subjective visual quality and objective metrics. The source code is available at https://github.com/eyob12/Multi_path_residual_neural_network_based_IVIF
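The distance-transform weighting step described above can be sketched as follows. This is a minimal illustration under assumed conventions (the function name, the choice of Euclidean distance transform, and the convention that 1 in the binary map marks a clear visible pixel are all assumptions, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_weighted_fusion(ir, vis, binary_map, eps=1e-6):
    """Fuse two images using distance transforms of a binary saliency map.

    Assumed convention: binary_map is 1 where the visible pixel is judged
    clear (salient), 0 where the infrared pixel is judged clearer.
    """
    # Distance from each salient visible pixel to the nearest non-salient one,
    # and vice versa; pixels outside each region get distance 0.
    d_vis = distance_transform_edt(binary_map)
    d_ir = distance_transform_edt(1 - binary_map)
    # Normalize into complementary, spatially smooth weight maps so the
    # fusion weights vary continuously instead of jumping at region borders.
    w_vis = d_vis / (d_vis + d_ir + eps)
    w_ir = 1.0 - w_vis
    return w_vis * vis + w_ir * ir

# Toy example: visible image favored on the left half, infrared on the right.
ir = np.zeros((8, 8))
vis = np.ones((8, 8))
bmap = np.zeros((8, 8), dtype=int)
bmap[:, :4] = 1
fused = distance_weighted_fusion(ir, vis, bmap)
```

Because the weights come from distance transforms rather than the raw binary map, the fused image transitions smoothly between the two sources instead of showing hard seams at the classification boundaries.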