Abstract
Fusion of infrared and visible images is a significant research area in image analysis and computer vision. The purpose of infrared and visible image fusion is to combine the complementary information of the source images into a single fused image. It is therefore vital to efficiently represent the important image information of the source images and to choose rational fusion rules. To achieve this aim, an image fusion method using a multiscale directional nonlocal means (MDNLM) filter is proposed in this paper. The MDNLM combines the edge-preserving property of the nonlocal means filter with the ability of the directional filter bank to capture directional image information, which allows it to effectively represent the intrinsic geometric structure of images. The MDNLM is a multiscale, multidirectional, and shift-invariant image decomposition method, and we use it to fuse infrared and visible images in this paper. First, the MDNLM is discussed and used to decompose the source images into approximation subbands and directional detail subbands. Then, the approximation and directional detail subbands are fused by a local neighborhood gradient weighted fusion rule and a local eighth-order correlation fusion rule, respectively. Finally, the fused image is obtained through the inverse MDNLM. Comparison experiments have been performed on different image sets, and the results clearly demonstrate that the proposed method is superior to several conventional and recently proposed fusion methods in terms of visual effects and objective evaluation.
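The decompose-fuse-reconstruct pipeline described above can be sketched in simplified form. This is a minimal illustration, not the paper's method: a box filter stands in for the nonlocal means smoothing, a single-level approximation/detail split stands in for the multiscale directional MDNLM decomposition (no directional filter bank), a local-gradient-weighted average stands in for the local neighborhood gradient weighted rule, and a choose-max-absolute rule stands in for the local eighth-order correlation rule. All function names are hypothetical.

```python
import numpy as np

def box_blur(img, r=2):
    # Box filter as a stand-in for the nonlocal means smoothing
    # used by the MDNLM decomposition (illustrative only).
    pad = np.pad(img, r, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    # Single-level approximation/detail split; the actual MDNLM is
    # multiscale and multidirectional.
    approx = box_blur(img)
    detail = img - approx
    return approx, detail

def local_gradient_energy(img, r=1):
    # Locally averaged squared gradient magnitude, used as a fusion weight.
    gy, gx = np.gradient(img.astype(float))
    return box_blur(gx ** 2 + gy ** 2, r)

def fuse(ir, vis):
    a1, d1 = decompose(ir.astype(float))
    a2, d2 = decompose(vis.astype(float))
    # Approximation subbands: gradient-weighted average (stand-in for the
    # paper's local neighborhood gradient weighted rule).
    w1 = local_gradient_energy(a1)
    w2 = local_gradient_energy(a2)
    approx = (w1 * a1 + w2 * a2) / (w1 + w2 + 1e-12)
    # Detail subbands: choose-max-absolute (stand-in for the paper's
    # local eighth-order correlation rule).
    detail = np.where(np.abs(d1) >= np.abs(d2), d1, d2)
    # "Inverse transform" of this toy split is simply addition.
    return approx + detail
```

Under this split, reconstruction is exact for a single image (approx + detail recovers it), so the fused output inherits structure from whichever source dominates locally.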