Multi-focus image fusion merges multiple images of a scene, each focused on a different region, into a single fully focused image. Convolutional sparse coding (CSC) methods are commonly employed for accurate extraction of focused regions, but often at prohibitive computational cost. Online convolutional sparse coding (OCSC) was introduced to reduce this cost, yet its performance remains limited by the small number of filters it can employ. To address these limitations, Sample-Dependent Dictionary-based Online Convolutional Sparse Coding (SCSC) was proposed: it enables the use of additional filters while maintaining low time and space complexity on high-dimensional or large data. Leveraging the computational efficiency and effective global feature extraction of SCSC, we propose a novel method for multi-focus image fusion. Each source image is decomposed into two layers: a base layer capturing the predominant features and a detail layer containing finer details. The layers are fused separately, and the fused base and detail layers are then combined to reconstruct the final image. The proposed method significantly mitigates artifacts, preserves fine details at focus boundaries, and achieves notable improvements in both visual quality and objective evaluation of multi-focus image fusion.
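The two-layer pipeline described above can be illustrated with a minimal sketch. This is not the paper's SCSC-based method: here the base layer is extracted with a simple box blur (a stand-in for any smoothing operator), and the detail layers are fused by a per-pixel maximum-absolute rule (a common heuristic, standing in for the SCSC-derived focus measure). All function names are hypothetical.

```python
import numpy as np

def box_blur(img, k=5):
    """Extract a base (low-frequency) layer with a box filter.

    A stand-in for whatever smoothing operator separates the
    predominant features from the finer details.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_layer_fuse(sources):
    """Fuse multi-focus images via a base/detail two-layer decomposition.

    Hypothetical simplification: base layers are averaged, and detail
    layers are fused by picking the largest absolute response per pixel
    (the paper uses SCSC features to locate focused regions instead).
    """
    bases = [box_blur(s) for s in sources]              # base layers
    details = [s - b for s, b in zip(sources, bases)]   # detail layers
    fused_base = np.mean(bases, axis=0)
    stacked = np.stack(details)
    # Per pixel, keep the detail coefficient with the largest magnitude:
    # in-focus regions tend to carry stronger high-frequency content.
    idx = np.argmax(np.abs(stacked), axis=0)
    fused_detail = np.take_along_axis(stacked, idx[None], axis=0)[0]
    # Recombine the fused layers into the final image.
    return fused_base + fused_detail
```

A usage example: given two grayscale arrays `img_a` and `img_b` of the same shape, `two_layer_fuse([img_a, img_b])` returns an array of that shape in which each pixel's detail comes from whichever source was sharper there.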