Abstract

Multifocus image fusion integrates the in-focus regions of multiple source images through a mathematical model to produce a single, fully focused, sharp image. Fusion methods based on convolutional sparse representation (CSR) learn translation-invariant filters, thereby avoiding the loss of signal structure and the high redundancy of patch-based methods. However, conventional convolutional dictionary learning and CSR rely on the alternating direction method of multipliers (ADMM) and ignore model matching between the training and testing phases, which leads to convergence difficulties caused by tricky parameter tuning. The block proximal extrapolated gradient method using majorization with a gradient-based restarting scheme (reG-BPEG-M) adopts a driving-force coefficient formula and an adaptive restart rule to address this model mismatch. We introduce reG-BPEG-M into multifocus image fusion, updating the filters and sparse codes with two-block and multiblock schemes. Compared with other state-of-the-art fusion methods, our approach reduces model mismatch and improves the convergence of fusion for both gray and color multifocus images.
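To make the gradient-based restart idea concrete, the sketch below applies an accelerated proximal gradient method with an adaptive restart rule to a simple l1-regularized least-squares (sparse coding) problem. This is an illustrative toy, not the paper's reG-BPEG-M: the function names, the FISTA-style momentum formula, and the restart test (restart when the update direction opposes the momentum direction) are standard generic choices assumed here for exposition, and the problem is an ordinary LASSO rather than a convolutional dictionary model.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_restart(A, b, lam, iters=500):
    """Accelerated proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1
    with a gradient-based adaptive restart rule (illustrative sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        # Restart: drop the momentum when the step opposes the momentum direction.
        if (y - x_new) @ (x_new - x) > 0:
            t = 1.0
            y = x_new.copy()
        else:
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            t = t_new
        x = x_new
    return x
```

In reG-BPEG-M the same restart principle is applied blockwise, with the filter and sparse-code blocks updated alternately; the toy above only shows the single-block mechanism.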
