Abstract

Image fusion aims to create a single blended image that combines complementary details from multiple images of the same scene. Infrared (IR) and visible image fusion can be accomplished in a variety of ways, including recent deep-learning-based techniques. Edge-preserving filter (EPF) based fusion performs well because it retains information from both source images; local filtering-based techniques, however, limit fusion performance by introducing gradient reversal artifacts and halos. This work presents an IR and visible image fusion approach based on a three-level decomposition using multi-level co-occurrence filtering (MLCoF), which aims to overcome common shortfalls such as the halo effects seen in existing EPF-based fusion. The source images are decomposed into a base layer, small-scale layers, and large-scale layers using MLCoF. Since the base layer contains most of the low-frequency detail, the conventional averaging-based merging strategy is replaced with a novel foreground information map (FIM) based fusion strategy. Small-scale layers are combined with a max-absolute fusion strategy, and a novel weight-map guided edge-preserving fusion strategy is proposed for integrating the large-scale layers. The fused image is then generated by superposing these layers. Subjective visual and objective quantitative analysis shows that the proposed technique outperforms other modern fusion methods, including several deep-learning techniques. The results are visually superior, retain details from both source images, and are free of gradient reversal and halo artifacts.
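As a rough illustration of the pipeline the abstract describes, the Python sketch below mimics the three-level decomposition and the per-layer fusion rules. It is a minimal sketch under stated assumptions, not the authors' method: the actual co-occurrence filter, FIM construction, and weight-map guided rule are defined in the paper itself, so a Gaussian smoother and simple saliency/selection weights stand in for them here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth(img, sigma):
    """Placeholder edge-aware smoother. The paper uses a co-occurrence
    filter (CoF); a plain Gaussian stands in purely for illustration."""
    return gaussian_filter(img, sigma)

def three_level_decompose(img, sigma_small=2.0, sigma_large=8.0):
    """Split an image into base, small-scale, and large-scale layers
    via two successive smoothing passes (multi-level filtering)."""
    coarse = smooth(img, sigma_small)   # removes small-scale detail
    base = smooth(coarse, sigma_large)  # removes large-scale detail
    small_scale = img - coarse          # fine textures and edges
    large_scale = coarse - base         # larger structures
    return base, small_scale, large_scale

def fuse(ir, vis):
    """Fuse float-valued IR and visible images of the same shape."""
    b_ir, s_ir, l_ir = three_level_decompose(ir)
    b_vis, s_vis, l_vis = three_level_decompose(vis)

    # Base layer: the paper uses a foreground information map (FIM);
    # a simple saliency-style weight is sketched here instead.
    sal_ir = np.abs(ir - smooth(ir, 8.0))
    sal_vis = np.abs(vis - smooth(vis, 8.0))
    w = sal_ir / (sal_ir + sal_vis + 1e-8)
    base = w * b_ir + (1.0 - w) * b_vis

    # Small-scale layers: max-absolute selection, as stated in the abstract.
    small = np.where(np.abs(s_ir) >= np.abs(s_vis), s_ir, s_vis)

    # Large-scale layers: the paper proposes a weight-map guided
    # edge-preserving rule; a smoothed max-absolute weight approximates it.
    wl = smooth((np.abs(l_ir) >= np.abs(l_vis)).astype(float), 2.0)
    large = wl * l_ir + (1.0 - wl) * l_vis

    # Reconstruction by superposition of the fused layers.
    return base + small + large
```

In practice the source images would be registered, converted to float, and normalized to a common range before calling fuse; the sigma values above are arbitrary illustration choices, not parameters from the paper.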
