Abstract

The objective of image fusion is to combine information from multiple images of the same scene into a single image that is better suited to human and machine perception or to further image-processing tasks. In this paper, a generic image fusion framework based on multiscale decomposition is studied. The framework provides the freedom to choose different multiscale decomposition methods and different fusion rules, and it encompasses all of the existing multiscale-decomposition-based fusion approaches we found in the literature that do not assume a statistical model for the source images. Different image fusion approaches are investigated within this framework. Some evaluation measures are suggested and applied to compare the performance of these fusion schemes for a digital camera application. The comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider.
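The general structure of multiscale-decomposition-based fusion can be illustrated with a minimal sketch (an assumption for illustration, not the paper's specific implementation): each source image is decomposed into a multiscale representation, the coefficients are combined level by level with a fusion rule, and the fused image is reconstructed from the combined representation. The sketch below uses a simple Laplacian-pyramid decomposition with a choose-max-absolute rule for detail coefficients and averaging for the coarsest approximation; the box-filter pyramid and both rule choices are illustrative, and image dimensions are assumed divisible by `2**levels`.

```python
import numpy as np

def downsample(img):
    # 2x2 box-filter blur followed by decimation by 2.
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    # Nearest-neighbour expansion back to the given shape.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Decompose: each level stores the detail lost by downsampling.
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))
        cur = small
    pyr.append(cur)  # coarsest approximation
    return pyr

def fuse(img_a, img_b, levels=3):
    # Decompose both sources into Laplacian pyramids.
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    # Fusion rule: keep the detail coefficient with larger magnitude.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    # Average the coarsest approximation levels.
    fused.append(0.5 * (pa[-1] + pb[-1]))
    # Reconstruct: upsample and add detail levels back, coarse to fine.
    out = fused[-1]
    for detail in reversed(fused[:-1]):
        out = upsample(out, detail.shape) + detail
    return out
```

The framework discussed in the paper generalizes exactly these two choices: `laplacian_pyramid` stands in for any multiscale decomposition, and the choose-max/average pair stands in for any coefficient-level fusion rule.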
