Abstract

Image fusion is the process of combining information from two or more acquired images into a single composite image that is more informative and better suited to visual inspection or further computer processing. The aim is to integrate the complementary and redundant information present in multiple source images so that the composite describes the scene better than any individual input, while reducing uncertainty, minimizing redundancy in the output, and maximizing the information relevant to a given application or task. This paper focuses on feature-level image fusion based on the dual-tree complex wavelet transform (DT-CWT). The DT-CWT and the watershed transform are used to segment the features of the input images, either jointly or separately, to produce a region map. The characteristics of each region are then calculated, and a region-based approach is used to fuse the images region by region. The input images are assumed to be already registered, since misregistration is a major source of error in image fusion.
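
The region-based flow described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes two pre-registered grayscale NumPy arrays of equal shape, obtains a joint region map with scikit-image's watershed applied to the gradient of the averaged inputs, and substitutes a simple per-region intensity variance for the DT-CWT coefficient energy as the region activity measure.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed


def fuse_region_based(img_a, img_b):
    """Fuse two pre-registered grayscale images region by region (sketch)."""
    img_a = img_a.astype(float)
    img_b = img_b.astype(float)

    # Joint segmentation: a watershed over the gradient magnitude of the
    # averaged inputs yields a single region map shared by both sources.
    gradient = sobel((img_a + img_b) / 2.0)
    markers, _ = ndi.label(gradient < 0.1 * gradient.max())
    regions = watershed(gradient, markers)

    # Per-region activity: here simply the variance of pixel intensities
    # inside each region, standing in for the DT-CWT coefficient energy
    # used in the paper.
    fused = np.empty_like(img_a)
    for label in np.unique(regions):
        mask = regions == label
        activity_a = img_a[mask].var()
        activity_b = img_b[mask].var()
        # Copy the whole region from the source with the higher activity.
        fused[mask] = img_a[mask] if activity_a >= activity_b else img_b[mask]
    return fused
```

In a DT-CWT implementation the same select-per-region rule would be applied to the complex wavelet coefficients at each level (with the region map downsampled to match), followed by an inverse transform to reconstruct the fused image.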
