Abstract

Image fusion is widely regarded as an effective technique for reducing uncertainty and improving image clarity. It is a strategy of combining the relevant information from a group of source images into a single resultant (fused) image that offers higher quality and informativeness. To date, image fusion has largely relied on techniques such as the Discrete Wavelet Transform (DWT) or pixel-based methods. These established methods have limited effectiveness, and they fail to deliver desirable properties such as edge preservation, high spatial resolution, and shift invariance. To overcome these shortcomings, this paper proposes a hybrid approach called Principal Component Stationary Wavelet Transform (PC-SWT) that combines Principal Component Analysis (PCA) and the Stationary Wavelet Transform (SWT). SWT is a wavelet transform designed to compensate for the lack of translation invariance in the DWT. PCA is a systematic method that applies an orthogonal transformation to convert a set of observations of possibly correlated variables into principal components, which are linearly uncorrelated. Compared with conventional methods, PC-SWT aims to produce a more efficient, clearer, higher-quality fused image. The fused image is expected to preserve the edges and spatial resolution of its sources, and the method also addresses shift invariance.
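The PCA step described above can be sketched in a few lines: each source image becomes one row of an observation matrix, and the leading eigenvector of the covariance matrix supplies the fusion weights. This is a minimal illustrative sketch of the standard PCA fusion recipe, not the paper's implementation; the function names are hypothetical.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Return scalar weights (w_a, w_b) from the leading principal component.

    Each image is flattened into one row of an observation matrix; the
    eigenvector of the 2x2 covariance matrix with the largest eigenvalue
    gives the relative contribution of each source image.
    """
    data = np.stack([img_a.ravel(), img_b.ravel()], axis=0).astype(float)
    cov = np.cov(data)                      # 2x2 covariance of the two sources
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    pc = np.abs(eigvecs[:, -1])             # leading principal component
    w = pc / pc.sum()                       # normalise so the weights sum to 1
    return w[0], w[1]

def pca_fuse(img_a, img_b):
    """Weighted sum of the two sources using PCA-derived weights."""
    w_a, w_b = pca_fusion_weights(img_a, img_b)
    return w_a * img_a + w_b * img_b
```

Because the weights are derived from the data's own covariance, the more variable (information-rich) source automatically receives the larger weight.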

Highlights

  • Multisensory Image Fusion is defined as the process of combining useful information from two or more images into a single, more informative image. (Revised manuscript received on June 08, 2020.)
  • As space-borne sensors increase at a rapid pace, the need for image fusion algorithms in remote sensing applications has grown [2]. Most image processing tasks require a single image with both high spatial and high spectral resolution.


Summary

INTRODUCTION

Multisensory Image Fusion is defined as the process of combining useful information from two or more images to form a single image. The result is a more informative image than any of the inputs. As space-borne sensors increase at a rapid pace, the need for image fusion algorithms in remote sensing applications has grown [2]. Most image processing tasks require a single image with both high spatial and high spectral resolution. Image fusion methods make it possible to combine such information from multiple sources. Commonly used image fusion methods include wavelet transform fusion, fusion based on the Intensity-Hue-Saturation (IHS) transformation, and Principal Component Analysis [5].
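The wavelet-style fusion mentioned above can be illustrated with a simplified, shift-invariant decomposition: split each source into a smooth approximation band and a detail band (without downsampling, in the spirit of the à trous/SWT scheme), average the approximations, take the stronger detail coefficient at each pixel, and reconstruct. This is a hedged stand-in for a single SWT level, not the paper's PC-SWT algorithm, and all names here are illustrative.

```python
import numpy as np

def smooth(img):
    """Undecimated low-pass: average each pixel with its 4 neighbours.

    Edge padding keeps the output the same size as the input, so the
    decomposition is shift-invariant (no downsampling step, unlike DWT).
    """
    p = np.pad(img, 1, mode="edge")
    return (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
            + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0

def swt_style_fuse(img_a, img_b):
    """Fuse two images via a one-level approximation/detail split."""
    low_a, low_b = smooth(img_a), smooth(img_b)
    det_a, det_b = img_a - low_a, img_b - low_b      # detail = image - approximation
    fused_low = 0.5 * (low_a + low_b)                # average the approximation bands
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return fused_low + fused_det                     # reconstruction: low + detail
```

The max-absolute rule on the detail band is what preserves edges: wherever one source has a stronger edge response, its detail coefficient survives into the fused result.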

Stationary Wavelet Transform (SWT)
Principal Component Analysis (PCA)
Applications of Image Fusion
Fusion in the Presence of Blur
Image Registration
Video Fusion
Methodology with Pseudocode of the PC-SWT Algorithm
Flowchart of the PC-SWT Algorithm
Performance Assessment Criteria
CONCLUSION

