Abstract

Feature extraction is the collection of the detailed information needed from a given source for further analysis. The quality of a fused image depends on many parameters, particularly directional selectivity and shift invariance. Traditional wavelet-based transforms produce ringing distortions and artifacts because of their poor directionality and lack of shift invariance. A hybrid wavelet fusion algorithm that combines the Dual-Tree Complex Wavelet Transform (DTCWT) with the Stationary Wavelet Transform (SWT) overcomes these deficiencies and preserves both directional selectivity and shift invariance. SWT first decomposes each source image into approximate and detail sub-bands; the approximate sub-bands are then further decomposed with DTCWT. Within this decomposition, Texture Energy Measures (TEM) are applied to the low-frequency components, while the absolute-maximum fusion rule is applied to the high-frequency components. The absolute-maximum rule is also applied to the SWT detail sub-bands. The texture-energy rule classifies the image content effectively and improves the accuracy of the fused output. Finally, the inverse SWT is applied to generate the fused image. Experimental results show that the proposed approach outperforms previously reported approaches. This paper thus proposes a fusion method based on SWT, DTCWT, and TEM that addresses the inherent defects of both the Parameter-Adaptive Dual-Channel Pulse-Coupled Neural Network (PA-DCPCNN) and Multiscale Transform-Convolutional Sparse Representation (MST-CSR) methods.
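For illustration, the fusion pipeline described above can be sketched in Python using PyWavelets (pywt) for the SWT stage and the dtcwt package for the complex-wavelet stage. This is a minimal sketch, not the authors' implementation: it assumes two pre-registered, even-sized grayscale inputs, and a simple windowed local-energy map stands in for the paper's Texture Energy Measures, whose exact kernels are not specified in the abstract.

```python
# Hybrid SWT + DTCWT image fusion sketch (hypothetical implementation).
# Assumes img_a and img_b are registered grayscale arrays of the same shape,
# with dimensions divisible by 2**swt_levels (e.g. 256 x 256).
import numpy as np
import pywt
import dtcwt
from scipy.ndimage import uniform_filter

def local_energy(x, size=7):
    """Windowed texture-energy map: local mean of squared coefficients.
    A simplified stand-in for Laws-style Texture Energy Measures."""
    return uniform_filter(x.astype(float) ** 2, size=size)

def abs_max(a, b):
    """Absolute-maximum fusion rule: keep the larger-magnitude coefficient."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse(img_a, img_b, wavelet="db2", swt_levels=1, dtcwt_levels=3):
    # 1) One-level SWT splits each source into approximate + detail sub-bands.
    (cA_a, (cH_a, cV_a, cD_a)), = pywt.swt2(img_a, wavelet, level=swt_levels)
    (cA_b, (cH_b, cV_b, cD_b)), = pywt.swt2(img_b, wavelet, level=swt_levels)

    # 2) DTCWT decomposes the approximate sub-bands further.
    t = dtcwt.Transform2d()
    pa = t.forward(cA_a, nlevels=dtcwt_levels)
    pb = t.forward(cA_b, nlevels=dtcwt_levels)

    # 3) Low-frequency (lowpass) components: texture-energy rule, keeping the
    #    coefficient whose neighbourhood carries more texture energy.
    ea, eb = local_energy(pa.lowpass), local_energy(pb.lowpass)
    low = np.where(ea >= eb, pa.lowpass, pb.lowpass)

    # 4) High-frequency (complex highpass) components: absolute-maximum rule.
    highs = tuple(abs_max(ha, hb)
                  for ha, hb in zip(pa.highpasses, pb.highpasses))

    # 5) Inverse DTCWT rebuilds the fused approximate sub-band.
    cA_f = t.inverse(dtcwt.Pyramid(low, highs))

    # 6) SWT detail sub-bands: absolute-maximum rule, then inverse SWT.
    details = (abs_max(cH_a, cH_b), abs_max(cV_a, cV_b), abs_max(cD_a, cD_b))
    return pywt.iswt2([(cA_f, details)], wavelet)
```

Both transforms here are undecimated or nearly shift-invariant, which is the point of the hybrid design: selecting coefficients by energy or magnitude does not then introduce the ringing that decimated DWT fusion suffers from. The window size, wavelet, and level counts are illustrative defaults, not values taken from the paper.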
