Abstract

Most existing image fusion methods assume that at least one input image contains high-quality information at every location in the observed scene; consequently, these methods fail when all input images are degraded. To address this issue, this study proposes a novel fusion framework that integrates spectral total variation (TV)-based image fusion with image enhancement. For the spatially varying multiscale decompositions generated by the spectral TV framework, this study verifies that the decomposition components are modeled more efficiently by a tailed $\alpha$-stable-based random variable distribution (TRD) than by the commonly used Gaussian distribution. Accordingly, salience and match measures based on the TRD are proposed to fuse each sub-band of the decomposition, and spatial intensity information is adopted to fuse the remaining decomposition components. To enhance the fused image simultaneously, a family of sub-band adaptive gain functions based on the TV spectrum and spatial variation is constructed for the fused multiscale decompositions. Finally, extensive experiments with various multisensor image pairs are conducted to evaluate the proposed method. The results show that, even when the input images are degraded, the fused image obtained by the proposed method achieves significant improvement in edge detail and contrast while extracting the main features of the input images, thereby outperforming state-of-the-art methods.
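As a rough illustration of the distribution-modeling claim above, the sketch below fits both a Gaussian and an $\alpha$-stable distribution to the coefficients of a multiscale band-pass decomposition and compares their log-likelihoods. This is not the paper's implementation: a simple difference-of-Gaussians pyramid stands in for the spectral TV decomposition, and SciPy's generic `levy_stable` distribution stands in for the paper's TRD.

```python
# Minimal sketch (assumptions: a difference-of-Gaussians pyramid replaces the
# spectral TV decomposition, and scipy's levy_stable replaces the paper's TRD).
import numpy as np
from scipy import ndimage, stats

def bandpass_decomposition(img, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Crude multiscale stand-in for the spectral TV decomposition:
    each band is the difference between successive Gaussian blurs."""
    blurred = [ndimage.gaussian_filter(img, s) for s in sigmas]
    bands = [img - blurred[0]]
    bands += [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]
    return bands, blurred[-1]  # band-pass components + low-pass remainder

def compare_fits(coeffs, n_sample=500, seed=0):
    """Log-likelihoods of Gaussian vs. alpha-stable fits on a subsample
    (levy_stable.fit is slow, so only a few hundred coefficients are used)."""
    rng = np.random.default_rng(seed)
    x = rng.choice(coeffs.ravel(), size=min(n_sample, coeffs.size), replace=False)
    mu, sd = stats.norm.fit(x)
    ll_gauss = stats.norm.logpdf(x, mu, sd).sum()
    alpha, beta, loc, scale = stats.levy_stable.fit(x)
    ll_stable = stats.levy_stable.logpdf(x, alpha, beta, loc, scale).sum()
    return ll_gauss, ll_stable, alpha

if __name__ == "__main__":
    img = np.random.rand(128, 128)  # placeholder; use a real sensor image
    bands, _ = bandpass_decomposition(img)
    for k, band in enumerate(bands):
        llg, lls, alpha = compare_fits(band)
        print(f"band {k}: Gaussian LL={llg:.1f}, stable LL={lls:.1f}, alpha={alpha:.2f}")
```

On natural or multisensor imagery, the fitted $\alpha < 2$ and the higher stable log-likelihood would indicate the heavy-tailed behavior the abstract attributes to the decomposition components; on pure noise the two fits coincide, since the Gaussian is the $\alpha = 2$ stable case.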
