Abstract

Modern technology has enabled the development of low-cost wireless imaging sensors of various modalities that can be deployed to monitor scenes. This advance has been strongly motivated by both military and civilian applications, including health care, battlefield surveillance, and environmental monitoring. Multimodal sensors provide diverse degradation, thermal, and visual characteristics. Image fusion combines visual information from various sources into a single representation to facilitate processing by an operator or a computer-vision system. Fusion techniques can be divided into spatial- and transform-domain methods.1 The latter enable efficient identification of an image’s salient features. Transformations that have been suggested for image fusion include dual-tree wavelet transforms and pyramid decompositions.1

We recently proposed2,3 an image-fusion framework with improved performance that is based on image-analysis bases trained using independent-component analysis (ICA). Receptive fields of simple cells in the mammalian primary visual cortex are usually spatially localized, oriented, and bandpass. Such filter responses can be derived from unsupervised learning of independent visual features or sparse linear codes for natural scenes.4 Training bases with ICA4 for image denoising via sparse-code shrinkage improved performance relative to wavelets. The bases are trained by extracting a population of local patches from images of similar content and processing them with the FastICA algorithm4 to estimate the transformation and its inverse. ICA bases are closely related to wavelets and Gabor functions because they represent localized edge features. They have more degrees of freedom than wavelets, however, because they adapt to arbitrary orientations.
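The patch-based training described above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' exact procedure: the patch size, number of patches, component count, and iteration budget are illustrative assumptions, and FastICA is reduced here to PCA whitening followed by a symmetric fixed-point iteration with a tanh nonlinearity.

```python
import numpy as np

def extract_patches(image, patch_size=8, n_patches=2000, seed=0):
    """Sample random local patches, flatten them into row vectors,
    and remove each patch's mean (its local DC component)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - patch_size + 1, n_patches)
    xs = rng.integers(0, w - patch_size + 1, n_patches)
    patches = np.stack([image[y:y + patch_size, x:x + patch_size].ravel()
                        for y, x in zip(ys, xs)])
    return patches - patches.mean(axis=1, keepdims=True)

def train_ica_bases(X, n_components=16, n_iter=200, seed=0):
    """Estimate an ICA analysis transform from patch data X (n_patches x dim)
    via PCA whitening plus symmetric FastICA (tanh nonlinearity).
    Returns T such that u = T @ patch gives ICA-domain coefficients."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    d, E = np.linalg.eigh(cov)
    top = np.argsort(d)[::-1][:n_components]
    K = E[:, top] / np.sqrt(d[top])          # whitening matrix (dim x C)
    Z = X @ K                                # whitened data (n_patches x C)
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.normal(size=(n_components, n_components)))[0]
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)                 # FastICA fixed-point update
        W = G.T @ Z / len(Z) - np.diag((1.0 - G**2).mean(axis=0)) @ W
        U, _, Vt = np.linalg.svd(W)          # symmetric decorrelation:
        W = U @ Vt                           # W <- (W W^T)^(-1/2) W
    return W @ K.T                           # analysis transform (C x dim)
```

The rows of the returned matrix play the role of the trained analysis bases; a synthesis (inverse) transform would be obtained from its pseudo-inverse.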
Discrete and dual-tree wavelet transforms have only two and six distinct orientations, respectively.4 ICA bases do not offer a multilevel representation, as wavelets and pyramid decompositions do, nor are they shift invariant. This invariance can, however, be approximated by applying the transformation to overlapping local patches in a sliding-window fashion.

Figure 1. Proposed image-fusion framework. T{ } and T−1{ }: Independent-component analysis (ICA)-trained transformations and their inverse. uk(t): Image coefficients in the ICA domain.
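As a minimal illustration of the transform-domain pipeline of Figure 1, the sketch below fuses two registered images patch by patch with a "choose-max" coefficient rule. The non-overlapping tiling, the specific fusion rule, and the matrices `T` and `T_inv` (stand-ins for trained ICA analysis/synthesis bases) are simplifying assumptions here, not the authors' exact scheme.

```python
import numpy as np

def fuse_coefficients(u1, u2):
    """Choose-max rule: per coefficient, keep the source with the larger
    magnitude, since salient features yield large transform coefficients."""
    return np.where(np.abs(u1) >= np.abs(u2), u1, u2)

def fuse_images(img1, img2, T, T_inv, patch=8):
    """Transform-domain fusion over non-overlapping patches.
    T / T_inv play the roles of T{ } and T^-1{ } in Figure 1."""
    h, w = img1.shape
    out = np.zeros((h, w))
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p1 = img1[y:y + patch, x:x + patch].ravel()
            p2 = img2[y:y + patch, x:x + patch].ravel()
            u = fuse_coefficients(T @ p1, T @ p2)  # u_k(t) in the ICA domain
            out[y:y + patch, x:x + patch] = (T_inv @ u).reshape(patch, patch)
    return out
```

With the identity transform this reduces to per-pixel max-magnitude selection; with trained ICA bases, `T_inv` would be the (pseudo-)inverse of the learned analysis matrix, and local patch means would be handled separately.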
