Abstract

Positron emission tomography (PET) and magnetic resonance (MR) images are often used in clinical diagnosis. These images offer two different interpretations of the same brain region: PET captures the functional activity and functional connectivity of the brain, while MR provides excellent structural information. Fusing both types of information into a single image enables better analysis, but the differing sensor technologies make it difficult to devise a proper fusion scheme. Over the past few years, sparse representation (SR) has achieved great success in fusing PET and MR images, and some efforts have been made to extend SR to different feature spaces. However, these methods suffer from a major drawback: the absence of a mapping between the two feature spaces. To address this problem, a nonparametric Bayesian technique is employed to learn the dictionaries and the mapping for the two feature spaces. In the proposed method, the dictionaries for the two feature spaces and the mapping between them are obtained adaptively from the input PET and MR images. In addition, the method automatically determines the number of dictionary atoms and the sparsity level of the coefficients. The proposed method was compared, both visually and quantitatively, with several well-known SR-based image fusion methods.
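To make the coupled-dictionary idea concrete, the Python sketch below sparse-codes co-registered MR and PET patches over two separate dictionaries, fits a linear mapping between the two coefficient spaces, and fuses the codes with a simple max-activity rule. This is only an informal illustration, not the paper's algorithm: the proposed method infers the dictionaries, the mapping, the number of atoms, and the sparsity level jointly within a nonparametric Bayesian model, whereas the sketch fixes the dictionary size and sparsity and substitutes greedy pursuit plus least squares. All names (omp, D_mr, D_pet, M) are hypothetical.

# Illustrative sketch only. Fixed-size random dictionaries, greedy sparse coding,
# and a least-squares mapping stand in for the adaptively learned, nonparametric
# Bayesian model described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most k atoms of D."""
    residual, support = x.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

# Synthetic stand-ins for vectorized 8x8 patches from co-registered MR and PET images.
n_patches, dim, n_atoms, sparsity = 500, 64, 128, 5
mr_patches = rng.standard_normal((dim, n_patches))
pet_patches = rng.standard_normal((dim, n_patches))

# One dictionary per feature space (random, column-normalized here; the paper
# learns both adaptively from the input images).
D_mr = rng.standard_normal((dim, n_atoms)); D_mr /= np.linalg.norm(D_mr, axis=0)
D_pet = rng.standard_normal((dim, n_atoms)); D_pet /= np.linalg.norm(D_pet, axis=0)

# Sparse codes of each modality over its own dictionary.
A_mr = np.stack([omp(D_mr, mr_patches[:, i], sparsity) for i in range(n_patches)], axis=1)
A_pet = np.stack([omp(D_pet, pet_patches[:, i], sparsity) for i in range(n_patches)], axis=1)

# Linear mapping between the two coefficient spaces, fit by least squares so that
# A_pet is approximately M @ A_mr (the paper infers this mapping within the model).
M, *_ = np.linalg.lstsq(A_mr.T, A_pet.T, rcond=None)
M = M.T

# Simple fusion rule: per patch, keep the code with the larger L1 activity after
# bringing the MR codes into the PET coefficient space via the mapping.
mapped_mr = M @ A_mr
choose_pet = np.abs(A_pet).sum(axis=0) >= np.abs(mapped_mr).sum(axis=0)
A_fused = np.where(choose_pet, A_pet, mapped_mr)
fused_patches = D_pet @ A_fused  # reconstruct fused patches in the PET feature space
print(fused_patches.shape)

In a full pipeline these fused patches would be reassembled into an image by averaging overlapping patches; the fusion rule and the choice of reconstruction space are design decisions that the Bayesian formulation in the paper handles differently.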
