Abstract

Medical image fusion is used to derive useful complementary information from multimodality imaging. The proposed methodology introduces a fusion approach for robust and automatic extraction of information from segmented images of different modalities. The fusion strategy is implemented in the multiresolution domain, using a wavelet transform together with a genetic algorithm-based search to extract maximum complementary information. Analysing the input images at multiple resolutions captures finer details and improves the quality of the composite fused image. The proposed approach is also independent of any manual marking or knowledge of fiducial points and starts the fusion procedure automatically. The performance of the fusion scheme applied to segmented brain images has been evaluated by computing mutual information as the similarity measure. Prior to the fusion process, the images are segmented using different segmentation techniques, namely fuzzy C-means and Markov random field models. Experimental results show that the Gibbs- and ICM-based segmentation approaches, both derived from the Markov random field model, outperform fuzzy C-means and are therefore used prior to the GA-based fusion process for MR T1, MR T2 and MR PD images of a section of the human brain.
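To make the pipeline concrete, the following is a minimal sketch of wavelet-domain fusion evaluated with mutual information. It substitutes a simple fixed rule (averaging the approximation band and selecting the larger-magnitude detail coefficients) for the paper's GA-optimised fusion rule, and the wavelet choice ('db1', two levels), image sizes and synthetic inputs are illustrative assumptions rather than details taken from the abstract.

```python
import numpy as np
import pywt


def fuse_wavelet(img_a, img_b, wavelet="db1", level=2):
    """Fuse two registered, equally sized images in the wavelet domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Average the coarse approximation band.
    fused = [(ca[0] + cb[0]) / 2.0]
    # For each detail band, keep the coefficient with the larger magnitude
    # (a stand-in for the GA-searched fusion rule described in the paper).
    for det_a, det_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(det_a, det_b)))
    return pywt.waverec2(fused, wavelet)


def mutual_information(x, y, bins=64):
    """Mutual information between two images from their joint histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))


# Synthetic arrays standing in for segmented MR T1 and MR T2 slices.
t1 = np.random.rand(256, 256)
t2 = np.random.rand(256, 256)
fused = fuse_wavelet(t1, t2)
print("MI(fused, T1) =", mutual_information(fused, t1))
print("MI(fused, T2) =", mutual_information(fused, t2))
```

In the paper's setting, the sum of the mutual information between the fused image and each source image would serve as the fitness the genetic algorithm maximises when searching over fusion parameters.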
