Abstract

Medical Image Fusion (MIF) can significantly improve medical diagnosis, treatment planning and image-guided surgery by providing high-quality, information-rich medical images. Traditional MIF techniques suffer from common drawbacks such as contrast reduction, edge blurring and image degradation. Pulse-Coupled Neural Network (PCNN)-based MIF techniques outperform the traditional methods in producing high-quality fused images thanks to their global coupling and pulse synchronization properties; however, selecting the significant features that motivate the PCNN is still an open problem and plays a major role in determining the contribution of each source image to the fused image. In this paper, a medical image fusion algorithm based on the Non-subsampled Contourlet Transform (NSCT) and the Pulse-Coupled Neural Network (PCNN) is proposed to fuse images from different modalities. The local average energy is used to motivate the PCNN because of its ability to capture salient image features such as edges, contours and textures. The proposed approach produces a fused image with high contrast and improved content compared with other image fusion techniques, without loss of significant detail at either the visual or the quantitative level.
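
For orientation, the following is a minimal Python sketch of the fusion flow described above. It is an illustration only: a simple low-pass/high-pass split stands in for the NSCT decomposition (no NSCT library is implied), the average rule used for the base band is an assumption, and local_average_energy and simplified_pcnn refer to the hedged helper sketches given later in this summary.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def toy_decompose(img, size=9):
        # Stand-in for the NSCT: a crude split into a smooth base band and a detail band.
        img = np.asarray(img, dtype=float)
        low = uniform_filter(img, size=size)
        return low, img - low

    def fuse_pair(img_a, img_b):
        low_a, high_a = toy_decompose(img_a)
        low_b, high_b = toy_decompose(img_b)
        low_f = 0.5 * (low_a + low_b)            # averaging the base band is an assumption
        # The PCNN is motivated by the local average energy; the source whose neuron
        # fired more often contributes the detail coefficient at that location.
        fires_a = simplified_pcnn(local_average_energy(high_a))
        fires_b = simplified_pcnn(local_average_energy(high_b))
        high_f = np.where(fires_a >= fires_b, high_a, high_b)
        return low_f + high_f                    # stand-in for the inverse transform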

Highlights

  • Numerous imaging modalities, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound, Positron Emission Tomography (PET), and Single Photon Emission Computed Tomography (SPECT), reflect information about the human body from different views

  • The Pulse-Coupled Neural Network (PCNN) parameters were configured to k × l = 3 × 3, W = [0.707 1 0.707; 1 0 1; 0.707 1 0.707], β = 0.2, and a 3 × 3 sliding window for the local average energy (a hedged code sketch using these values follows these highlights)

  • Since fusing medical images manually is time consuming and subject to human error, this paper presents a Medical Image Fusion (MIF) approach based on the Non-subsampled Contourlet Transform (NSCT) and a local average energy-motivated PCNN to fuse the medical images
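
Below is a minimal sketch of a simplified PCNN iteration using the parameter values listed in the highlights (the 3 × 3 linking window W and β = 0.2). The threshold decay constant, threshold amplitude, iteration count, and the firing-count output are illustrative assumptions, not values taken from the paper.

    import numpy as np
    from scipy.ndimage import convolve

    def simplified_pcnn(stimulus, beta=0.2, alpha_theta=0.2, v_theta=20.0, n_iter=200):
        # alpha_theta, v_theta and n_iter are assumed values for illustration.
        stimulus = np.asarray(stimulus, dtype=float)
        W = np.array([[0.707, 1.0, 0.707],
                      [1.0,   0.0, 1.0],
                      [0.707, 1.0, 0.707]])       # 3 x 3 linking weights from the highlights
        Y = np.zeros_like(stimulus)               # firing output of each neuron
        theta = np.ones_like(stimulus)            # dynamic threshold
        fire_count = np.zeros_like(stimulus)      # accumulated firing times
        for _ in range(n_iter):
            L = convolve(Y, W, mode='constant')   # linking input from neighbouring neurons
            U = stimulus * (1.0 + beta * L)       # internal activity with beta = 0.2
            Y = (U > theta).astype(float)         # a neuron pulses when U exceeds its threshold
            theta = np.exp(-alpha_theta) * theta + v_theta * Y  # decay, then reset after firing
            fire_count += Y
        return fire_count                         # larger counts mark more salient regions

Feeding the local average energy of each sub-band as the stimulus ties the firing counts to the edge, contour and texture content of the source images, which is what the fusion step compares.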


Summary

INTRODUCTION

Numerous imaging modalities, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound, Positron Emission Tomography (PET), and Single Photon Emission Computed Tomography (SPECT), reflect information about the human body from different views. A way is needed to extract and combine information from different modalities into clear, information-rich images that support more reliable and accurate diagnosis. Combining such information manually is time consuming, subject to human error and dependent on the radiologist's experience, which may produce misleading results. Any fusion scheme should fulfill some generic requirements; first, all the salient features and significant information in the source images should be present in the fused result. Activity level refers to the local energy, or the amount of information present in an image pixel or coefficient [4]. It can be measured for a single pixel value or by taking the surrounding neighbors of the pixel into consideration. The most common fusion rules are Min, Max and Average.
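
As an illustration of the activity-level measure and the common fusion rules mentioned above, the sketch below computes the local average energy over a 3 × 3 neighbourhood and applies the Min, Max and Average rules to a pair of coefficient maps. The function names and the magnitude-based selection are assumptions for illustration, not the paper's exact formulation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_average_energy(coeffs, window=3):
        # Mean of the squared coefficients over a window x window neighbourhood.
        coeffs = np.asarray(coeffs, dtype=float)
        return uniform_filter(coeffs ** 2, size=window, mode='reflect')

    def fuse_coefficients(coeffs_a, coeffs_b, rule='max'):
        coeffs_a = np.asarray(coeffs_a, dtype=float)
        coeffs_b = np.asarray(coeffs_b, dtype=float)
        if rule == 'max':    # keep the coefficient with the larger magnitude
            return np.where(np.abs(coeffs_a) >= np.abs(coeffs_b), coeffs_a, coeffs_b)
        if rule == 'min':    # keep the coefficient with the smaller magnitude
            return np.where(np.abs(coeffs_a) <= np.abs(coeffs_b), coeffs_a, coeffs_b)
        return 0.5 * (coeffs_a + coeffs_b)        # 'average' rule

Neighbourhood-based measures such as the local average energy are generally preferred over single-pixel measures because they are less sensitive to noise in any one coefficient.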

Mathematical methods
Simplified Pulse-Coupled Neural Network
RESULTS AND DISCUSSION
CONCLUSION