Abstract

Joint segmentation and registration of multi-modality images is a crucial step in image preprocessing. A significant challenge is the sensitivity of joint segmentation and registration models to noise. During registration of multi-modal images, the similarity measure plays a vital role as the standard by which alignment quality is judged. Accordingly, an improved joint model for registering and segmenting multi-modality images is proposed, utilising the Bhattacharyya distance measure to achieve better noise robustness than the existing model based on the mutual information metric. The proposed model is applied to various noisy medical and synthetic images of multiple modalities. The dataset images used in this study were obtained from the well-known, freely available BRATS 2015 and CHAOS datasets, on which the proposed model produces satisfactory results compared with the existing model. Experimental results show that the proposed model outperforms the existing model in terms of the Bhattacharyya distance measure on noisy images. Statistical analysis and comparison are performed using the relative reduction of the new distance measure, the Dice similarity coefficient, the Jaccard similarity coefficient and the Hausdorff distance.
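For context, the Bhattacharyya distance between two discrete probability distributions (e.g. normalised intensity histograms of the fixed and moving images) can be sketched as below. This is a generic illustration of the measure, not the paper's implementation; the function name and histogram inputs are hypothetical.

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance D_B = -ln(BC) between two discrete
    probability distributions p and q (sequences summing to 1).
    BC is the Bhattacharyya coefficient, sum over sqrt(p_i * q_i)."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)
```

Identical distributions give BC = 1 and hence a distance of 0; the distance grows as the distributions overlap less, which is what a registration objective minimises.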
