Abstract

Purpose
The registration of medical images often suffers from missing correspondences due to inter-patient variations, pathologies and their progression, leading to implausible deformations that cause misregistrations and might eliminate valuable information. Detecting non-corresponding regions simultaneously with the registration process helps generate better deformations and has been investigated thoroughly with classical iterative frameworks, but rarely with deep learning-based methods.

Methods
We present the joint non-correspondence segmentation and image registration network (NCR-Net), a convolutional neural network (CNN) trained on a Mumford–Shah-like functional, transferring the classical approach to the field of deep learning. NCR-Net consists of one encoding and two decoding parts, allowing the network to simultaneously generate diffeomorphic deformations and segment non-correspondences. The loss function is composed of a masked image distance measure and regularization of the deformation field and segmentation output. Additionally, anatomical labels are used for weak supervision of the registration task. No manual segmentations of non-correspondences are required.

Results
The proposed network is evaluated on the publicly available LPBA40 dataset with artificially added stroke lesions and on a longitudinal optical coherence tomography (OCT) dataset of patients with age-related macular degeneration. The LPBA40 data are used to quantitatively assess the segmentation performance of the network, and it is shown qualitatively that NCR-Net can be used for the unsupervised segmentation of pathologies in OCT images. Furthermore, NCR-Net is compared to a registration-only network and to state-of-the-art registration algorithms, showing that it achieves competitive performance and superior robustness to non-correspondences.

Conclusion
NCR-Net, a CNN for simultaneous image registration and unsupervised non-correspondence segmentation, is presented. Experimental results show the network's ability to segment non-correspondence regions in an unsupervised manner and its robust registration performance even in the presence of large pathologies.

Highlights

  • Image registration describes the process of finding an optimal deformation that transforms one image such that it is similar to another image and corresponding image structures align spatially

  • We used rough anatomical labels to introduce weak supervision into the registration task and showed that the non-correspondence segmentation and image registration network (NCR-Net) may be trained fully unsupervised without a significant drop in performance

  • Based on outlier detection in the image distance measure and without the need for manual segmentations of lesions, NCR-Net learned to segment regions containing altered or newly developed pathologies in optical coherence tomography (OCT) images
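The outlier-detection idea in the last highlight can be illustrated with a minimal sketch: pixels whose intensity residual between the warped and fixed images is an outlier are flagged as non-correspondences. The function name and the mean-plus-k-standard-deviations threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def noncorrespondence_mask(warped, fixed, k=2.0):
    """Flag pixels whose squared intensity residual is an outlier.

    Illustrative rule: a pixel is a non-correspondence candidate if its
    residual exceeds mean + k * std of all residuals.
    """
    residual = (warped - fixed) ** 2
    threshold = residual.mean() + k * residual.std()
    return residual > threshold

# Toy example: identical images except for one "lesion" patch.
fixed = np.zeros((8, 8))
warped = fixed.copy()
warped[2:4, 2:4] = 5.0  # simulated newly developed pathology
mask = noncorrespondence_mask(warped, fixed)
```

In the toy example the mask covers exactly the simulated lesion patch; in NCR-Net the segmentation is instead produced by a dedicated decoder trained jointly with the registration.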

Introduction

Image registration describes the process of finding an optimal deformation that transforms one image such that it is similar to another image and corresponding image structures align spatially. This is done by minimizing a loss functional composed of an image distance measure and a regularizer that smooths the deformation field. Such methods are based on the assumption that for every pixel in the moving image there exists a corresponding pixel in the fixed image. This assumption is violated when registering images containing evolving pathologies, and the generation of a ground truth of non-correspondences is often not feasible.
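The loss functional described above, extended with the masking used in the paper, can be sketched as a data term that excludes flagged non-correspondences plus a first-order smoothness penalty on the displacement field. This is a hypothetical NumPy sketch for intuition only; the function name, the SSD data term, and the gradient-based regularizer are illustrative assumptions, not NCR-Net's exact formulation.

```python
import numpy as np

def masked_registration_loss(warped, fixed, disp, mask, alpha=0.1):
    """Masked SSD data term + smoothness regularizer (illustrative).

    warped, fixed : 2D intensity images
    disp          : list of displacement components, e.g. [u, v]
    mask          : boolean array; True marks non-correspondences,
                    which are excluded from the data term
    alpha         : regularization weight
    """
    # Data term: mean squared difference over corresponding pixels only.
    data = ((warped - fixed) ** 2)[~mask].mean()
    # Regularizer: mean squared finite-difference gradient of each
    # displacement component along each axis.
    smooth = sum(np.mean(np.gradient(d, axis=a) ** 2)
                 for d in disp for a in range(d.ndim))
    return data + alpha * smooth
```

With identical images, a zero displacement field, and an empty mask, the loss is zero; growing the mask over a pathology removes that region's residual from the data term, which is the mechanism that keeps non-correspondences from distorting the deformation.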

