DHR-Net: Dynamic Harmonized registration network for multimodal medical images.

Similar Papers
  • Conference Article
  • Cited by 2
  • 10.1109/icassp43922.2022.9746324
Unsupervised Hierarchical Translation-Based Model for Multi-Modal Medical Image Registration
  • May 23, 2022
  • Xinru Dai + 3 more

Deformable registration of multi-modal medical images is a challenging task in medical image processing due to differences in both appearance and structure. We propose an unsupervised hierarchical translation-based model that performs coarse-to-fine registration of multi-modal medical images. The proposed model consists of three parts: a coarse registration network, a modal translation network, and a fine registration network. First, the coarse registration network learns a coarse deformation field, which is applied as structure-preserving information when the modal translation network generates a translated image. Then, the translated image, serving as enhancing information, is combined with the original images to derive a fine deformation field in the fine registration network. The final deformation field is obtained by composing the coarse and fine deformation fields. In this way, the proposed model learns a highly accurate deformation field for multi-modal medical image registration. Experiments on two multi-modal brain image datasets demonstrate the effectiveness of the model.
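The last step above, composing the coarse and fine deformation fields, amounts to resampling one dense displacement field through the other and adding; a minimal 2-D NumPy/SciPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(u_coarse, u_fine):
    """Compose dense 2-D displacement fields of shape (2, H, W).

    The combined warp samples u_coarse at the locations displaced by
    u_fine, then adds u_fine:  u(x) = u_coarse(x + u_fine(x)) + u_fine(x).
    """
    _, h, w = u_fine.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [ys + u_fine[0], xs + u_fine[1]]
    # Linearly resample each component of the coarse field at the
    # fine-warped sample locations, clamping at the image border.
    resampled = np.stack([
        map_coordinates(u_coarse[i], coords, order=1, mode="nearest")
        for i in range(2)
    ])
    return resampled + u_fine
```

Composing the fields, rather than naively summing them, keeps the coarse warp consistent when the fine field moves the sample locations.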

  • Research Article
  • Cited by 3
  • 10.1007/s13534-023-00344-1
L2NLF: a novel linear-to-nonlinear framework for multi-modal medical image registration.
  • Jan 10, 2024
  • Biomedical engineering letters
  • Liwei Deng + 4 more

In recent years, deep learning has driven significant progress in medical image registration, and non-rigid registration methods that use deep neural networks to generate a deformation field achieve higher accuracy. However, unlike monomodal medical image registration, multimodal medical image registration is a more complex and challenging task. This paper proposes a new linear-to-nonlinear framework (L2NLF) for multimodal medical image registration. The first, linear stage is essentially image conversion, which can reduce the difference between two images without changing the authenticity of the medical images, thus transforming multimodal registration into monomodal registration. The second, nonlinear stage is essentially unsupervised deformable registration based on a deep neural network. A brand-new registration network, CrossMorph, is designed: a deep neural network with a U-Net-like structure. As the backbone of the encoder, the volume CrossFormer block can better extract local and global information, and a booster module promotes the extraction of richer deep and shallow features. Qualitative and quantitative experiments on T1 and T2 data from the brains of 240 patients show that L2NLF achieves an excellent conversion effect with very low computation and does not change the authenticity of the converted image at all. Compared with current state-of-the-art registration methods, CrossMorph effectively reduces average surface distance, improves the Dice score, and improves the smoothness of the deformation field. The proposed methods have potential value in clinical application.
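The first, linear stage of L2NLF is described as an image conversion that narrows the intensity gap between modalities without altering image authenticity. As a loose illustration of a linear conversion (not the paper's actual method), one can fit a least-squares intensity mapping between two roughly aligned images:

```python
import numpy as np

def fit_linear_intensity_map(src, dst):
    """Fit dst ~ a * src + b in the least-squares sense; return (a, b)."""
    A = np.stack([src.ravel(), np.ones(src.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return a, b

def apply_linear_intensity_map(src, a, b):
    """Convert src into the intensity range of the target modality."""
    return a * src + b
```

Because the mapping is a global affine function of intensity, it cannot introduce structures that were not in the source image, which is the sense in which a linear conversion preserves authenticity.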

  • Conference Article
  • Cited by 41
  • 10.1109/isbi.2012.6235644
Block-matching strategies for rigid registration of multimodal medical images
  • May 1, 2012
  • Olivier Commowick + 2 more

We propose and evaluate a new block-matching strategy for rigid-body registration of multimodal or multisequence medical images. The classical algorithm first matches points of both images by maximizing the iconic similarity of blocks of voxels around them, then estimates the rigid-body transformation best superposing these matched pairs of points, and iterates these two steps until convergence. In this formulation, only discrete translations are investigated in the block-matching step, which is likely to cause several problems, most notably a difficulty to tackle large rotations and to recover subvoxel transformations. We propose a solution to these two problems by replacing the original, computationally expensive, exhaustive search over translations by a more efficient optimization over rigid-body transformations. The optimal global transformation is then computed based on these local blockwise rigid-body transformations, and these two steps are iterated until convergence. We evaluate the accuracy, robustness, capture range and run time of this new block-matching algorithm on both synthetic and real MRI and PET data, demonstrating faster and better registration than the translation-based block-matching algorithm.
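The second step of each iteration, estimating the rigid-body transformation that best superposes matched point pairs, has a classical closed-form least-squares solution (Kabsch/Procrustes); a sketch with illustrative names:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t.

    P, Q: (d, n) arrays of matched points (d = 2 or 3).
    Closed-form solution via SVD of the cross-covariance matrix.
    """
    mp, mq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - mq) @ (P - mp).T)
    S = np.eye(P.shape[0])
    S[-1, -1] = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = U @ S @ Vt
    t = mq - R @ mp
    return R, t
```

Because the solution is continuous in the point coordinates, it can recover subvoxel transformations that a discrete translation search cannot.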

  • Conference Article
  • Cited by 1
  • 10.1145/3177404.3177427
A New 3D Multi-modality Medical Bone Image Registration Algorithm
  • Dec 27, 2017
  • Huanjie Tao + 1 more

Three-dimensional (3D) multi-modality medical bone image registration is an important technology in surgical applications, especially in large computer-aided orthopedic surgery. To improve registration accuracy, we propose a new 3D multi-modality medical bone image registration algorithm based on local features derived from an analysis of bone structure. In this method, the image Hessian matrix is introduced for local feature extraction, and the local behavior of the 3D bone image is described by the eigenvalues of the Hessian matrix. The method can automatically extract and select the most representative feature points (blob-like structures) at different scales. We then adopt the idea of triangle matching to obtain stereo matching point pairs, and an improved random sample consensus (RANSAC) algorithm is used to remove wrong matches. The correct matching point pairs are used to establish a rigid transformation model, and this non-linear model is solved with the Levenberg-Marquardt algorithm to obtain the geometric transformation parameters. Simulated and real experiments demonstrate that the proposed method achieves high image registration accuracy.
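The core of the feature extraction, reading off the local behavior of the image from the eigenvalues of its Hessian, can be sketched in 2-D (the paper works in 3-D; names and finite-difference scheme here are illustrative):

```python
import numpy as np

def hessian_eigenvalues(img):
    """Per-pixel eigenvalues of the 2-D image Hessian (finite differences).

    Returns an (H, W, 2) array of eigenvalues sorted ascending; at a
    bright blob-like point both eigenvalues are strongly negative.
    """
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Assemble the symmetric 2x2 Hessian at every pixel: shape (H, W, 2, 2).
    H = np.stack([np.stack([gyy, 0.5 * (gyx + gxy)], -1),
                  np.stack([0.5 * (gyx + gxy), gxx], -1)], -2)
    return np.linalg.eigvalsh(H)
```

Thresholding on the signs and magnitudes of these eigenvalues is what lets blob-like candidate points be selected automatically.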

  • Research Article
  • 10.3390/app13021040
Reverse-Net: Few-Shot Learning with Reverse Teaching for Deformable Medical Image Registration
  • Jan 12, 2023
  • Applied Sciences
  • Xin Zhang + 3 more

Multimodal medical image registration has an important role in monitoring tumor growth, radiotherapy, and disease diagnosis. Deep-learning-based methods have made great progress in the past few years. However, their success depends on large training datasets, and model performance decreases due to overfitting and poor generalization when only limited data are available. In this paper, a multimodal medical image registration framework based on few-shot learning is proposed, named reverse-net, which can improve the accuracy and generalization ability of the network by using a few segmentation labels. Firstly, we used the border enhancement network to enhance the ROI (region of interest) boundaries of T1 images, providing high-quality data for the subsequent pixel alignment stage. Secondly, through a coarse registration network, the T1 and T2 images were roughly aligned. Then, the pixel alignment network generated smoother deformation fields. Finally, the reverse teaching network used the warped T1 segmentation labels and warped images generated by the deformation field to teach the border enhancement network more structural knowledge. The performance and generalizability of our model were evaluated on publicly available brain datasets including MRBrainS13DataNii-Pro, SRI24, CIT168, and OASIS. Compared with VoxelMorph, reverse-net obtained a performance improvement of 4.36% in DSC on the publicly available MRBrainS13DataNii-Pro dataset. On the unseen OASIS dataset, reverse-net improved DSC by 4.2% over VoxelMorph, which shows that the model generalizes well. The promising performance on the CIT168 dataset indicates that the model is practicable.
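The DSC figures quoted above are Dice similarity coefficients between warped and reference segmentation masks; the metric itself is simple:

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks (1 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```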

  • Conference Article
  • Cited by 1
  • 10.1109/nrsc.2013.6587923
C16. Multimodal Medical Image Registration Approach Using an Artificial Immune System for Noisy and Partial Data
  • Apr 1, 2013
  • Osama A Omer + 1 more

Improving medical diagnosis, computer-aided surgery, and tumor identification requires accurate image registration approaches. The registration of multimodal medical images is more complicated than that of unimodal medical images due to the variation in luminance between the images. In this paper, an accurate multimodal image registration approach using an artificial immune system (AIS) is proposed, and the affine transformation model is used, in contrast to most related works, which assume a rigid or similarity transformation model. In the proposed approach, the LL bands of the discrete wavelet transform (DWT) of the images are used, and the normalized mutual information (NMI) serves as the fitness function. The approach achieves good results for noiseless images, noisy images, and partial data loss from one of the images. Moreover, it does not need any feature extraction or refinement step. To demonstrate its robustness, the proposed approach has been compared with two multimodal medical image registration approaches.
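The two ingredients of the fitness evaluation, the LL band of a DWT and the NMI score, can be sketched together. The paper does not specify the wavelet, so the single-level Haar transform below is an assumption, and the names are illustrative:

```python
import numpy as np

def haar_ll(img):
    """LL band of a single-level 2-D Haar DWT: average of 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2]
                   + x[0::2, 1::2] + x[1::2, 1::2])

def nmi_fitness(a, b, bins=32):
    """Normalized mutual information of the two LL bands (higher is better).

    NMI = (H(X) + H(Y)) / H(X, Y); it equals 2 for identical images.
    """
    joint, _, _ = np.histogram2d(haar_ll(a).ravel(), haar_ll(b).ravel(),
                                 bins=bins)
    p = joint / joint.sum()
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return (h(p.sum(axis=1)) + h(p.sum(axis=0))) / h(p)
```

Working on the LL band rather than the raw image suppresses high-frequency noise, which is consistent with the reported robustness to noisy images.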

  • Book Chapter
  • 10.1007/978-3-031-26507-5_8
A Systematic Literature Review on Multi-modal Medical Image Registration
  • Jan 1, 2023
  • Marwa Chaabane + 1 more

Context: In today's health care, multi-modal image registration plays an increasingly important role in medical analysis and diagnostics. Multi-modal image registration is a challenging task because imaging conditions change from one modality to another. Objective: The purpose of this work is to determine the current state of the art in medical image registration, shedding light on the techniques that have been used to register medical images from different modality combinations and on the importance of combining different modalities automatically in the medical domain. Method: To fulfill this objective we chose a Systematic Literature Review (SLR), which allows us to collect and structure the existing information in the field of multi-modal image registration. Results: Several automatic solutions based on different registration techniques have been proposed for each specific modality combination. Conclusion: The results support the following conclusions. First, machine learning has in recent years played an important role in the automatic registration process, and a substantial number of studies propose learning-based registration solutions. Second, few solutions in the literature tackle the automatic registration of the histology-CT modality combination. Finally, existing research proposes registration solutions for combinations of only two modalities; very few works suggest a tri-modal combination.

  • Research Article
  • Cited by 9
  • 10.1007/s11548-015-1219-9
Multimodal image registration with joint structure tensor and local entropy.
  • May 28, 2015
  • International journal of computer assisted radiology and surgery
  • Jingya Zhang + 3 more

Nonrigid registration of multimodal medical images remains a challenge in image-guided interventions. A common approach is to use mutual information (MI), which is robust to intensity variations across modalities. However, being based primarily on intensity distributions, MI does not take into account the underlying spatial and structural information of the images, which may lead to local optima. To address this challenge, this paper proposes a two-stage multimodal nonrigid registration scheme with joint structural information and local entropy. In our scheme, both the reference image and the floating image are first converted to a common space. A unified representation in the common space is constructed by fusing the structure tensor (ST) trace with the local entropy (LE). Through this representation, which reflects image geometry uniformly across modalities, the complicated deformation field is estimated using the L1 or L2 distance. We compared our approach to four other methods: (1) using LE alone, (2) using ST alone, (3) using spatially weighted LE, and (4) the conventional MI-based method. Quantitative evaluations on 80 multimodal image pairs of different organs, including 50 pairs of MR images with artificial deformations, 20 pairs of brain MR images, and 10 pairs of breast images, showed that our proposed method outperformed the comparison methods, and Student's t test demonstrated statistically significant improvement in registration accuracy. The two-stage registration with joint ST and LE outperformed the conventional MI-based method for multimodal images; both the ST and the LE contributed to the improved accuracy.
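The two ingredients of the unified representation, the structure-tensor trace and the local entropy, can each be sketched directly; the paper fuses the two maps, and the smoothing and window parameters below are illustrative, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_trace(img, sigma=1.0):
    """Trace of the smoothed 2-D structure tensor: G_sigma * (gx^2 + gy^2)."""
    gy, gx = np.gradient(img)
    return gaussian_filter(gx * gx + gy * gy, sigma)

def local_entropy(img, patch=5, bins=16):
    """Shannon entropy of the intensity histogram in each patch window."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            hist, _ = np.histogram(padded[i:i + patch, j:j + patch],
                                   bins=bins)
            p = hist / hist.sum()
            out[i, j] = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return out
```

Both maps respond to local structure rather than absolute intensity, which is why a simple L1 or L2 distance between them becomes usable across modalities.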

  • Research Article
  • 10.1016/j.media.2025.103844
ARDMR: Adaptive recursive inference and representation disentanglement for multimodal large deformation registration.
  • Jan 1, 2026
  • Medical image analysis
  • Yibo Hu + 7 more

  • Book Chapter
  • Cited by 43
  • 10.1007/978-0-387-09749-7_15
Non-rigid registration using free-form deformations
  • Jan 1, 2015
  • D Rueckert + 1 more

Free-form deformations are a powerful geometric modeling technique which can be used to represent complex 3D deformations. In recent years, free-form deformations have gained significant popularity in algorithms for the non-rigid registration of medical images. In this chapter we show how free-form deformations can be used in non-rigid registration to model complex local deformations of 3D organs. In particular, we discuss diffeomorphic and non-diffeomorphic representations of 3D deformation fields using free-form deformations as well as different penalty functions that can be used to constrain the deformation fields during the registration. We also show how free-form deformations can be used in combination with mutual information-based similarity metrics for the registration of mono-modal and multi-modal medical images. Finally, we discuss applications of registration techniques based on free-form deformations for the analysis of images of the breast, heart and brain as well as for segmentation and shape modelling.
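The free-form deformation itself interpolates a sparse control-point grid with cubic B-splines; a 1-D sketch of the displacement evaluation (illustrative, with boundary control points clamped):

```python
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis weights at local coordinate u in [0, 1)."""
    return np.array([(1 - u) ** 3 / 6.0,
                     (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                     (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                     u ** 3 / 6.0])

def ffd_displacement_1d(x, control, spacing):
    """Cubic B-spline FFD displacement at position x from 1-D control points."""
    i = int(np.floor(x / spacing))
    u = x / spacing - i
    w = bspline_basis(u)
    # Control points c_{i-1} .. c_{i+2} influence x; clamp at the grid ends.
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(control) - 1)
    return float(np.dot(w, control[idx]))
```

The basis weights form a partition of unity, so a uniform control-point displacement yields a uniform warp; smoothness penalties such as bending energy then act on the control points rather than on the dense field.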

  • Research Article
  • Cited by 3
  • 10.1088/1361-6560/ad120e
SOLID: a novel similarity metric for mono-modal and multi-modal deformable image registration
  • Dec 26, 2023
  • Physics in Medicine & Biology
  • Paris Tzitzimpasis + 3 more

Medical image registration is an integral part of various clinical applications including image guidance, motion tracking, therapy assessment and diagnosis. We present a robust approach for mono-modal and multi-modal medical image registration. To this end, we propose the novel shape operator based local image distance (SOLID) which estimates the similarity of images by comparing their second-order curvature information. Our similarity metric is rigorously tailored to be suitable for comparing images from different medical imaging modalities or image contrasts. A critical element of our method is the extraction of local features using higher-order shape information, enabling the accurate identification and registration of smaller structures. In order to assess the efficacy of the proposed similarity metric, we have implemented a variational image registration algorithm that relies on the principle of matching the curvature information of the given images. The performance of the proposed algorithm has been evaluated against various alternative state-of-the-art variational registration algorithms. Our experiments involve mono-modal as well as multi-modal and cross-contrast co-registration tasks in a broad variety of anatomical regions. Compared to the evaluated alternative registration methods, the results indicate a very favorable accuracy, precision and robustness of the proposed SOLID method in various highly challenging registration tasks.
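As a loose illustration of comparing second-order curvature information across modalities (this is not the SOLID metric itself), one can compare the signs of a discrete Laplacian, which are invariant to positive linear intensity rescaling:

```python
import numpy as np

def laplacian(img):
    """Second-order (curvature-like) response: the 5-point discrete Laplacian."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4 * img[1:-1, 1:-1])
    return lap

def curvature_distance(a, b):
    """Mean absolute difference of sign-normalized curvature maps."""
    la, lb = np.sign(laplacian(a)), np.sign(laplacian(b))
    return float(np.mean(np.abs(la - lb)))
```

Sign normalization discards the contrast of each modality and keeps only where the image surface bends up or down, which is the kind of information a second-order similarity metric exploits.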

  • Research Article
  • Cited by 23
  • 10.1016/0167-8655(94)90137-6
Registration of 3D multi-modality medical images using surfaces and point landmarks
  • May 1, 1994
  • Pattern Recognition Letters
  • André Collignon + 3 more

  • Conference Article
  • Cited by 8
  • 10.1109/icdsp.2002.1027909
Multi-modal medical image registration: from information theory to optimization objective
  • Jul 1, 2002
  • T Butz + 2 more

A relatively large class of information-theoretical measures, including, e.g., mutual information and normalized entropy, has been used in multi-modal medical image registration. Even though the mathematical foundations of the different measures are very similar, the final expressions turn out to be surprisingly different. One of the main aims of this paper is therefore to elucidate the relationship between different objective functions by introducing a mathematical framework from which several known optimization objectives can be deduced. Furthermore, we extend existing measures to be applicable to image features other than image intensities and introduce efficiency as a very general concept for qualifying such features. The presented framework is very general and not at all restricted to medical images. Still, we discuss the possible impact of our theoretical framework on the particular problem of medical image registration, where the feature space has traditionally been fixed to image intensities. Our theoretical approach can be used for any kind of multi-modal signals, such as those in the broad field of multimedia applications.
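The unification argued for here is easy to see computationally: several familiar objectives are different combinations of the same three entropies estimated from one joint histogram. A sketch (measure names follow common usage, not necessarily the paper's notation):

```python
import numpy as np

def entropies(a, b, bins=32):
    """Marginal and joint Shannon entropies from one joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return h(pxy.sum(axis=1)), h(pxy.sum(axis=0)), h(pxy)

def registration_measures(a, b, bins=32):
    """Several information-theoretic objectives from the same entropies."""
    hx, hy, hxy = entropies(a, b, bins)
    mi = hx + hy - hxy            # mutual information
    nmi = (hx + hy) / hxy         # normalized (Studholme) variant
    ecc = 2.0 * mi / (hx + hy)    # entropy correlation coefficient
    return mi, nmi, ecc
```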

  • Research Article
  • 10.4103/digm.digm_39_17
Nonrigid registration of multimodal medical images based on hybrid model
  • Oct 1, 2017
  • Digital Medicine
  • Nuo Tong + 4 more

Background and Objectives: Multimodal image registration is a crucial step in prostate cancer radiation therapy schemes. However, it can be challenging due to the obvious appearance differences between computed tomography (CT) and magnetic resonance imaging (MRI) and to unavoidable organ motion. Accordingly, a nonrigid registration framework for precisely registering multimodal prostate images is proposed in this paper. Materials and Methods: In this work, multimodal prostate image registration between CT and MRI is achieved using a hybrid model that integrates a multiresolution strategy and the Demons algorithm. Furthermore, to precisely describe the deformation of the prostate, B-spline-based registration is utilized to refine the initial result of the multiresolution Demons algorithm. Results: To evaluate our method, experiments on clinical prostate data sets of nine participants and comparisons with the conventional Demons algorithm were conducted. Experimental results demonstrate that the proposed registration method outperforms the Demons algorithm by a large margin in terms of mutual information and correlation coefficient. Conclusions: These results show that our method outperforms the Demons algorithm and achieves excellent performance on multimodal prostate images even when the appearance of the prostate changes significantly. In addition, the proposed method can help localize the prostate accurately, which is feasible in clinical practice.
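The Demons component of the hybrid model iteratively accumulates update fields driven by the intensity difference; a sketch of the classic Thirion update (the multiresolution wrapper and B-spline refinement are omitted):

```python
import numpy as np

def demons_step(fixed, moving_warped, eps=1e-9):
    """One Thirion-style Demons update from the current warped moving image.

    du = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)
    """
    diff = moving_warped - fixed
    gy, gx = np.gradient(fixed)
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps  # eps avoids division by zero
    return np.stack([diff * gy / denom, diff * gx / denom])
```

In a full registration loop this update is smoothed (e.g., Gaussian-filtered) and accumulated into the displacement field, with the loop run coarse-to-fine across the multiresolution pyramid.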

  • Research Article
  • 10.1007/s12539-025-00707-5
DSMR: Dual-Stream Networks with Refinement Module for Unsupervised Multi-modal Image Registration.
  • Apr 19, 2025
  • Interdisciplinary sciences, computational life sciences
  • Lei Li + 5 more

Multi-modal medical image registration aims to align images from different modalities to establish spatial correspondences. Although deep learning-based methods have shown great potential, the lack of explicit reference relations makes unsupervised multi-modal registration a challenging task. In this paper, we propose a novel unsupervised dual-stream multi-modal registration framework (DSMR), which combines a dual-stream registration network with a refinement module. Unlike existing methods that treat multi-modal registration as a uni-modal problem using a translation network, DSMR leverages the moving, fixed, and translated images to generate two deformation fields. Specifically, we first utilize a translation network to convert the moving image into a translated image similar to the fixed image. Then, we employ the dual-stream registration network to compute two deformation fields: an initial deformation field generated from the fixed and moving images, and a translated deformation field generated from the translated and fixed images. The translated deformation field acts as a pseudo-ground truth to refine the initial deformation field and mitigate issues such as artificial features introduced by translation. Finally, we use the refinement module to enhance the deformation field by integrating registration errors and contextual information. Extensive experimental results show that DSMR achieves exceptional performance, demonstrating strong generalization in learning the spatial relationships between multi-modal images without supervision. The source code of this work is available at https://github.com/raylihaut/DSMR.
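The refinement module is said to integrate registration errors with contextual information; the error map itself comes from warping the moving image with the current field and subtracting, which can be sketched as (names illustrative, not from the DSMR code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img, disp):
    """Warp a 2-D image with a dense displacement field of shape (2, H, W)."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(img, [ys + disp[0], xs + disp[1]],
                           order=1, mode="nearest")

def refinement_inputs(fixed, moving, disp):
    """Warped moving image and residual error map for a refinement stage."""
    warped = warp_image(moving, disp)
    return warped, fixed - warped
```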
