Deformable registration of multimodal brain magnetic resonance images presents significant challenges, primarily due to substantial structural variations between subjects and pronounced differences in appearance across imaging modalities. Here, we propose to symmetrically register images from two modalities based on appearance residuals computed from one modality to the other. Computed by simple subtraction between modalities, the appearance residuals enhance structural details and form a common representation that simplifies multimodal deformable registration. The proposed framework consists of three serially connected modules: (i) an appearance residual module, which learns intensity residual maps between modalities with a cycle-consistent loss; (ii) a deformable registration module, which predicts deformations across subjects based on appearance residuals; and (iii) a deblurring module, which enhances the warped images to match the sharpness of the originals. Evaluated on two public datasets (HCP and LEMON), the proposed method achieves the highest registration accuracy while preserving topology when compared with state-of-the-art methods. Our residual-space-guided registration framework, combined with GAN-based image enhancement, provides an effective solution to the challenges of multimodal deformable registration. By mitigating intensity distribution discrepancies and improving image quality, this approach improves registration accuracy and strengthens its potential for clinical application.
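As a minimal sketch of the core idea, the appearance residual between two co-aligned modality images can be formed by voxel-wise subtraction, as the abstract describes. The per-volume z-score normalization applied before subtracting is an assumption added here to make intensities from different modalities comparable; the paper's learned residual module is more sophisticated than this illustration.

```python
import numpy as np

def appearance_residual(img_a, img_b, eps=1e-8):
    """Simple-subtraction appearance residual between two pre-aligned
    modality volumes of the same shape. Z-score normalization is an
    assumption for illustration, not the paper's learned mapping."""
    a = (img_a - img_a.mean()) / (img_a.std() + eps)
    b = (img_b - img_b.mean()) / (img_b.std() + eps)
    return a - b

# Toy example: two synthetic "modalities" sharing the same structure
# but with different (here, inverted) intensity mappings.
rng = np.random.default_rng(0)
structure = rng.random((8, 8, 8))
t1w = 2.0 * structure + 0.5    # hypothetical T1-like contrast
t2w = -1.5 * structure + 3.0   # hypothetical T2-like (inverted) contrast
residual = appearance_residual(t1w, t2w)
print(residual.shape)  # (8, 8, 8)
```

Because both inputs are normalized before subtraction, the residual map has approximately zero mean and emphasizes where the two contrasts disagree about the underlying structure, giving a modality-agnostic representation for the downstream registration module.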