Abstract

Clinical data warehouses provide access to massive amounts of medical images and thus offer unprecedented opportunities for research. However, they also pose important challenges, a major one being their heterogeneity. In particular, they contain patients with numerous different diseases. The exploration of some neurological diseases with magnetic resonance imaging (MRI) requires injecting a gadolinium-based contrast agent (for instance, to detect tumors or other contrast-enhancing lesions), while other diseases do not require such injection. Image harmonization is a key factor to enable unbiased differential diagnosis in such a context. Additionally, classical neuroimaging software tools that extract features used as inputs of classification algorithms are typically applied only to images without gadolinium. The objective of this work is to homogenize images from a clinical data warehouse and enable the extraction of consistent features from brain MR images, regardless of the initial presence or absence of gadolinium. We propose a deep learning approach based on a 3D U-Net to translate contrast-enhanced into non-contrast-enhanced T1-weighted brain MRI. The approach was trained and validated using 230 image pairs and tested on 26 image pairs of good quality and 51 image pairs of low quality from the data warehouse of the hospitals of the Greater Paris area (Assistance Publique-Hôpitaux de Paris [AP-HP]). We tested two 3D U-Net architectures, with the addition of either residual connections or attention mechanisms. The U-Net with attention mechanisms reached the best image similarity metrics and was further validated on a segmentation task. We showed that features extracted from the synthetic images (gray matter, white matter and cerebrospinal fluid volumes) were closer to those obtained from the non-contrast-enhanced T1-weighted brain MRI (considered as the reference) than those extracted from the original contrast-enhanced images.
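
To make the retained architecture concrete, the sketch below shows a minimal 3D U-Net with additive attention gates on the skip connections, in the spirit of the attention variant the abstract describes. It is an illustrative PyTorch implementation, not the authors' code: the depth, channel widths, patch size, and all identifiers (`AttentionUNet3D`, `AttentionGate`, `conv_block`) are assumptions for demonstration.

```python
# Minimal sketch of a 3D attention U-Net for contrast-enhanced -> non-contrast
# T1-weighted translation. Single-channel volumes are assumed; all names and
# hyperparameters here are illustrative, not the paper's exact configuration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions, each followed by instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class AttentionGate(nn.Module):
    """Additive attention gate: reweights skip features using the decoder signal."""

    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, 1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv3d(inter_ch, 1, 1)

    def forward(self, skip, gate):
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # suppress irrelevant encoder activations


class AttentionUNet3D(nn.Module):
    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        self.enc1 = conv_block(1, ch[0])
        self.enc2 = conv_block(ch[0], ch[1])
        self.bottleneck = conv_block(ch[1], ch[2])
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(ch[2], ch[1], 2, stride=2)
        self.att2 = AttentionGate(ch[1], ch[1], ch[1] // 2)
        self.dec2 = conv_block(ch[1] * 2, ch[1])
        self.up1 = nn.ConvTranspose3d(ch[1], ch[0], 2, stride=2)
        self.att1 = AttentionGate(ch[0], ch[0], ch[0] // 2)
        self.dec1 = conv_block(ch[0] * 2, ch[0])
        self.out = nn.Conv3d(ch[0], 1, 1)  # synthetic non-contrast T1w volume

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.up2(b)
        d2 = self.dec2(torch.cat([self.att2(e2, d2), d2], dim=1))
        d1 = self.up1(d2)
        d1 = self.dec1(torch.cat([self.att1(e1, d1), d1], dim=1))
        return self.out(d1)


if __name__ == "__main__":
    model = AttentionUNet3D()
    t1_gd = torch.randn(1, 1, 64, 64, 64)  # a contrast-enhanced T1w patch
    synthetic_t1 = model(t1_gd)
    print(synthetic_t1.shape)  # torch.Size([1, 1, 64, 64, 64])
```

In this kind of design, the gates use the coarser decoder signal to reweight encoder skip features before concatenation, which is the usual motivation for preferring attention gates over plain (residual) skips in image-to-image translation; the synthetic output can then be fed to standard segmentation pipelines to extract gray matter, white matter, and cerebrospinal fluid volumes as the abstract describes.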
