Unpaired T1-weighted MRI synthesis from T2-weighted data using unsupervised learning.
- Research Article
97
- 10.1053/j.gastro.2004.09.028
- Nov 1, 2004
- Gastroenterology
Hepatocellular carcinoma (HCC), the most common primary hepatic malignancy, usually develops in patients with cirrhosis, growing sequentially from low-grade dysplastic nodules to frank malignant HCC. Its recognition is critical because curative treatment and a favorable prognosis depend on early diagnosis. Survival in patients with HCC relates directly to the number, size, and extent of lesions at diagnosis. Imaging of HCC is complicated because the tumor has a varied imaging appearance and frequently coexists with other cirrhotic nodules. Magnetic resonance imaging (MRI), the best available diagnostic technique, offers good contrast resolution and diagnostic sensitivity ranging from 33% to 77%. The main difficulty is not in diagnosing large tumors, but rather small tumors (<2 cm), because of considerable overlap on imaging between benign (regenerative), borderline (dysplastic), and malignant nodules. Increasing degrees of histological malignancy are associated with increasing arterialization and loss of portal blood supply; therefore, recognition of HCC requires dynamic imaging with gadolinium-enhanced T1-weighted sequences. Typically, HCC is a focal lesion with high signal intensity on T2-weighted images, variable signal intensity on T1-weighted images, intense arterial phase enhancement after gadolinium injection, and isointensity or hypointensity at the portal venous phase. The sensitivity of MRI for detecting small lesions is low, and improvement is still needed. Newer contrast agents, higher field strength (3 Tesla) imaging, and perfusion and diffusion MRI techniques may provide greater sensitivity and specificity for detecting small HCCs in the future.
- Research Article
- 10.1038/s41598-025-03516-4
- May 26, 2025
- Scientific Reports
This study aims to develop a generative adversarial network (GAN)-based image translation model for synthesizing lumbar spine computed tomography (CT) to magnetic resonance (MR) images, focusing on sagittal images, and to evaluate its performance. A cycle-consistent GAN was used to translate lumbar spine CT slices into synthetic T2-weighted MR images. The model was trained on a dataset of 100 cases with co-registered CT and MR images in the sagittal plane from patients with degenerative disease. A qualitative analysis was performed on 30 cases, using a similarity score assigned by neurosurgeons to evaluate anatomical features. Quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), were also computed. The GAN model successfully generated synthetic T2-weighted MR images that visually resembled real MR images. In the qualitative evaluation, the similarity score for anatomical features (e.g., disc signal, paraspinal muscles, facet joints) averaged over 80%. The disc signal showed the highest similarity at 88.11% ± 4.47%. In the quantitative assessment of sagittal images, the results were: MAE = 43.32 ± 10.29, PSNR = 12.80 ± 1.55, and SSIM = 0.28 ± 0.07. This approach could be valuable in clinical settings where MR imaging is unavailable, potentially reducing healthcare costs.
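The MAE, PSNR, and SSIM figures quoted in abstracts like this one can be reproduced with a few lines of NumPy. The sketch below is illustrative, not the authors' evaluation code: it assumes floating-point images on an 8-bit intensity range, and the SSIM shown is the simplified global (single-window) form, whereas library implementations such as scikit-image use a sliding window.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two same-shaped images."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(a, b, data_range=255.0):
    """Global (single-window) SSIM with the standard stabilising constants."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float((2 * mu_a * mu_b + c1) * (2 * cov + c2)
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

Note that PSNR depends on the assumed `data_range`, which is one reason absolute PSNR values are hard to compare across studies.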
- Research Article
25
- 10.1176/appi.neuropsych.13.2.261
- May 1, 2001
- Journal of Neuropsychiatry
Neuropsychiatric Significance of Subcortical Hyperintensity
- Research Article
87
- 10.1148/radiol.14141242
- Dec 15, 2014
- Radiology
To develop and assess the diagnostic performance of a three-dimensional (3D) whole-body T1-weighted magnetic resonance (MR) imaging pulse sequence at 3.0 T for bone and node staging in patients with prostate cancer. This prospective study was approved by the institutional ethics committee; informed consent was obtained from all patients. Thirty patients with prostate cancer at high risk for metastases underwent whole-body 3D T1-weighted imaging in addition to the routine MR imaging protocol for node and/or bone metastasis screening, which included coronal two-dimensional (2D) whole-body T1-weighted MR imaging, sagittal proton-density fat-saturated (PDFS) imaging of the spine, and whole-body diffusion-weighted MR imaging. Two observers read the 2D and 3D images separately in a blinded manner for bone and node screening. Images were read in random order. The consensus review of MR images and the findings at prospective clinical and MR imaging follow-up at 6 months were used as the standard of reference. The interobserver agreement and diagnostic performance of each sequence were assessed on per-patient and per-lesion bases. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were significantly higher with whole-body 3D T1-weighted imaging than with whole-body 2D T1-weighted imaging regardless of the reference region (bone or fat) and lesion location (bone or node) (P < .003 for all).
For node metastasis, diagnostic performance (area under the receiver operating characteristic curve) was higher for whole-body 3D T1-weighted imaging (per-patient analysis; observer 1: P < .001 for 2D T1-weighted imaging vs 3D T1-weighted imaging, P = .006 for 2D T1-weighted imaging + PDFS imaging vs 3D T1-weighted imaging; observer 2: P = .006 for 2D T1-weighted imaging vs 3D T1-weighted imaging, P = .006 for 2D T1-weighted imaging + PDFS imaging vs 3D T1-weighted imaging), as was sensitivity (per-lesion analysis; observer 1: P < .001 for 2D T1-weighted imaging vs 3D T1-weighted imaging, P < .001 for 2D T1-weighted imaging + PDFS imaging vs 3D T1-weighted imaging; observer 2: P < .001 for 2D T1-weighted imaging vs 3D T1-weighted imaging, P < .001 for 2D T1-weighted imaging + PDFS imaging vs 3D T1-weighted imaging). Whole-body MR imaging is feasible with a 3D T1-weighted sequence and provides better SNR and CNR compared with 2D sequences, with a diagnostic performance that is as good or better for the detection of bone metastases and better for the detection of lymph node metastases.
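SNR and CNR in studies like this are measured from regions of interest. A minimal sketch of one common ROI-based definition follows; the paper's exact formulation (in particular its noise estimate) may differ, and the variable names are illustrative.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over the noise standard deviation."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(lesion_roi, reference_roi, noise_roi):
    """Contrast-to-noise ratio between a lesion and a reference region
    (bone or fat in this study), normalised by the noise standard deviation."""
    return float((np.mean(lesion_roi) - np.mean(reference_roi)) / np.std(noise_roi))
```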
- Research Article
35
- 10.1148/radiol.2511071390
- Apr 1, 2009
- Radiology
A 28-year-old right-handed man presented with longstanding occipital headache, progressive ataxia, and blurred vision. He had begun vomiting 8-10 days earlier. He had no other important history. Neurologic examination revealed average intelligence and was remarkable for cerebellar ataxia. Papilledema was noted at fundoscopy. Magnetic resonance (MR) imaging of the brain was performed. No abnormality was noted on apparent diffusion coefficient maps or T2 * -weighted gradient-echo images (not shown).
- Research Article
6
- 10.1002/acm2.14120
- Aug 8, 2023
- Journal of Applied Clinical Medical Physics
Recent studies have raised broad safety and health concerns about the use of gadolinium contrast agents during magnetic resonance imaging (MRI) to enhance identification of active tumors. In this paper, we developed a deep learning-based method for three-dimensional (3D) contrast-enhanced T1-weighted (T1) image synthesis from contrast-free image(s). The MR images of 1251 patients with glioma from the RSNA-ASNR-MICCAI BraTS Challenge 2021 dataset were used in this study. A 3D dense-dilated residual U-Net (DD-Res U-Net) was developed for contrast-enhanced T1 image synthesis from contrast-free image(s). The model was trained on a randomly split training set (n=800) using a customized loss function and validated on a validation set (n=200) to improve its generalizability. The generated images were quantitatively assessed against the ground truth on a test set (n=251) using the mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized mutual information (NMI), and Hausdorff distance (HDD) metrics. We also performed a qualitative visual similarity assessment between the synthetic and ground-truth images. The effectiveness of the proposed model was compared with a 3D U-Net baseline model and existing deep learning-based methods in the literature. Our proposed DD-Res U-Net model achieved promising performance for contrast-enhanced T1 synthesis in both quantitative metrics and perceptual evaluation on the test set (n=251). Analysis of results on the whole brain region showed a PSNR (in dB) of 29.882±5.924, an SSIM of 0.901±0.071, an MAE of 0.018±0.013, an MSE of 0.002±0.002, an HDD of 2.329±9.623, and an NMI of 1.352±0.091 when using only T1 as input; and a PSNR (in dB) of 30.284±4.934, an SSIM of 0.915±0.063, an MAE of 0.017±0.013, an MSE of 0.001±0.002, an HDD of 1.323±3.551, and an NMI of 1.364±0.089 when combining T1 with other MRI sequences.
Compared to the U-Net baseline model, our model showed superior performance. Our model demonstrated excellent capability in generating synthetic contrast-enhanced T1 images of the whole brain region, particularly when using multiple contrast-free images as input. However, without tumor mask information incorporated during network training, its performance in the tumor regions was inferior to that in the whole brain, so further improvement is required before it can replace gadolinium administration in neuro-oncology.
- Research Article
56
- 10.1002/uog.5176
- Nov 12, 2007
- Ultrasound in Obstetrics & Gynecology
Magnetic resonance imaging examination of the fetal brain
- Discussion
29
- 10.1016/j.brs.2023.01.838
- Jan 1, 2023
- Brain Stimulation
Background: Individual skull models of bone density and geometry are important when planning the expected transcranial ultrasound acoustic field and estimating mechanical and thermal safety in low-intensity transcranial ultrasound stimulation (TUS) studies. Computed tomography (CT) images have typically been used to estimate skull acoustic properties. However, obtaining CT images in research participants may be prohibitive due to exposure to ionising radiation and limited access to CT scanners within research groups. Objective: We present a validated open-source tool for researchers to obtain individual skull estimates from T1-weighted MR images, for use in acoustic simulations. We refined a previously trained and validated 3D convolutional neural network (CNN) to generate 100 keV pseudo-CTs. The network was pretrained on 110 individuals and refined and tested on a database of 37 healthy control individuals. We compared simulations based on reference CTs to simulations based on our pseudo-CTs and binary skull masks, a common alternative in the absence of CT. Compared with reference CTs, our CNN produced pseudo-CTs with a mean absolute error of 109.8 ± 13.0 HU across the whole head and 319.3 ± 31.9 HU in the skull. In acoustic simulations, the focal pressure was statistically equivalent for simulations based on reference CT and pseudo-CT (0.48 ± 0.04 MPa and 0.50 ± 0.04 MPa respectively) but not for binary skull masks (0.28 ± 0.05 MPa). We show that our network can produce pseudo-CT comparable to reference CTs in healthy individuals, and that these can be used in acoustic simulations.
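The whole-head versus skull MAE split reported above amounts to a masked error computation over the CT and pseudo-CT volumes. A minimal sketch, with illustrative variable names and synthetic data rather than the authors' pipeline:

```python
import numpy as np

def mae_hu(reference_ct, pseudo_ct, mask=None):
    """Mean absolute error in Hounsfield units, optionally restricted to a
    boolean mask (e.g. skull voxels from a threshold or segmentation)."""
    diff = np.abs(reference_ct.astype(float) - pseudo_ct.astype(float))
    return float(diff[mask].mean()) if mask is not None else float(diff.mean())

# Illustrative volumes: errors are larger inside the "skull" region than
# elsewhere, mirroring the whole-head vs. skull split reported in the study.
ct = np.zeros((4, 4, 4))
pct = ct + 10.0
skull = np.zeros_like(ct, dtype=bool)
skull[1:3, 1:3, 1:3] = True
pct[skull] += 40.0
```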
- Research Article
1
- 10.1002/mp.17723
- Feb 28, 2025
- Medical Physics
The absence of tissue electron density information derived from greyscale Hounsfield units (HUs) in magnetic resonance imaging (MRI) limits its further clinical application in radiotherapy (RT). The use of synthetic computed tomography (sCT) with MRI simplifies RT treatment and improves positioning accuracy by eliminating the need for computed tomography (CT) simulation, with its radiation dose and error-prone image registration. Although CycleGAN and its variants can obtain verisimilar sCT through unsupervised learning, ensuring perfect structural consistency of the synthesized images remains challenging in this approach, which limits the quality and diversity of the images synthesized for a given application. The purpose of this work is to develop a novel unsupervised boundary information-guided adversarial diffusion model, called RadADM, to enhance unpaired MR-to-CT translation for MR-only RT. To explicitly guide the feature learning of the proposed RadADM model, boundary mask information is incorporated as guidance for anatomy compensation during sCT generation from simulated MR images. In addition, a cycle-consistent module that incorporates adversarial projections with a coupled diffusive and non-diffusive architecture is used to facilitate training on unpaired MR-CT datasets, enabling accurate and efficient translation between the source and target domain images. To validate the performance of the proposed model, we conducted a comprehensive quantitative and qualitative comparison of RadADM with other state-of-the-art methods, including CycleGAN, CycleSlimulationGAN, CUT, Fixed Learned Self-Similarity (F-LseSim), and SynDiff.
We evaluated and demonstrated that RadADM outperforms the comparative approaches for high-quality sCT generation on pelvic MRI datasets, captures high-quality local features, and achieves smaller errors (mean absolute error (MAE): 62.95±23.15; root mean square error (RMSE): 135.46±23.89) and higher similarities (peak signal-to-noise ratio (PSNR): 24.70±0.52; structural similarity index (SSIM): 0.8673±0.01). For the soft-tissue region, the PSNR and SSIM were 33.99±1.09 and 0.931±0.01, and for the bone region, the PSNR and SSIM were 35.79±0.87 and 0.993±0.04. Extensive experiments on pelvic datasets demonstrate the effectiveness and robustness of the proposed RadADM in synthesizing anatomically consistent sCT. Our approach offers a valuable and promising direction for clinical MR-only adaptive radiotherapy for pelvic cancer.
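The cycle-consistent module above follows the CycleGAN idea that translating to the other domain and back should recover the input. A toy sketch of the L1 cycle loss, with invertible affine maps standing in for the two generators (which in RadADM are deep diffusion-based networks; the function names here are illustrative):

```python
import numpy as np

# Toy stand-ins for the MR->CT and CT->MR generators. In practice these are
# deep networks trained adversarially on unpaired data.
def g_mr_to_ct(x):
    return 2.0 * x + 100.0       # pretend translation: affine intensity map

def g_ct_to_mr(y):
    return (y - 100.0) / 2.0     # its (here exact) inverse

def cycle_consistency_loss(x_mr, y_ct):
    """L1 cycle loss used by CycleGAN-style unpaired translation:
    x -> G(x) -> F(G(x)) should return to x, and likewise for y."""
    loss_mr = np.mean(np.abs(g_ct_to_mr(g_mr_to_ct(x_mr)) - x_mr))
    loss_ct = np.mean(np.abs(g_mr_to_ct(g_ct_to_mr(y_ct)) - y_ct))
    return float(loss_mr + loss_ct)
```

With exact inverses the loss is zero; training pushes imperfect learned generators toward that condition, which is what discourages (but does not guarantee) structural drift.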
- Research Article
232
- 10.1148/radiol.12120281
- Nov 9, 2012
- Radiology
To retrospectively compare transition zone (TZ) cancer detection and localization accuracy of 3-T T2-weighted magnetic resonance (MR) imaging with that of multiparametric (MP) MR imaging, with radical prostatectomy specimens as the reference standard. The informed consent requirement was waived by the institutional review board. Inclusion criteria were radical prostatectomy specimen TZ cancer larger than 0.5 cm³ and 3-T endorectal presurgery MP MR imaging (T2-weighted imaging, diffusion-weighted [DW] imaging apparent diffusion coefficient [ADC] maps [b < 1000 sec/mm²], and dynamic contrast material-enhanced [DCE] MR imaging). From 197 patients with radical prostatectomy specimens, 28 patients with TZ cancer were included. Thirty-five patients without TZ cancer were randomly selected as a control group. Four radiologists randomly scored T2-weighted and DW ADC images, T2-weighted and DCE MR images, and T2-weighted, DW ADC, and DCE MR images. TZ cancer suspicion was rated on a five-point scale in six TZ regions of interest (ROIs). A score of 4-5 was considered a positive finding. A score of 4 or higher for any ROI containing TZ cancer was considered a positive detection result at the patient level. Generalized estimating equations were used to analyze detection accuracy, and ROI-based receiver operating characteristic (ROC) curve analyses were used for localization accuracy. Gleason grade (GG) 4-5 and GG 2-3 cancers were analyzed separately. Detection accuracy did not differ between T2-weighted and MP MR imaging for all TZ cancers (68% vs 66%, P = .85), GG 4-5 TZ cancers (79% vs 72%-75%, P = .13), and GG 2-3 TZ cancers (66% vs 62%-65%, P = .47). MP MR imaging (area under the ROC curve [AUC], 0.70-0.77) did not improve T2-weighted imaging localization accuracy (AUC = 0.72) (P > .05).
Use of 3-T MP MR imaging, consisting of T2-weighted imaging, DW imaging ADC maps (b values of 50, 500, and 800 sec/mm²), and DCE MR imaging, may not improve TZ cancer detection and localization accuracy compared with T2-weighted imaging alone. Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12120281/-/DC1.
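The per-ROI localization analysis above reduces to an AUC over five-point suspicion scores. As a sketch (not the study's generalized-estimating-equation analysis), the AUC equals the Mann-Whitney probability that a cancer-containing ROI outscores a cancer-free one, with ties counted as half:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = P(score of a positive ROI > score of a negative ROI),
    ties counted 1/2: the Mann-Whitney U formulation, computed directly."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

This brute-force form is fine for the small per-ROI score lists involved here; library implementations use sorting for efficiency.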
- Research Article
25
- 10.1002/mp.12406
- Jul 18, 2017
- Medical Physics
Accurate deformable image registration is necessary for longitudinal studies. The error associated with commercial systems has been evaluated using computed tomography (CT). Several in-house algorithms have been evaluated for use with magnetic resonance imaging (MRI), but there is still relatively little information about MRI deformable image registration. This work presents an evaluation of two deformable image registration systems, one commercial (Velocity) and one in-house (demons-based algorithm), with MRI using two different metrics to quantify the registration error. The registration error was analyzed with synthetic MR images. These images were generated from interpatient and intrapatient variation models trained on 28 patients. Four synthetic post-treatment images were generated for each of four synthetic pretreatment images, resulting in 16 image registrations for both the T1- and T2-weighted images. The synthetic post-treatment images were registered to their corresponding synthetic pretreatment image. The registration error was calculated between the known deformation vector field and the generated deformation vector field from the image registration system. The registration error was also analyzed using a porcine phantom with ten implanted 0.35-mm diameter gold markers. The markers were visible on CT but not MRI. CT, T1-weighted MR, and T2-weighted MR images were taken in four different positions. The markers were contoured on the CT images and rigidly registered to their corresponding MR images. The MR images were deformably registered and the distance between the projected marker location and true marker location was measured as the registration error. The synthetic images were evaluated only on Velocity. Root mean square errors (RMSEs) of 0.76 mm in the left-right (LR) direction, 0.76 mm in the anteroposterior (AP) direction, and 0.69 mm in the superior-inferior (SI) direction were observed for the T1-weighted MR images. 
RMSEs of 1.1 mm in the LR direction, 0.75 mm in the AP direction, and 0.81 mm in the SI direction were observed for the T2-weighted MR images. The porcine phantom MR images, when evaluated with Velocity, had RMSEs of 1.8, 1.5, and 2.7 mm in the LR, AP, and SI directions for the T1-weighted images and 1.3, 1.2, and 1.6 mm in the LR, AP, and SI directions for the T2-weighted images. When the porcine phantom images were evaluated with the in-house demons-based algorithm, RMSEs were 1.2, 1.5, and 2.1 mm in the LR, AP, and SI directions for the T1-weighted images and 0.81, 1.1, and 1.1 mm in the LR, AP, and SI directions for the T2-weighted images. The MRI registration error was low for both Velocity and the in-house demons-based algorithm according to both image evaluation methods, with all RMSEs below 3 mm. This implies that both image registration systems can be used for longitudinal studies using MRI.
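The per-direction RMSEs quoted above compare a known (synthetic) deformation vector field with the one recovered by the registration system. A minimal sketch, assuming fields stored as arrays with a trailing component axis in LR/AP/SI order (an assumption about layout, not the authors' data format):

```python
import numpy as np

def directional_rmse(dvf_true, dvf_est):
    """Per-direction RMSE (LR, AP, SI) between two deformation vector
    fields of shape (..., 3); returned in the fields' units (e.g. mm)."""
    err = (dvf_est.astype(float) - dvf_true.astype(float)).reshape(-1, 3)
    return np.sqrt(np.mean(err ** 2, axis=0))
```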
- Research Article
116
- 10.1148/radiographics.19.suppl_1.g99oc03s161
- Oct 1, 1999
- RadioGraphics
Adenomyosis is a common gynecologic disorder that affects women during their menstrual life. Preoperative magnetic resonance (MR) images obtained in 45 patients with pathologically proved adenomyosis who underwent hysterectomy were retrospectively reviewed. Diffuse adenomyosis was seen in 30 cases (66.7%) and focal adenomyosis in 15 cases (33.3%). On T2-weighted MR images, diffuse adenomyosis usually manifested as diffuse thickening of the endometrial-myometrial junctional zone (7-37 mm; mean, 16 mm) with homogeneous low signal intensity. T2-weighted MR images were superior to contrast material-enhanced T1-weighted images in the evaluation of junctional zone thickening. High-signal-intensity foci were observed on T2-weighted images only in nine cases and on both T1- and T2-weighted images in three cases. Focal adenomyosis manifested on both T2-weighted and contrast-enhanced T1-weighted MR images as a localized, low-signal-intensity round or oval mass with a diameter of 2-7 cm (mean, 3.8 cm). All but one of the focal lesions had ill-defined margins. High-signal-intensity foci were noted in all cases of focal adenomyosis, either on T2-weighted images only (four cases) or on both T1- and T2-weighted images (11 cases). MR imaging is useful in diagnosing adenomyosis, differentiating adenomyosis from uterine myoma, and planning appropriate treatment.
- Research Article
37
- 10.1016/j.ijrobp.2021.11.007
- Nov 12, 2021
- International Journal of Radiation Oncology*Biology*Physics
Virtual Contrast-Enhanced Magnetic Resonance Images Synthesis for Patients With Nasopharyngeal Carcinoma Using Multimodality-Guided Synergistic Neural Network
- Research Article
1
- 10.1002/mp.17668
- Feb 4, 2025
- Medical physics
Although deep learning (DL) methods for reconstructing 3D magnetic resonance (MR) volumes from 2D MR images yield promising results, they require large amounts of training data to perform effectively. To overcome this challenge, fine-tuning, a transfer learning technique particularly effective for small datasets, presents a robust solution for developing personalized DL models. A 2D to 3D conditional generative adversarial network (GAN) model with a patient- and fraction-specific fine-tuning workflow was developed to reconstruct synthetic 3D MR volumes using orthogonal 2D MR images for online dose adaptation. A total of 2473 3D MR volumes were collected from 43 patients. The training and test datasets were separated into 34 and 9 patients, respectively. All patients underwent MR-guided adaptive radiotherapy using the same imaging protocol. The population data contained 2047 3D MR volumes from the training dataset. Population data were used to train the population-based GAN model. For each fraction of the remaining patients, the population model was fine-tuned with the 3D MR volumes acquired before beam irradiation of the fraction, named the fine-tuned model. The performance of the fine-tuned model was tested using the 3D MR volume acquired immediately after the beam delivery of the fraction. The model's input was a pair of axial and sagittal MR images at the isocenter level, and the output was a 3D MR volume. Model performance was evaluated using the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and mean absolute error (MAE). Moreover, the prostate, bladder, and rectum in the predicted MR images were manually segmented. To assess geometric accuracy, the 2D Dice similarity coefficient (DSC) and 2D Hausdorff distance (HD) were calculated. A total of 84 3D MR volumes were included in the performance testing.
The mean±standard deviation (SD) of SSIM, PSNR, RMSE, and MAE were 0.64±0.10, 93.9±1.5 dB, 0.050±0.009, and 0.036±0.007 for the population model and 0.72±0.09, 96.2±1.8 dB, 0.041±0.007, and 0.028±0.006 for the fine-tuned model, respectively. The image quality of the fine-tuned model was significantly better than that of the population model (p<0.05). The mean±SD of DSC and HD of the population model were 0.79±0.08 and 1.70±2.35 mm for the prostate, 0.81±0.10 and 2.75±1.53 mm for the bladder, and 0.72±0.08 and 1.93±0.59 mm for the rectum. By contrast, the mean±SD of DSC and HD of the fine-tuned model were 0.83±0.06 and 1.29±0.77 mm for the prostate, 0.85±0.07 and 2.16±1.09 mm for the bladder, and 0.77±0.08 and 1.57±0.52 mm for the rectum. The geometric accuracy of the fine-tuned model was significantly better than that of the population model (p<0.05). By employing a patient- and fraction-specific fine-tuning approach, the GAN model demonstrated promising accuracy despite limited data availability.
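The 2D DSC and HD used above for geometric evaluation have straightforward definitions on binary masks. A brute-force sketch follows; it is fine for small contours, whereas production code typically uses distance transforms for the Hausdorff distance.

```python
import numpy as np

def dice_2d(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum()))

def hausdorff_2d(a, b):
    """Symmetric Hausdorff distance (in pixels) between two binary masks,
    computed brute-force over all pairs of foreground coordinates."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Note that DSC rewards overlap while HD penalises the single worst boundary discrepancy, which is why papers usually report both.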
- Research Article
466
- 10.1016/j.juro.2011.07.013
- Sep 25, 2011
- Journal of Urology
Multiparametric 3T Prostate Magnetic Resonance Imaging to Detect Cancer: Histopathological Correlation Using Prostatectomy Specimens Processed in Customized Magnetic Resonance Imaging Based Molds