Abstract

Background
Magnetic resonance imaging (MRI) is essential for assessing the extent of resection after craniotomy, allowing for accurate evaluation of the brain. However, the use of MRI within 72 h after surgery is limited by its high cost, long scan time, and patients' restricted mobility. Moreover, MRI is unsuitable for people with contraindications, for whom computerized tomography (CT) is the only available choice. To overcome these limitations, we investigated the use of a deep learning model for synthesizing post-operative MRI images.

Methods
We employed the Cross-domain Correspondence Learning for Exemplar-based Image Translation Network (CoCosNet), an exemplar-based image translation model, to synthesize T1 contrast-enhanced (T1ce) MRI images by combining pre-operative T1ce and post-operative CT images. Data from 233 patients who underwent complete MRI and CT scans were retrospectively collected at Sun Yat-sen University Cancer Center.

Results
Trained for 200 epochs with a batch size of 10, CoCosNet achieved results comparable to other existing models, with a structural similarity index (SSIM) of 0.75, a peak signal-to-noise ratio (PSNR) of 21.68, and a mean absolute error (MAE) of 0.007. Furthermore, assessments of the extent of resection on the true and synthesized T1ce images showed strong agreement.

Conclusions
Our study demonstrates the effectiveness of the CoCosNet model in synthesizing post-operative MRI images from pre-operative MRI and post-operative CT scans. Quantitative analysis indicates that the synthesized images are comparable in quality to real MRI images and can accurately discriminate the extent of resection. This approach holds promise as a reliable alternative to traditional post-operative MRI.
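The three quality metrics reported above can be computed directly from image arrays. The sketch below is a minimal, illustrative implementation with NumPy, not the paper's evaluation code: the SSIM here is a simplified global (single-window) variant rather than the standard sliding-window version, and the toy images are random placeholders for real and synthesized MRI slices scaled to [0, 1].

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(x - y)))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x - y) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM (no Gaussian sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Toy example: a placeholder "real" slice vs. a slightly perturbed
# "synthesized" slice, both normalized to [0, 1].
rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = np.clip(real + rng.normal(0.0, 0.05, real.shape), 0.0, 1.0)
print(f"SSIM={ssim_global(real, fake):.3f}  "
      f"PSNR={psnr(real, fake):.2f} dB  MAE={mae(real, fake):.4f}")
```

In practice a windowed SSIM (e.g. scikit-image's `structural_similarity`) is preferred, since it captures local structural agreement that a single global window averages away.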
