Abstract

We present a novel toolbox that generates computed tomography (CT) images from anatomical T1-weighted magnetic resonance (MR) images for use in planning transcranial ultrasound stimulation (TUS) studies in humans (https://github.com/sitiny/mr-to-pct). Our validated, open-source toolbox is aimed at researchers who want to generate bespoke skull models for acoustic simulations in TUS studies. Importantly, researchers planning TUS studies will most likely already have access to standard T1-weighted MR images for neuronavigation, making our toolbox compatible with many existing TUS protocols.

Low-intensity focused TUS is an emerging technique for non-invasive neuromodulation that offers unprecedented spatial specificity compared with established techniques [1]. The skull accounts for the bulk of transcranial ultrasound attenuation and aberration, so simulations of transcranial ultrasound wave propagation through accurate, individualised skull models of bone density and geometry are an essential component of planning TUS experiments, ensuring that energy is delivered to the target safely, precisely, and efficiently. CT images are considered the gold standard for skull imaging, and CT Hounsfield units (HU) have been used to estimate skull acoustic properties in ex vivo experiments [2] (one such mapping is sketched after this paragraph). However, obtaining CT images in research participants may be prohibitive owing to exposure to ionising radiation and limited access to CT scanners within research groups. Alternative methods for estimating skull properties include short echo time (TE) MR imaging sequences [3] and deep learning convolutional neural networks (CNNs) that translate MR into CT images [4-6]. Uptake of these techniques among the growing community of TUS researchers remains low because the existing methods are not freely available or easily implemented.
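For illustration, below is a minimal sketch of the kind of linear porosity mapping from HU to density and sound speed used in [2] and common in transcranial simulation work. The constants are representative literature values rather than calibrated parameters, and the appropriate mapping also depends on the CT tube energy and reconstruction method [10].

```python
import numpy as np

def hu_to_acoustic_properties(hu, rho_water=1000.0, rho_bone=2200.0,
                              c_water=1500.0, c_bone=3100.0):
    """Map CT Hounsfield units to density (kg/m^3) and sound speed (m/s)
    via a linear porosity model (cf. [2]). Constant values are illustrative."""
    # Porosity: 1 in water (HU = 0), 0 in dense cortical bone (HU >= 1000).
    phi = 1.0 - np.clip(hu, 0.0, 1000.0) / 1000.0
    rho = phi * rho_water + (1.0 - phi) * rho_bone  # volume-weighted density
    c = c_water + (c_bone - c_water) * (1.0 - phi)  # sound speed, linear in (1 - phi)
    return rho, c
```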
As a result, many low-intensity TUS studies either do not report transcranial simulations, apply a fixed percentage derating to free-water values, or base their simulations on binary skull models or an example CT from a single individual. Our new open-source toolbox aims to fill this gap and thereby facilitate reproducibility and best practice in TUS.

We trained and validated a 3D residual U-Net to synthesise a 100 keV pseudo-CT image given a T1-weighted MR image as input. Our network was pre-trained on a large dataset (n = 110) [7] to better capture inter-individual variability, potentially increasing generalisability to other datasets, and was then refined and validated on a separate dataset (n = 37; the CERMEP database [8]) with different CT and MR properties from those of the pre-training dataset. The network was implemented in MONAI (https://monai.io/), an open-source deep learning framework for medical imaging [9]; a minimal inference sketch follows the list below. We summarise our main results here; full details of our methods and results are given in the Supplementary Material.

1. Our method generates pseudo-CTs (Fig. 1a) with a mean absolute error (MAE = 109.8 ± 13.0 HU) comparable to other CNN-based methods: cf. MAE (ultrashort-TE MR) = 104.57 ± 21.33 HU [4]; MAE (T1-weighted MR) = 85.72 ± 9.50 HU [5]; and MAE (T1-weighted MR) = 133 ± 46 HU and MAE (zero-TE MR) = 83 ± 26 HU [6]. Notably, methods using short-TE MR images performed better than those using T1-weighted MR. However, freely available databases of such images paired with reference CTs do not exist, and most research groups will already have a standard T1-weighted MR sequence available.

2. Our pseudo-CTs can be used to produce accurate acoustic simulations (Fig. 1b), with focal pressures statistically equivalent to those of simulations based on reference CTs (0.48 ± 0.04 MPa and 0.50 ± 0.04 MPa, respectively). This represents a large improvement over acoustic simulations based on binary skull masks (focal pressure = 0.28 ± 0.05 MPa; Fig. 1b). Binary skull models may be sufficient for estimating the location of the TUS focus, but should not be relied on for assessing safety indices such as the mechanical index (MI) and the spatial-peak pulse-average intensity (ISPPA). More refined skull models can be derived by further segmenting the skull into trabecular and cortical bone, but this remains a coarse approximation (acoustic properties are assigned to each segment as a whole rather than varying continuously) and relies on a precise separation of trabecular and cortical bone, which is difficult with T1-weighted MR images alone.

3. Our network works better on novel data (i.e. data acquired independently of the training dataset) when the T1-weighted images are acquired with a sequence similar to that of the training dataset: the pseudo-CTs produced from T1-weighted MRs acquired at BRIC were qualitatively better than those produced from T1-weighted MRs acquired at the Donders Institute (Fig. 1c). This is possibly because the BRIC data, like the CERMEP data, were acquired without fat suppression, whereas the Donders data were acquired with fat suppression. We therefore recommend that users apply the network to T1-weighted MR images acquired with a sequence similar to that used during training.
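The sketch below illustrates the inference step: loading a T1-weighted image, normalising it, and passing it through a MONAI residual U-Net with sliding-window inference. The hyperparameters, preprocessing, and file names are illustrative assumptions rather than our exact trained configuration, which is provided with the pre-trained weights in the mr-to-pct repository.

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet
from monai.transforms import LoadImage, ScaleIntensity

# Illustrative architecture: MONAI's UNet becomes a residual U-Net when
# num_res_units > 0. Channel sizes and strides here are assumptions.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=1,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
model.load_state_dict(torch.load("mr_to_pct_weights.pt"))  # e.g. weights from https://osf.io/e7sz9/
model.eval()

# Load a T1-weighted image and rescale intensities to [0, 1] (illustrative preprocessing).
t1 = LoadImage(image_only=True, ensure_channel_first=True)("sub-01_T1w.nii.gz")
t1 = ScaleIntensity()(t1).unsqueeze(0)  # add a batch dimension

# Patch-based inference keeps GPU memory bounded for whole-head volumes.
with torch.no_grad():
    pseudo_ct = sliding_window_inference(
        inputs=t1, roi_size=(128, 128, 128), sw_batch_size=1, predictor=model
    )
```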
Our trained network was limited by the resolution and availability of training data. Higher-resolution data would no doubt enable acoustic simulations with smaller grid spacing and thus better simulation precision. MR sequences that better separate bone from cerebrospinal fluid (e.g. T2-weighted MR) or trabecular from cortical bone (e.g. short-TE MR sequences) may increase the accuracy of the resulting pseudo-CT [6]. The X-ray tube energy and reconstruction technique used to acquire the CT images also affect the conversion of CT HU to skull acoustic velocity [10], and hence the accuracy of acoustic simulations. However, we were restricted to publicly available datasets, and we chose T1-weighted MR as it is the most readily available anatomical research sequence. Should researchers have access to other data, our network weights and training framework can be used in transfer learning applications to further optimise the network on their own datasets, as sketched below.
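As a starting point for such transfer learning, the following is a minimal fine-tuning sketch. It assumes the same illustrative architecture as the inference sketch above and a user-supplied set of co-registered MR/CT patch pairs; the optimiser, learning rate, and file names are assumptions, not our training configuration.

```python
import torch
from monai.networks.nets import UNet

# Same illustrative architecture as at inference time.
model = UNet(spatial_dims=3, in_channels=1, out_channels=1,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2)
model.load_state_dict(torch.load("mr_to_pct_weights.pt"))  # start from pre-trained weights
model.train()

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()  # L1 on HU, consistent with reporting MAE

# Stand-in data: replace with a DataLoader of co-registered (MR, CT) patch pairs.
paired_loader = [(torch.rand(2, 1, 64, 64, 64), torch.rand(2, 1, 64, 64, 64))]

for epoch in range(10):
    for mr_patch, ct_patch in paired_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(mr_patch), ct_patch)
        loss.backward()
        optimiser.step()
```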
In the spirit of Open Science and reproducibility, we provide our open-source MR-to-pseudo-CT toolbox (https://github.com/sitiny/mr-to-pct) and code for k-Wave [11]-based transcranial simulations of the ultrasound acoustic field and temperature, as described in the Supplementary Methods (https://github.com/sitiny/BRIC_TUS_Simulation_Tools). Since its release, our toolbox has already been used successfully by several groups of TUS researchers worldwide. We hope it will be useful in the design of safe and efficient TUS experiments, both to researchers new to deep learning and acoustic simulations and to expert users as a basis for transfer learning on their own datasets. To our knowledge, no other openly available toolbox currently exists for this purpose.

Data and code availability
The code for generating pseudo-CTs from T1-weighted MR images and for running the acoustic simulations described in this work is available on GitHub: https://github.com/sitiny/mr-to-pct and https://github.com/sitiny/BRIC_TUS_Simulation_Tools. The pre-trained network weights and example data for use with our toolbox are available at https://osf.io/e7sz9/.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements
E.F. and S.N.Y. are supported by UKRI/MRC grant MR/T023007/1. This work was supported by the UK Department of Health via the NIHR Comprehensive Biomedical Research Centre Award (COV-LT-0009) to Guy's and St Thomas' NHS Foundation Trust (in partnership with King's College London and King's College Hospital NHS Foundation Trust), and by the Wellcome/Engineering and Physical Sciences Research Council Centre for Medical Engineering at King's College London (WT 203148/Z/16/Z). The CERMEP-IDB-MRXFDG database (© CERMEP – Imagerie du vivant, www.cermep.fr, and Hospices Civils de Lyon; all rights reserved) was provided jointly by CERMEP and the Hospices Civils de Lyon (HCL) under a free academic end-user licence agreement.

Appendix A. Supplementary data
Multimedia component 1 (PDF, 0.57 MB).

References
1. Darmani G, Bergmann TO, Butts Pauly K, Caskey CF, de Lecea L, Fomenko A, et al. Non-invasive transcranial ultrasound stimulation for neuromodulation. Clin Neurophysiol 2022;135:51-73. https://doi.org/10.1016/j.clinph.2021.12.010
2. Aubry J-F, Tanter M, Pernot M, Thomas J-L, Fink M. Experimental demonstration of noninvasive transskull adaptive focusing based on prior computed tomography scans. J Acoust Soc Am 2003;113:84-93. https://doi.org/10.1121/1.1529663
3. Leung SA, Moore D, Gilbo Y, Snell J, Webb TD, Meyer CH, et al. Comparison between MR and CT imaging used to correct for skull-induced phase aberrations during transcranial focused ultrasound. Sci Rep 2022;12:13407. https://doi.org/10.1038/s41598-022-17319-4
4. Su P, Guo S, Roys S, Maier F, Bhat H, Melhem ER, et al. Transcranial MR imaging-guided focused ultrasound interventions using deep learning synthesized CT. Am J Neuroradiol 2020;41:1841-8. https://doi.org/10.3174/ajnr.A6758
5. Koh H, Park TY, Chung YA, Lee J, Kim H. Acoustic simulation for transcranial focused ultrasound using GAN-based synthetic CT. IEEE J Biomed Health Inform 2021. https://doi.org/10.1109/JBHI.2021.3103387
6. Miscouridou M, Pineda-Pardo JA, Stagg CJ, Treeby BE, Stanziola A. Classical and learned MR to pseudo-CT mappings for accurate transcranial ultrasound simulation. IEEE Trans Ultrason Ferroelectr Freq Control 2022;69:2896-905. https://doi.org/10.1109/TUFFC.2022.3198522
7. Yaakub SN, McGinnity CJ, Kerfoot E, Mérida I, Beck K, Dunston E, et al. Brain PET-MR attenuation correction with deep learning: method validation in adult and clinical paediatric data. arXiv preprint 2022.
8. Mérida I, Jung J, Bouvard S, Le Bars D, Lancelot S, Lavenne F, et al. CERMEP-IDB-MRXFDG: a database of 37 normal adult human brain [18F]FDG PET, T1 and FLAIR MRI, and CT images available for research. EJNMMI Res 2021;11. https://doi.org/10.1186/s13550-021-00830-6
9. Cardoso MJ, Li W, Brown R, Ma N, Kerfoot E, Wang Y, et al. MONAI: an open-source framework for deep learning in healthcare. arXiv preprint 2022.
10. Webb TD, Leung SA, Rosenberg J, Ghanouni P, Dahl JJ, Pelc NJ, et al. Measurements of the relationship between CT Hounsfield units and acoustic velocity and how it changes with photon energy and reconstruction method. IEEE Trans Ultrason Ferroelectr Freq Control 2018;65:1111-24. https://doi.org/10.1109/TUFFC.2018.2827899
11. Treeby BE, Cox BT. k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields. J Biomed Opt 2010;15:021314. https://doi.org/10.1117/1.3360308
