Background and objective: Transcranial focused ultrasound (tFUS) is an emerging non-invasive therapeutic technology that offers a new brain stimulation modality. Precise localization of the acoustic focus to the desired brain target throughout the procedure is needed to ensure the safety and effectiveness of the treatment, but acoustic distortion caused by the skull poses a challenge. Although computational methods can estimate the location and shape of the focus, the computation has not reached the speed required for real-time inference, which is demanded in real-world clinical situations. Leveraging the advantages of deep learning, we propose multi-modal networks capable of generating intracranial pressure maps in real time.

Methods: The dataset, consisting of free-field pressure maps, intracranial pressure maps, medical images, and transducer placements, was obtained from 11 human subjects. The free-field and intracranial pressure maps were computed using the k-space method. We developed network models based on convolutional neural networks and the Swin Transformer, featuring a multi-modal encoder and a decoder.

Results: Evaluations on foreseen data achieved a high focal volume conformity of approximately 93% for both computed tomography (CT) and magnetic resonance (MR) data. For unforeseen data, the networks achieved focal volume conformities of 88% for CT and 82% for MR. The inference time of the proposed networks was under 0.02 s, indicating the feasibility of real-time simulation.

Conclusions: The results indicate that our networks can effectively and precisely perform real-time simulation of the intracranial pressure map during tFUS applications. Our work will enhance the safety and accuracy of treatments, representing significant progress for low-intensity focused ultrasound (LIFU) therapies.
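As an illustration of how the reported evaluation metric might be computed, the sketch below assumes that focal volume conformity is a Dice-style overlap between focal volumes, with each focal volume defined as the voxels exceeding a fraction of that map's peak pressure (a -6 dB-style cutoff). Both the function name and the thresholding rule are assumptions for illustration, not the paper's stated definition.

```python
import numpy as np

def focal_volume_conformity(p_ref, p_pred, threshold=0.5):
    """Assumed metric: Dice coefficient between the focal volumes of a
    reference and a predicted pressure map. A focal volume is the set of
    voxels whose pressure is at least `threshold` times that map's peak
    (hypothetical -6 dB-style cutoff; the paper's definition may differ)."""
    ref_focus = p_ref >= threshold * p_ref.max()
    pred_focus = p_pred >= threshold * p_pred.max()
    intersection = np.logical_and(ref_focus, pred_focus).sum()
    return 2.0 * intersection / (ref_focus.sum() + pred_focus.sum())

# Identical maps yield perfect conformity (1.0).
p = np.zeros((32, 32, 32))
p[14:18, 14:18, 14:18] = 1.0
print(focal_volume_conformity(p, p.copy()))  # prints 1.0
```

Under this assumed definition, a conformity of 93% would mean the predicted focal volume overlaps the ground-truth (k-space-simulated) focal volume almost completely.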