Abstract

Objective. Quantitative susceptibility mapping (QSM) is an emerging imaging technique for non-invasive characterization of the composition and microstructure of in vivo tissue, reconstructed from local field measurements by solving an ill-posed inverse problem. Even for deep learning networks, it is not easy to establish an accurate quantitative mapping between two physical quantities with different units, namely the field shift in Hz and the susceptibility value in ppm. Approach. In this paper, we propose SAQSM, a three-dimensional reconstruction network based on spatially adaptive regularization. A spatially adaptive module is specially designed, and a set of these modules at different resolutions is inserted into the network decoder, acting as a cross-modality regularization constraint. The modules exploit information from both the field and magnitude data to adjust the scale and shift of feature maps, so that information loss or deviation introduced in earlier layers can be effectively corrected. The encoder uses a dynamic perceptual initialization, which enables the network to bridge gaps between receptive fields and strengthens its ability to detect features of various sizes. Main results. Experiments on brain data from healthy volunteers, clinical hemorrhage cases, and a simulated phantom with calcification demonstrate that SAQSM achieves more accurate reconstruction with fewer susceptibility artifacts, while remaining stable and generalizing well even in severely lesioned areas. Significance. The proposed framework may provide a valuable paradigm for quantitative mapping and multimodal reconstruction.
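The cross-modality modulation described in the abstract, adjusting the scale and shift of decoder feature maps from field and magnitude guidance, resembles SPADE-style spatially adaptive normalization. The following is a minimal NumPy sketch of such a scale-and-shift module under that assumption; the function and parameter names are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def spatially_adaptive_modulation(features, guidance,
                                  w_gamma, b_gamma, w_beta, b_beta,
                                  eps=1e-5):
    """Normalize decoder features, then apply voxel-wise scale (gamma)
    and shift (beta) predicted from a guidance volume.

    features : (C, D, H, W) decoder feature map at one resolution
    guidance : (G, D, H, W) field + magnitude channels, resampled to match
    w_gamma, w_beta : (C, G) weights of a 1x1x1 projection (stand-in for
                      the learned convolutions assumed here)
    b_gamma, b_beta : (C,) biases
    """
    # Instance-normalize each feature channel over the spatial volume.
    mu = features.mean(axis=(1, 2, 3), keepdims=True)
    sigma = features.std(axis=(1, 2, 3), keepdims=True)
    normed = (features - mu) / (sigma + eps)

    # Predict voxel-wise modulation parameters from the guidance data;
    # tensordot over the channel axis acts as a 1x1x1 convolution.
    gamma = np.tensordot(w_gamma, guidance, axes=([1], [0])) \
        + b_gamma[:, None, None, None]
    beta = np.tensordot(w_beta, guidance, axes=([1], [0])) \
        + b_beta[:, None, None, None]

    # Scale and shift the normalized features voxel by voxel.
    return normed * (1.0 + gamma) + beta
```

With zero weights and biases the module reduces to plain instance normalization; training would move gamma and beta away from zero wherever the field and magnitude data indicate that earlier layers lost or distorted information.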
