Abstract

High-resolution Magnetic Resonance Imaging (MRI) is pivotal in diagnosing and treating brain tumors, assisting physicians by clearly displaying anatomical structures. Convolutional neural network-based super-resolution methods enable the efficient acquisition of high-resolution MRI images. However, convolutional neural networks are limited by their kernel size, which restricts their field of view, potentially leading to feature omission and difficulty in establishing relationships between global and local features. To overcome these shortcomings, we have designed a novel network architecture built around three main modules: (i) a Multiple Convolutional Feature (MCF) extraction module, which diversifies convolution operations for extracting image features to achieve a comprehensive feature representation; (ii) Multiple Groups of Cross-Iterative Feature (MGCIF) modules, which promote inter-channel feature interactions and emphasize the features crucial for subsequent learning; and (iii) a Graph Neural Network module based on a sparse attention mechanism, capable of connecting distant pixel features and identifying the neighboring pixels most influential for reconstructing a target pixel. To evaluate the accuracy of the proposed network, we conducted tests on four datasets—two brain tumor datasets and two healthy head MRI datasets—each subjected to varying degrees of degradation. We compared our approach against nineteen super-resolution (SR) models across these four datasets, and the results demonstrate that our method outperforms current leading-edge methods. On the four datasets, our model improved Peak Signal-to-Noise Ratio (PSNR) scores over the second-best model by 1.16%, 1.08%, 0.19%, and 0.53% for ×2 upscaling, and by 2.26%, 1.67%, 0.13%, and 0.45% for ×4 upscaling, respectively.
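The sparse attention idea behind module (iii) can be illustrated with a minimal sketch: for a target pixel, score all other pixel features, keep only the top-k most influential neighbors, and aggregate their features with softmax weights. This is a hypothetical toy implementation for intuition only—the function name, the dot-product scoring, and the choice of k are assumptions, as the abstract does not specify the module's internals.

```python
import numpy as np

def sparse_attention_aggregate(features, target_idx, k=4):
    """Toy sketch of sparse attention over pixel features.

    features:   (n, d) array, one d-dim feature vector per pixel
    target_idx: index of the pixel being reconstructed
    k:          number of neighbor pixels to keep (the "sparse" part)
    Returns the attention-weighted aggregate of the k strongest neighbors.
    """
    target = features[target_idx]                            # (d,)
    # scaled dot-product scores against every pixel, even distant ones
    scores = features @ target / np.sqrt(features.shape[1])  # (n,)
    scores[target_idx] = -np.inf                             # exclude self
    top_k = np.argpartition(scores, -k)[-k:]                 # k most influential
    # softmax over the retained neighbors only
    weights = np.exp(scores[top_k] - scores[top_k].max())
    weights /= weights.sum()
    return weights @ features[top_k]                         # (d,)

# usage: 16 "pixels" with 8-dim features
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))
aggregated = sparse_attention_aggregate(feats, target_idx=5, k=4)
print(aggregated.shape)  # (8,)
```

Because only k neighbors receive nonzero weight, distant but relevant pixels can contribute while the cost and noise of dense all-pairs attention are avoided.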

