Abstract

Medical image super-resolution plays a pivotal role in enhancing diagnostic accuracy. Transformer-based methods, such as Image Restoration Using Swin Transformer (SwinIR) and Swin transformer for fast Magnetic Resonance Imaging (SwinMR), have shown strong performance in this area but also exhibit limitations. Specifically, LayerNorm channel normalization diminishes high-frequency detail, while the Multilayer Perceptron prioritizes global information over local information. Moreover, low-resolution inputs contain substantial low-frequency information that is treated uniformly by self-attention, which hampers the effectiveness of shifted window-based self-attention. To address these challenges, this study proposes a novel Asymmetric convolution Swin Transformer Layer that leverages global and local information within adjacent windows or pixels. Furthermore, this study presents a joint attention mechanism that integrates channel attention into the Swin Transformer architecture, allowing the simultaneous capture of local and global information and enhancing the network's representational capacity. Based on these components, this study develops a joint Channel Attention and Swin Transformer Residual Network (CSRNet) for medical image super-resolution. To evaluate its performance, this study conducts comprehensive experiments on established datasets of high-resolution medical images. Compared with state-of-the-art medical image super-resolution methods, the proposed CSRNet achieves superior reconstruction quality in both qualitative and quantitative evaluations.
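For readers unfamiliar with channel attention, a minimal squeeze-and-excitation-style sketch is shown below. This is an illustrative assumption, not the paper's exact joint attention module: the weight matrices here are random placeholders standing in for learned parameters, and the `reduction` ratio is a hypothetical choice.

```python
import numpy as np

def channel_attention(x, reduction=4):
    """Rescale each channel of a (C, H, W) feature map by a learned-style gate.

    Illustrative sketch only: the excitation weights are random placeholders,
    whereas in a trained network they would be learned parameters.
    """
    c = x.shape[0]
    # Squeeze: global average pooling collapses each channel to one scalar
    s = x.mean(axis=(1, 2))                        # shape (C,)
    # Excitation: bottleneck MLP (placeholder random weights for illustration)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    a = np.maximum(w1 @ s, 0.0)                    # ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ a)))            # sigmoid gate in (0, 1)
    # Rescale: each channel is weighted by its attention value
    return x * a[:, None, None]
```

The gate assigns each channel a weight in (0, 1), letting the network emphasize channels carrying high-frequency detail — the kind of channel-wise adaptivity the abstract argues plain shifted-window self-attention lacks.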
