Abstract

High-quality, high-resolution medical images help doctors make more accurate diagnoses, but the resolution of medical images is often limited by factors such as the imaging device, operating conditions, and compression rate. To address this issue, we propose a novel densely connected network for super-resolution reconstruction of 3D medical images. To capture multiscale information, we first apply 3D dilated convolutions with different dilation rates to extract shallow features. To better exploit these hierarchical features, we combine local residual learning with densely connected layers, which use 3D asymmetric convolutions to improve performance without increasing inference time. In addition, an improved attention module that considers both channel-wise and spatial information is applied to emphasize the channels and regions containing more high-frequency details. Finally, a feature fusion module consisting of three parallel dilated convolutions fuses the hierarchical features. Experimental results show that the proposed method outperforms state-of-the-art methods such as SRCNN, FSRCNN, SRResNet, DCSRN, ReCNN, and DCED in both objective metrics and visual quality.
