Abstract

Medical imaging such as computed tomography (CT) plays a critical role in the global fight against COVID-19, and computer-aided platforms have emerged to help radiologists diagnose and track disease prognosis. In this paper, we introduce an automated deep-learning segmentation model that builds upon the standard U-Net architecture while leveraging the strengths of both long and short skip connections. We complement the long skip connections with a cascaded dilated convolution module that learns multi-scale context information, compensates for the reduction in receptive field, and reduces the disparity between encoded and decoded features. Short skip connections are introduced by using residual blocks as the basic building blocks of the model; they ease the training process, mitigate the degradation problem, and propagate low-level fine details, enabling the model to capture smaller regions of interest. Furthermore, each residual block is followed by a squeeze-and-excitation unit, which emphasizes informative features and suppresses less important ones, improving the overall feature representation. Through extensive experimentation on a dataset of 1705 COVID-19 axial CT images, we demonstrate that performance gains can be achieved by integrating these deep-learning modules with the basic U-Net model. Experimental results show that our model outperforms the basic U-Net and the ResDUnet model by 8.1% and 1.9% in Dice similarity, respectively. Our model achieves a Dice similarity of 85.3% with only a slight increase in trainable parameters, demonstrating strong potential for use in the clinical domain.
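The squeeze-and-excitation recalibration described above can be sketched concisely. The snippet below is a minimal NumPy illustration of the mechanism, not the authors' implementation: the bottleneck weights `w1` and `w2` and the reduction ratio are hypothetical placeholders (in practice they are learned during training).

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-excitation recalibration of a (C, H, W) feature map.

    Squeeze: global average pooling collapses each channel to one scalar.
    Excite: a two-layer bottleneck (ReLU, then sigmoid) turns those scalars
    into per-channel weights in (0, 1) that rescale the original channels,
    boosting informative features and damping less important ones.
    """
    squeezed = feature_map.mean(axis=(1, 2))           # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeezed, 0.0)            # ReLU, (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid, (C,)
    return feature_map * weights[:, None, None]        # channel-wise rescaling

# Toy example: 4 channels, reduction ratio r = 2 (weights are random stand-ins)
rng = np.random.default_rng(0)
fmap = rng.normal(size=(4, 8, 8))
w1 = rng.normal(size=(2, 4))   # squeeze 4 channels down to 2
w2 = rng.normal(size=(4, 2))   # excite 2 back up to 4
out = squeeze_excite(fmap, w1, w2)
print(out.shape)  # (4, 8, 8)
```

Because the sigmoid gate is strictly positive, the unit only rescales each channel; it never flips the sign of a feature, which is what makes it a gentle recalibration rather than a new transformation.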
