Abstract

Artificial intelligence (AI)-assisted COVID-19 detection in chest computed tomography (CT) images plays an important role in the early diagnosis and appropriate treatment of infected patients. Convolutional neural network-based approaches have shown strong performance in segmenting COVID-19 lesion regions. However, they struggle with the complexity of lesion characteristics, low or high image contrast, and small lesion regions. To address these limitations, we propose a novel architecture called U-TranSvision, which leverages transformers and deep supervision to improve segmentation performance by focusing on the salient features of small COVID-19 lesions. Furthermore, a Pix2Pix generative adversarial network was used for data augmentation to improve the performance of U-TranSvision, and pre-processing steps were applied to remove noise around the human tissue in an image. In addition, we created a relatively large-scale dataset of 11,717 axial chest CT images, along with their corresponding pixel-level annotations. In extensive experimental evaluations, U-TranSvision achieved a Dice similarity coefficient of 85.57% and an intersection over union of 74.82%. Experiments were also conducted on three publicly available datasets, namely COVID-19-CT-Seg, MosMedData, and MedSeg, to demonstrate the robustness of U-TranSvision. The qualitative and quantitative results show that U-TranSvision performs favorably compared to state-of-the-art architectures for COVID-19 lesion segmentation. In addition, U-TranSvision has relatively few learnable parameters, which results in low computational cost.
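For readers unfamiliar with the reported metrics, the following minimal Python sketch shows how the Dice similarity coefficient and intersection over union are conventionally computed for binary segmentation masks. It is not taken from the paper's code; the function name dice_and_iou and the toy masks are illustrative assumptions only.

    import numpy as np

    def dice_and_iou(pred, target, eps=1e-7):
        # pred and target are binary masks (0 = background, 1 = lesion)
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        # Dice = 2|A ∩ B| / (|A| + |B|); IoU = |A ∩ B| / |A ∪ B|
        dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
        iou = (intersection + eps) / (union + eps)
        return dice, iou

    # Toy example (hypothetical masks, not the paper's data)
    pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
    target = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]])
    print(dice_and_iou(pred, target))  # ~ (0.667, 0.5)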
