Abstract

COVID-19 has been one of the major foci of the research community since its first outbreak in China in 2019. Radiological patterns such as ground-glass opacity (GGO) and consolidation are often found in CT scan images of moderate to severe COVID-19 patients. Therefore, a deep learning model can be trained to distinguish COVID-19 patients using their CT scan images. Convolutional Neural Networks (CNNs) have been a popular choice for this type of classification task. Another potential method is the use of a vision transformer with convolution, resulting in the Convolutional Vision Transformer (ConViT), which can possibly produce on-par performance using fewer computational resources. In this study, ConViT is applied to diagnose COVID-19 cases from lung CT scan images. In particular, we investigated the relationship between the input image resolution and the number of attention heads used in ConViT, and their effects on the model's performance. Specifically, we trained the model at 512x512, 224x224 and 128x128 pixel resolutions using 4 (tiny), 9 (small) and 16 (base) attention heads. An open-access dataset consisting of 2282 COVID-19 CT images and 9776 normal CT images from Iran is used in this study. Using a 128x128 image resolution and training with 16 attention heads, the ConViT model achieved an accuracy of 98.01%, sensitivity of 90.83%, specificity of 99.69%, positive predictive value (PPV) of 95.58%, negative predictive value (NPV) of 97.89% and F1-score of 94.55%. The model also achieved improved performance over other recent studies that used the same dataset. In conclusion, this study has shown that the ConViT model can play a meaningful role in complementing the RT-PCR test for COVID-19 close contacts and patients.
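As an aside, the evaluation metrics reported above all derive from a binary confusion matrix. A minimal sketch of how they are computed is shown below; the confusion-matrix counts used in the example are hypothetical and are not the study's actual results.

```python
# Compute the evaluation metrics named in the abstract from a binary
# confusion matrix (positive class = COVID-19, negative class = Normal).

def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return accuracy, sensitivity, specificity, PPV, NPV and F1-score."""
    sensitivity = tp / (tp + fn)                # recall on the COVID-19 class
    specificity = tn / (tn + fp)                # recall on the Normal class
    ppv = tp / (tp + fp)                        # positive predictive value (precision)
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv, "f1": f1}

# Hypothetical counts for a 200-scan test set (not from the paper)
m = metrics(tp=90, fp=5, tn=95, fn=10)
print({k: round(v, 4) for k, v in m.items()})
```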
