Abstract

The COVID-19 pandemic across the world and the emergence of new variants have intensified the need to identify COVID-19 cases quickly and efficiently. In this paper, a novel dual-mode multi-modal approach is presented to detect COVID-19 patients by combining chest X-ray/CT scan images with the clinical notes accompanying each scan. Data augmentation techniques are used to expand the dataset. Five different types of image and text models are employed, including transfer learning. All of these models are compiled with the binary cross-entropy loss function and the Adam optimizer. The multi-modal approach is also evaluated with existing pre-trained models: VGG16, ResNet50, InceptionResNetV2 and MobileNetV2. The final multi-modal model achieves an accuracy of 97.8% on the test data. The study provides a different approach to identifying COVID-19 cases using just the scan images and their corresponding notes.
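The dual-branch fusion described above can be sketched in Keras. This is a minimal illustrative sketch, not the authors' exact architecture: the layer sizes, the LSTM text encoder, and the concatenation-based fusion are assumptions; only the VGG16 backbone, the binary cross-entropy loss, and the Adam optimizer come from the abstract.

```python
# Hypothetical sketch of a dual-branch (image + text) COVID-19 classifier.
# Layer widths and the text encoder are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multimodal(img_shape=(224, 224, 3), vocab_size=5000, seq_len=100):
    # Image branch: a pre-trained backbone (VGG16 shown; weights=None here
    # to avoid a download, the paper would use ImageNet weights), frozen
    # for transfer learning.
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights=None, input_shape=img_shape)
    backbone.trainable = False
    img_in = layers.Input(shape=img_shape, name="xray_or_ct")
    x = backbone(img_in)
    x = layers.GlobalAveragePooling2D()(x)

    # Text branch: clinical notes as token IDs -> embedding -> LSTM.
    txt_in = layers.Input(shape=(seq_len,), name="clinical_notes")
    t = layers.Embedding(vocab_size, 64)(txt_in)
    t = layers.LSTM(32)(t)

    # Late fusion by concatenation, then a binary classification head.
    h = layers.concatenate([x, t])
    h = layers.Dense(64, activation="relu")(h)
    out = layers.Dense(1, activation="sigmoid")(h)

    model = Model([img_in, txt_in], out)
    # Binary cross-entropy loss and the Adam optimizer, as in the abstract.
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the backbone for ResNet50, InceptionResNetV2, or MobileNetV2 only changes the `tf.keras.applications` call; the fusion head stays the same.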
