Abstract

The use of cough sounds as a diagnostic tool for various respiratory illnesses, including COVID-19, has gained significant attention in recent years. Artificial intelligence (AI) has been employed in cough sound analysis to provide a quick and convenient pre-screening tool for COVID-19 detection. However, few works have employed segmentation to standardize cough sounds, and most models are trained on datasets from a single source. In this paper, a deep learning framework is proposed that uses the Mini VGGNet model and segmentation methods for COVID-19 detection using cough sounds. In addition, data augmentation was studied to investigate its effects on model performance when applied to individual cough sounds. The framework includes both single- and cross-dataset model training and testing, using data from the University of Cambridge, the Coswara project, and the National Institutes of Health (NIH) Malaysia. Results demonstrate that the use of segmented cough sounds significantly improves the performance of trained models. In addition, findings suggest that applying data augmentation to individual cough sounds does not improve model performance. The proposed framework achieved an optimum test accuracy of 0.921, with 0.973 AUC, 0.910 precision, and 0.910 recall, for a model trained on a combination of the three datasets using non-augmented data. The findings of this study highlight the importance of segmentation and the use of diverse datasets for AI-based COVID-19 detection through cough sounds. Furthermore, the proposed framework provides a foundation for extending the use of deep learning to detecting other pulmonary diseases and studying the signal properties of cough sounds from various respiratory illnesses.
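The abstract emphasizes segmenting recordings into individual cough sounds before training. The exact segmentation method is not described here, so the following is only an illustrative sketch of one common approach: splitting a recording into cough events using a short-time energy threshold. The function name, parameters, and threshold values are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def segment_coughs(signal, sr, frame_ms=25, hop_ms=10,
                   rel_threshold=0.1, min_gap_ms=200):
    """Split a 1-D audio signal into individual cough events.

    Hypothetical illustration: frames whose short-time energy exceeds
    a fraction of the maximum frame energy are marked active, and
    active runs separated by more than min_gap_ms are treated as
    separate cough events. Returns (start_sample, end_sample) pairs.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    # short-time energy of each overlapping frame
    energies = np.array([
        np.sum(signal[i:i + frame] ** 2)
        for i in range(0, max(len(signal) - frame, 1), hop)
    ])
    active = energies > rel_threshold * energies.max()
    min_gap = int(min_gap_ms / hop_ms)  # gap length in frames
    segments, start, last = [], None, None
    for idx, on in enumerate(active):
        if on:
            if start is None:
                start = idx
            last = idx
        elif start is not None and idx - last > min_gap:
            # silence long enough: close the current cough event
            segments.append((start * hop, last * hop + frame))
            start = None
    if start is not None:  # flush a trailing event
        segments.append((start * hop, last * hop + frame))
    return segments
```

Each returned segment can then be converted to a fixed-size spectrogram and fed to the classifier; standardizing inputs this way is what the abstract credits for the performance improvement.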
