Abstract

The coronavirus disease (COVID-19) first appeared at the end of December 2019 and is still spreading in most countries. Diagnosing COVID-19 with reverse transcription polymerase chain reaction (RT-PCR) requires visiting a dedicated testing center, which involves significant cost and human resources. Hence, there is a need for a remote monitoring tool that can perform preliminary screening for COVID-19. In this paper, we propose that a detailed audio texture analysis of COVID-19 sounds may help in performing this initial screening. The texture analysis is carried out on three signal modalities of COVID-19: cough, breath, and speech signals. In this work, we used 1141 cough samples, 392 breath samples, and 893 speech samples. To analyze the textural behavior of COVID-19 sounds, local binary patterns (LBP) and Haralick's features were extracted from the spectrograms of the signals. For the first time, the textural analysis of cough and breath sounds was performed over the following five classes: COVID-19 positive with cough, COVID-19 positive without cough, healthy person with cough, healthy person without cough, and asthmatic cough. For speech sounds there were only two classes: COVID-19 positive and COVID-19 negative. In the experiments, the five-class classification accuracy was 71.7% for cough samples and 72.2% for breath samples, and the two-class accuracy for speech samples was 79.7%. The highest accuracy, 98.9%, was obtained for binary classification between COVID-19 cough and non-COVID-19 cough.
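As an illustration of the texture pipeline summarized above, the following minimal Python sketch computes LBP and Haralick features from an audio spectrogram. This is not the authors' implementation: the input file name, the LBP parameters (P=8, R=1, uniform patterns), and the library choices (librosa, scikit-image, mahotas) are assumptions made for illustration only.

# Minimal sketch (assumed libraries and parameters, not the paper's exact setup):
# spectrogram of an audio sample -> LBP histogram + Haralick texture features.
import numpy as np
import librosa
import mahotas
from skimage.feature import local_binary_pattern

# Load an audio sample (e.g. a cough recording; file name is a placeholder)
# and compute a log-magnitude spectrogram.
y, sr = librosa.load("cough_sample.wav", sr=None)
S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

# Scale the spectrogram to an 8-bit grayscale "image" for texture analysis.
img = np.uint8(255 * (S - S.min()) / (S.max() - S.min() + 1e-12))

# Local binary patterns: histogram of uniform LBP codes (assumed P=8, R=1),
# which yields codes 0..9 and hence a 10-bin normalized histogram.
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

# Haralick features: 13 statistics of the gray-level co-occurrence matrix,
# averaged over the four standard directions.
haralick = mahotas.features.haralick(img).mean(axis=0)

# Concatenate into one texture feature vector to feed a classifier.
features = np.concatenate([lbp_hist, haralick])
print(features.shape)  # (23,)

A vector of this kind would then be computed per recording and passed to a standard classifier for the five-class (cough/breath) or two-class (speech) decision described in the abstract.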
