Abstract

The problem of respiratory sound classification has received considerable attention from clinical scientists and the medical research community over the past year for the diagnosis of COVID-19 disease. Artificial Intelligence (AI) based models have been deployed in the real world to identify COVID-19 from human-generated sounds such as voice/speech, dry cough, and breath. Convolutional Neural Networks (CNNs) are widely used to solve real-world problems with AI-based machines. We propose and implement a multi-channeled Deep Convolutional Neural Network (DCNN) for the automatic diagnosis of COVID-19 from human respiratory sounds such as voice, dry cough, and breath, and it achieves better accuracy and performance than previous models. We apply multi-feature channels, namely a data De-noising Auto Encoder (DAE), Gamma-tone Frequency Cepstral Coefficients (GFCC), and Improved Multi-frequency Cepstral Coefficients (IMFCC), to augmented data in order to extract deep features as input to the CNN. The proposed approach improves system performance for the diagnosis of COVID-19 and provides better results on the COVID-19 respiratory sound dataset.
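
The sketch below illustrates the kind of multi-channel pipeline the abstract describes, not the authors' exact implementation: per-clip cepstral features are stacked as input channels to a small CNN. It assumes librosa for feature extraction and PyTorch for the model; GFCC and IMFCC are approximated here with standard MFCC and mel-spectrogram features, the DAE channel is omitted, and all names and shapes are illustrative.

```python
# Minimal sketch of a multi-channel respiratory-sound classifier.
# Assumptions: librosa + PyTorch; MFCC / mel spectrogram stand in for IMFCC / GFCC.
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_channels(path, sr=16000, n_coeff=40, n_frames=128):
    """Load an audio clip and build a 2-channel feature tensor (channels, coeffs, frames)."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_coeff)            # stand-in for IMFCC
    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_coeff))    # stand-in for GFCC
    feats = np.stack([mfcc[:, :n_frames], mel[:, :n_frames]])          # (2, n_coeff, n_frames)
    return torch.tensor(feats, dtype=torch.float32)

class MultiChannelCNN(nn.Module):
    """Small CNN over stacked feature channels; binary COVID-19 / non-COVID output."""
    def __init__(self, n_channels=2, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, n_channels, n_coeff, n_frames)
        return self.fc(self.conv(x).flatten(1))

# Usage (hypothetical file): logits = MultiChannelCNN()(extract_channels("cough.wav").unsqueeze(0))
```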
