Abstract

The problem of respiratory sound classification has received considerable attention from clinical scientists and the medical research community over the last year for diagnosing COVID-19 disease. To date, various Artificial Intelligence (AI) models have entered the real world to detect COVID-19 disease from human-generated sounds such as voice/speech, cough, and breath. The Convolutional Neural Network (CNN) model has been implemented to solve many real-world problems on machines based on AI. In this context, a one-dimensional (1D) CNN is suggested and implemented to diagnose the respiratory disease of COVID-19 from human respiratory sounds such as voice, cough, and breath. An augmentation-based mechanism is applied to improve the preprocessing performance of the COVID-19 sounds dataset and to automate COVID-19 disease diagnosis using the 1D convolutional network. Furthermore, a Data De-noising Auto Encoder (DDAE) technique is used to generate deep sound features as the input to the 1D CNN, instead of adopting the standard Mel-frequency cepstral coefficient (MFCC) input, and it achieves better accuracy and performance than previous models.

Results

As a result, around 4% higher accuracy is achieved than with traditional MFCC. We classified COVID-19 sounds, asthma sounds, and regular healthy sounds using a 1D CNN classifier and obtained around 90% accuracy in detecting COVID-19 disease from respiratory sounds.

Conclusion

A Data De-noising Auto Encoder (DDAE) was adopted to extract the in-depth features of the acoustic sound signals instead of traditional MFCC. The proposed model efficiently classifies COVID-19 sounds for detecting COVID-19 positive symptoms.
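To make the described pipeline concrete, the sketch below pairs a data de-noising autoencoder (whose encoded output serves as the deep feature input) with a 1D CNN classifier over the three classes mentioned above (COVID-19, asthma, healthy). All layer sizes, frame lengths, and training settings are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described two-stage pipeline, under assumed hyperparameters:
# 1) a denoising autoencoder (DDAE) learns to reconstruct clean audio frames,
# 2) its encoder output ("deep sound features") feeds a 1D CNN classifier.
import numpy as np
from tensorflow.keras import layers, models

FRAME_LEN = 1024          # assumed length of each raw audio frame
LATENT_DIM = 128          # assumed size of the deep (encoded) feature vector
N_FRAMES = 32             # assumed number of frames per recording
NUM_CLASSES = 3           # COVID-19, asthma, healthy

# --- Denoising autoencoder: reconstructs clean frames from noisy inputs ---
inp = layers.Input(shape=(FRAME_LEN,))
enc = layers.Dense(512, activation="relu")(inp)
enc = layers.Dense(LATENT_DIM, activation="relu", name="deep_features")(enc)
dec = layers.Dense(512, activation="relu")(enc)
dec = layers.Dense(FRAME_LEN, activation="linear")(dec)
ddae = models.Model(inp, dec)
ddae.compile(optimizer="adam", loss="mse")

# --- Encoder reused as a feature extractor for the classifier ---
encoder = models.Model(inp, enc)

# --- 1D CNN classifier over sequences of deep feature vectors ---
clf = models.Sequential([
    layers.Input(shape=(N_FRAMES, LATENT_DIM)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])

# Training proceeds in two stages (random placeholder data stands in for audio):
noisy = np.random.randn(16, FRAME_LEN).astype("float32")
clean = np.random.randn(16, FRAME_LEN).astype("float32")
ddae.fit(noisy, clean, epochs=1, verbose=0)                 # stage 1: train the DDAE
feats = encoder.predict(np.random.randn(4 * N_FRAMES, FRAME_LEN), verbose=0)
X = feats.reshape(4, N_FRAMES, LATENT_DIM)
y = np.random.randint(0, NUM_CLASSES, size=4)
clf.fit(X, y, epochs=1, verbose=0)                          # stage 2: train the 1D CNN
```

In this arrangement the DDAE is trained first on noisy/clean frame pairs, and its encoder then supplies the feature sequences on which the 1D CNN is trained; the actual layer counts and training schedule would follow the paper's full text.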

Highlights

  • The COVID-19 outbreak was declared a pandemic by the World Health Organization (WHO) on March 11th, 2020, and as of 23rd January 2021 it had claimed over 2,098,879 lives worldwide [1]

  • We briefly describe the COVID-19 data, COVID-19 sound analysis, and the proposed 1D Convolutional Neural Network (CNN) approach in Section 1; Section 2 reviews the literature and background study for this research; Section 3 describes the proposed 1D convolutional method, dataset collection, and augmentation process; Section 4 outlines the result analysis and discussion concerning the proposed model, and we conclude the research with the best achieved accuracy

  • In [17], Bader M. et al. proposed a significant model combining Mel-Frequency Cepstral Coefficients (MFCCs) with Speech Signal Processing (SSP) to extract features from non-COVID-19 and COVID-19 samples and compute Pearson correlation coefficients between them. Their findings indicate a high MFCC similarity between various breathing/respiratory sounds and COVID-19 cough sounds, while MFCCs of speech are more robust at separating non-COVID-19 samples from COVID-19 samples
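As a rough illustration of the MFCC-plus-correlation analysis attributed to Bader M. et al. [17], the snippet below extracts time-averaged MFCC vectors from two recordings and computes their Pearson correlation. The file names, sampling rate, and number of coefficients are placeholders rather than the original study's settings.

```python
# Hedged illustration: compare two recordings by the Pearson correlation of
# their time-averaged MFCC vectors (values near 1 suggest similar spectra).
import librosa
import numpy as np
from scipy.stats import pearsonr

def mean_mfcc(path, sr=22050, n_mfcc=13):
    """Load an audio file and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return mfcc.mean(axis=1)

# Example usage with hypothetical file names (not files from the cited study):
# covid_vec = mean_mfcc("covid_cough.wav")
# healthy_vec = mean_mfcc("healthy_breath.wav")
# r, p = pearsonr(covid_vec, healthy_vec)
# print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```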


Summary

Introduction

The COVID-19 outbreak was declared a pandemic by the World Health Organization (WHO) on March 11th, 2020, and as of 23rd January 2021 it had claimed over 2,098,879 lives worldwide [1]. The problem of respiratory sound classification [2,3] and the diagnosis of COVID-19 disease have received considerable attention from the clinical scientist and researcher community over the last year. In this situation, many AI-based models [4,5,6] have entered the real world to solve such problems, and researchers have applied different machine learning, signal processing, and deep learning techniques to these real-world problems [7,8]. Instead of an SVM, we propose a 1D CNN and augmentation approach with a data de-noising method to classify and diagnose COVID-19 disease.
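A minimal sketch of the kind of waveform-level augmentation referred to here is shown below, assuming common audio transforms (additive noise, time stretching, pitch shifting); the specific transforms and parameters used in the paper are not given in this summary.

```python
# Illustrative waveform augmentation for respiratory sound clips; the
# transform choices and magnitudes are assumptions, not the paper's settings.
import numpy as np
import librosa

def augment(y, sr):
    """Return several augmented variants of a raw respiratory sound clip."""
    variants = []
    # 1) Additive Gaussian noise at a small (assumed) amplitude.
    variants.append(y + 0.005 * np.random.randn(len(y)))
    # 2) Time stretching: speed up / slow down without changing pitch.
    variants.append(librosa.effects.time_stretch(y, rate=1.1))
    variants.append(librosa.effects.time_stretch(y, rate=0.9))
    # 3) Pitch shifting by +/- 2 semitones.
    variants.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=2))
    variants.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=-2))
    return variants

# Example usage with a hypothetical recording:
# y, sr = librosa.load("cough_sample.wav", sr=22050)
# augmented_clips = augment(y, sr)
```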

