Abstract

Respiration is a vital process for all living organisms. In diagnosing and detecting many health problems, a patient's respiration rate and the condition of breath inhalation and exhalation are the primary considerations of doctors, clinicians, and healthcare staff. In this study, an interactive application is designed to collect audio signals, present visual information about them, create a novel 21,253 × 20 audio-signal dataset for the detection of breath inhalation and exhalation performed through the nose and mouth, and classify the audio signals as breath inhalation or breath exhalation using machine learning (ML) models. Audio signals are recorded from volunteers' hearts (method 1) and tracheas (method 2). ML models, namely decision tree (DT), Naïve Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), gradient boosted trees (GBT), random forest (RF), and an artificial neural network (ANN), are applied to the created dataset to classify the audio signals received from the nose and mouth into the two conditions. The highest sensitivity, specificity, accuracy, and Matthews correlation coefficient (MCC) for the classification of breath inhalation and breath exhalation are 91.82%, 87.20%, 89.51%, and 0.79, respectively, obtained by method 2 with majority voting of KNN, RF, and SVM. This paper focuses on the use of audio signals and ML models as a novel approach to classifying respiratory conditions, namely breath inhalation and breath exhalation, via an interactive application. The results show that the audio signals recorded by method 2 are more effective and better suited for extracting information than those recorded by method 1.
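The abstract does not specify the feature-extraction pipeline or the model hyperparameters, so the following is only a minimal sketch of the majority-voting ensemble it describes (KNN, RF, and SVM combined by hard voting, scored with sensitivity, specificity, accuracy, and MCC). It assumes scikit-learn; the data arrays are random placeholders matching the dataset's stated 21,253 × 20 shape, and every model setting (n_neighbors, n_estimators, kernel) is an illustrative assumption, not the authors' configuration.

```python
# Sketch of a hard (majority) voting ensemble of KNN, RF, and SVM,
# evaluated with sensitivity, specificity, accuracy, and MCC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, matthews_corrcoef

# Placeholder data with the dataset's shape: 21,253 samples x 20 features,
# binary labels (0 = inhalation, 1 = exhalation). Replace with the real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(21253, 20))
y = rng.integers(0, 2, size=21253)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Hard voting: each classifier casts one vote, the majority label wins.
ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),       # assumed k
        ("rf", RandomForestClassifier(n_estimators=100,     # assumed size
                                      random_state=0)),
        ("svm", SVC(kernel="rbf")),                         # assumed kernel
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
y_pred = ensemble.predict(X_test)

# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.4f}")
print(f"specificity: {tn / (tn + fp):.4f}")
print(f"accuracy:    {accuracy_score(y_test, y_pred):.4f}")
print(f"MCC:         {matthews_corrcoef(y_test, y_pred):.4f}")
```

On placeholder random data these metrics will hover near chance; the paper's reported figures (91.82% sensitivity, 87.20% specificity, 89.51% accuracy, 0.79 MCC) come from the tracheal recordings of method 2.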
