Abstract Cough is the most common symptom of many respiratory diseases. Currently, no standardized, commercially available and clinically accepted method exists for objective cough monitoring. Our aim is to develop an algorithm capable of objective, ambulatory and automated monitoring of cough frequency based on the analysis of sound events. Because speech is the most common sound in 24-hour recordings, the first step in developing this algorithm was to distinguish cough sounds from speech. For this purpose we obtained recordings from 20 healthy volunteers. All subjects read a text from a book continuously and produced voluntary coughs at indicated instants. The recorded sounds were analyzed using linear and non-linear methods in the time and frequency domains, and a classification tree was used to distinguish cough sounds from speech. The median sensitivity was 100% and the median specificity was 95%. In the next step we enlarged the set of analyzed sound events. In addition to cough sounds and speech, the analyzed sounds included induced sneezing, voluntary throat and nasopharynx clearing, voluntary forced ventilation, laughing, voluntary snoring, eructation, nose blowing and loud swallowing. The sound events were obtained from 32 healthy volunteers and were analyzed and classified with the same algorithm as in the previous study. The median sensitivity was 86% and the median specificity was 91%. In the final step, we tested the effectiveness of the developed algorithm in distinguishing cough from non-cough sounds produced during normal daily activities in patients with respiratory diseases. The study group consisted of 9 patients with respiratory diseases, and the recording time was 5 hours. The number of coughs counted by our algorithm was compared with manual cough counts performed by two skilled co-workers. We found that the automated and manual cough counts differed substantially. For that reason we applied other methods for distinguishing cough sounds from non-cough sounds, comparing the classification tree with artificial neural networks. The median sensitivity increased from 28% (classification tree) to 82% (artificial neural network), while the median specificity did not change significantly. We have extended the set of characteristic parameters with Mel-frequency cepstral coefficients, the weighted Euclidean distance, and the first and second derivatives in time. Modification of the classification algorithm is also of interest for future work.
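The abstract does not give implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: Mel-frequency cepstral coefficients with their first and second time derivatives as features, and a comparison of a classification tree against a small artificial neural network in terms of sensitivity and specificity. The use of librosa and scikit-learn, the synthetic stand-in data, the mean-pooling of frames, and the network size are all assumptions for illustration, not the authors' actual method.

```python
# Hypothetical sketch: MFCCs plus first/second time derivatives per sound event,
# classified with a decision tree and a small neural network, and scored by
# sensitivity/specificity. Data here is synthetic noise; in the study the inputs
# were labelled cough / non-cough sound events from recordings.
import numpy as np
import librosa
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

SR = 16000  # assumed sampling rate

def event_features(signal, sr=SR):
    """Summarize one sound event as MFCCs with first and second derivatives in time."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    d1 = librosa.feature.delta(mfcc)           # first derivative in time
    d2 = librosa.feature.delta(mfcc, order=2)  # second derivative in time
    stacked = np.vstack([mfcc, d1, d2])
    # Mean-pool across frames so every event yields one fixed-length feature vector.
    return stacked.mean(axis=1)

# Synthetic stand-in data: 200 half-second "events", half labelled cough (1).
rng = np.random.default_rng(0)
events = [rng.standard_normal(SR // 2).astype(np.float32) for _ in range(200)]
labels = np.array([1] * 100 + [0] * 100)  # 1 = cough, 0 = non-cough

X = np.array([event_features(e) for e in events])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

classifiers = [
    ("classification tree", DecisionTreeClassifier(random_state=0)),
    ("neural network", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                     random_state=0)),
]
for name, clf in classifiers:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    sens = recall_score(y_te, pred, pos_label=1)  # coughs correctly detected
    spec = recall_score(y_te, pred, pos_label=0)  # non-coughs correctly rejected
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

On random noise both models perform near chance; the point of the sketch is only the structure of the comparison, with sensitivity and specificity computed as the per-class recalls, matching the metrics reported in the abstract.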