Abstract

Background and objective: Cough is a common symptom of respiratory diseases, and the sound of a cough helps in assessing the condition of the respiratory system. Objective, artificial-intelligence-driven cough sound evaluation has the potential to aid clinicians in diagnosing respiratory diseases. Automatic cough sound detection is an important step in performing objective cough sound analysis. Current methods for automatic cough sound detection involve various signal transformation and feature engineering steps that are not only complex but can also lead to a loss of signal characteristics and thereby suboptimal classification performance. This work aims to develop algorithms for robust cough sound detection directly from audio recordings.

Methods: The proposed method utilizes SincNet, a one-dimensional convolutional neural network that uses sinc functions in its first convolutional layer to discover meaningful filters in the audio signal, and a bidirectional gated recurrent unit, a type of recurrent neural network, to learn the bidirectional temporal dependencies between the sequences in the audio signal. The filter parameters of the SincNet are initialized using a model of the human auditory filters. The proposed approach is evaluated on a manually annotated dataset of 400 audio recordings containing more than 72,000 cough and non-cough frames.

Results: A validation accuracy of 0.9509 (AUC = 0.9903) and a test accuracy of 0.9496 (AUC = 0.9866) are achieved in detecting cough and non-cough frames in the audio recordings using the proposed method.

Conclusion: The proposed cough detection approach forgoes the need for signal transformation and feature engineering and outperforms multiple baseline methods.
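The abstract names two building blocks: a SincNet front end whose band-pass filters are initialized from an auditory-inspired frequency scale, and a bidirectional GRU over the resulting feature sequence. The sketch below is a minimal PyTorch illustration of that pipeline, not the authors' implementation; the sample rate (16 kHz), number of filters (80), kernel size (251), mel-scale initialization, pooling factor, GRU width, and one-label-per-window output are all assumed, illustrative choices.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def hz_to_mel(hz):
    return 2595.0 * math.log10(1.0 + hz / 700.0)


def mel_to_hz(mel):
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)


class SincConv1d(nn.Module):
    """First conv layer with learnable sinc band-pass filters (mel-initialized)."""

    def __init__(self, out_channels=80, kernel_size=251, sample_rate=16000):
        super().__init__()
        self.sample_rate = sample_rate

        # Cutoff frequencies initialized on the mel scale, a common stand-in
        # for a human auditory filter model (assumption, not from the paper).
        mel_points = torch.linspace(hz_to_mel(30.0),
                                    hz_to_mel(sample_rate / 2 - 100.0),
                                    out_channels + 1)
        hz_points = mel_to_hz(mel_points)
        self.low_hz_ = nn.Parameter(hz_points[:-1].unsqueeze(1))                      # lower cutoffs
        self.band_hz_ = nn.Parameter((hz_points[1:] - hz_points[:-1]).unsqueeze(1))   # bandwidths

        # Symmetric time axis (seconds) and a fixed Hamming window.
        half = (kernel_size - 1) / 2
        self.register_buffer("n_", torch.arange(-half, half + 1).view(1, -1) / sample_rate)
        self.register_buffer("window_", torch.hamming_window(kernel_size).view(1, -1))

    def forward(self, x):                       # x: (batch, 1, samples)
        low = torch.abs(self.low_hz_)
        high = torch.clamp(low + torch.abs(self.band_hz_), max=self.sample_rate / 2)

        def lowpass(f):                         # ideal low-pass impulse response
            return 2 * f * torch.sinc(2 * f * self.n_)

        # Band-pass filter = difference of two low-pass sinc filters, windowed.
        filters = (lowpass(high) - lowpass(low)) * self.window_
        filters = filters / (2 * (high - low))  # rough amplitude normalization
        return F.conv1d(x, filters.unsqueeze(1))


class CoughFrameDetector(nn.Module):
    """Sketch: SincNet front end followed by a bidirectional GRU classifier."""

    def __init__(self, n_filters=80, hidden_size=128):
        super().__init__()
        self.sinc = SincConv1d(out_channels=n_filters)
        self.post = nn.Sequential(nn.BatchNorm1d(n_filters), nn.LeakyReLU(), nn.MaxPool1d(4))
        self.gru = nn.GRU(n_filters, hidden_size, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 2)   # cough vs. non-cough

    def forward(self, wav):                          # wav: (batch, 1, samples)
        feats = self.post(self.sinc(wav))            # (batch, n_filters, time)
        out, _ = self.gru(feats.transpose(1, 2))     # (batch, time, 2 * hidden)
        return self.head(out.mean(dim=1))            # one label per input window


# Usage: classify two 1-second windows of 16 kHz audio.
model = CoughFrameDetector()
logits = model(torch.randn(2, 1, 16000))
print(logits.shape)  # torch.Size([2, 2])
```

Mean-pooling the GRU outputs to a single window-level decision is one simple choice; a per-frame variant would instead apply the linear head to every GRU timestep.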
