Abstract

This paper presents a deep learning framework for detecting COVID-19 positive subjects from their cough sounds. The proposed approach comprises two main steps. In the first step, referred to as the front-end feature extraction, we generate a feature representation of the cough sound by combining an embedding extracted from a pre-trained model with handcrafted features extracted from the raw audio recording. In the second step, the combined features are fed into different back-end classification models to detect COVID-19 positive subjects. In our experiments on the Track-2 dataset of the Second 2021 DiCOVA Challenge, the system achieved the second-highest ranking with an AUC score of 81.21 and the top F1 score of 53.21 on the Blind Test set, improving on the challenge baseline by 8.43% and 23.4% respectively and demonstrating that the approach is deployable, robust, and competitive with state-of-the-art systems.
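
The following is a minimal sketch of the two-step pipeline described above, not the authors' actual implementation: the pre-trained embedding is represented by a placeholder `embed_fn`, the handcrafted features are assumed to be pooled MFCC statistics, and the back-end classifier is a generic scikit-learn gradient-boosted model standing in for the models explored in the paper.

```python
# Sketch of the two-step pipeline (assumptions: `embed_fn` is a placeholder for
# a pre-trained audio embedding model; MFCC mean/std pooling stands in for the
# handcrafted features; GradientBoostingClassifier stands in for the back-end).
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier


def handcrafted_features(path, sr=16000, n_mfcc=20):
    """Handcrafted features: MFCCs pooled (mean/std) over time from raw audio."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def front_end(path, embed_fn):
    """Step 1: combine the pre-trained embedding with handcrafted features."""
    return np.concatenate([embed_fn(path), handcrafted_features(path)])


def train_back_end(paths, labels, embed_fn):
    """Step 2: fit a back-end classifier on the combined feature vectors."""
    X = np.stack([front_end(p, embed_fn) for p in paths])
    clf = GradientBoostingClassifier()
    clf.fit(X, labels)
    return clf
```

In this sketch the trained classifier's predicted probabilities would then be scored with AUC and F1, mirroring the challenge evaluation metrics reported above.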
