Detection of depressive symptoms from spoken content has emerged as an efficient Artificial Intelligence (AI) tool for diagnosing this serious mental health condition. Since speech is a highly sensitive form of data, privacy-enhancing measures must be in place for this technology to be useful. A common approach to enhancing speech privacy is adversarial learning, which conceals a speaker's identity or specific attributes while maintaining performance on the primary task. Although this technique works well for applications such as speech recognition, it is often ineffective for depression detection due to the interplay between certain speaker attributes and depression detection performance. This paper systematically studies that interplay by examining how obfuscating specific speaker attributes (age, education) through adversarial learning impacts the performance of a depression detection model. We highlight the relevance of two previously unexplored speaker attributes to depression detection, and consider a multimodal (audio-lexical) setting to expose the relative vulnerabilities of the modalities under obfuscation. Results on a publicly available, clinically validated depression detection dataset show that attempts to disentangle age/education attributes through adversarial learning cause a large drop in depression detection accuracy, especially for the text modality. This calls for revisiting how privacy mitigation should be achieved for depression detection, and indeed for any human-centric application.
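The adversarial-learning setup the abstract refers to can be illustrated with a minimal gradient-reversal sketch: a shared encoder feeds two heads, one for the primary task (depression) and one for the attribute to be obfuscated (e.g. age), and the encoder ascends the adversary's gradient so the shared representation stops encoding that attribute. All data, dimensions, and labels below are hypothetical stand-ins, not the paper's model or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 5 features (stand-ins for audio/lexical features);
# binary labels for the primary task and for the attribute to conceal.
X = rng.normal(size=(100, 5))
y_dep = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # primary task: depression
y_attr = (X[:, 1] - 0.3 * X[:, 2] > 0).astype(float)  # attribute to hide: age group

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shared linear encoder plus two logistic heads.
W_enc = rng.normal(scale=0.1, size=(5, 4))
w_dep = rng.normal(scale=0.1, size=4)
w_adv = rng.normal(scale=0.1, size=4)

lr, lam = 0.1, 1.0  # lam scales the reversed adversary gradient

for _ in range(200):
    H = X @ W_enc                      # shared representation
    p_dep = sigmoid(H @ w_dep)
    p_adv = sigmoid(H @ w_adv)

    # Gradients of binary cross-entropy w.r.t. each head's logits.
    g_dep = (p_dep - y_dep) / len(X)
    g_adv = (p_adv - y_attr) / len(X)

    # Both heads descend their own loss (the adversary tries its best
    # to predict the attribute from H).
    w_dep -= lr * (H.T @ g_dep)
    w_adv -= lr * (H.T @ g_adv)

    # Encoder: gradient reversal -- descend the primary-task gradient
    # but *ascend* the adversary's, so H stops encoding the attribute.
    grad_enc = X.T @ (np.outer(g_dep, w_dep) - lam * np.outer(g_adv, w_adv))
    W_enc -= lr * grad_enc

acc_dep = np.mean((sigmoid(X @ W_enc @ w_dep) > 0.5) == y_dep)
acc_adv = np.mean((sigmoid(X @ W_enc @ w_adv) > 0.5) == y_attr)
print(f"depression acc: {acc_dep:.2f}  adversary acc: {acc_adv:.2f}")
```

The abstract's finding is that this trade-off is not benign for depression detection: because the concealed attributes correlate with depression cues (especially lexical ones), pushing the adversary toward chance also degrades the primary head.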