Abstract

Depression is a serious mental disorder that has received increasing attention from society. Because speech is easy to acquire, researchers have proposed a variety of automatic speech-based depression recognition algorithms. Feature selection and algorithm design are the main difficulties in speech-based depression recognition. In this work, we propose the spatial-temporal feature network (STFN) for depression recognition, which captures the long-term temporal dependence of audio sequences. First, to obtain a better feature representation for depression analysis, we develop a self-supervised learning framework, the Vector Quantized Wav2vec Transformer Net (VQWTNet), to map between speech features and phonemes via transfer learning. Second, the stacked gated residual blocks in SFNet integrate causal and dilated convolutions to capture multiscale contextual information by progressively expanding the receptive field. In addition, instead of an LSTM, our method employs the hierarchical contrastive predictive coding (HCPC) loss in HCPCNet to capture the long-term temporal dependencies of speech, reducing the number of parameters and making the network easier to train. Finally, experimental results on DAIC-WOZ (AVEC 2017) and E-DAIC (AVEC 2019) show that the proposed model significantly improves the accuracy of depression recognition. On both datasets, our method far exceeds the baseline and achieves results competitive with state-of-the-art methods.
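To make the gated residual blocks concrete, the sketch below shows one common way such a block can be assembled from dilated causal 1-D convolutions with WaveNet-style tanh/sigmoid gating. This is only an illustrative assumption, not the paper's implementation: the class name GatedDilatedResidualBlock, the channel sizes, the gating scheme, and the dilation schedule are all hypothetical.

```python
# Hypothetical sketch (PyTorch) of a gated residual block built from dilated
# causal 1-D convolutions, in the spirit of the stacked blocks described above.
# Names, channel sizes, and the gating scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedDilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Left-pad so the convolution is causal: output at time t sees only inputs <= t.
        self.causal_pad = (kernel_size - 1) * dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.residual_conv = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        padded = F.pad(x, (self.causal_pad, 0))
        # Gated activation: the tanh branch carries content, the sigmoid branch gates it.
        out = torch.tanh(self.filter_conv(padded)) * torch.sigmoid(self.gate_conv(padded))
        # Residual connection keeps a deep stack of blocks easy to train.
        return x + self.residual_conv(out)


# Stacking blocks with exponentially growing dilation expands the receptive
# field multiplicatively while keeping the parameter count modest.
stack = nn.Sequential(*[GatedDilatedResidualBlock(64, dilation=2 ** i) for i in range(6)])
features = torch.randn(8, 64, 500)   # e.g. 8 utterances, 64-dim frames, 500 frames
context = stack(features)            # same shape, but with much larger temporal context
```

Doubling the dilation at each block lets the receptive field grow exponentially with depth, which is how a stack of small convolutions can cover long stretches of speech without recurrence.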
