Abstract

Physiological studies have confirmed that the speech signals of depressed and healthy individuals differ. Accordingly, as an application of affective computing, automatic depression level prediction from speech signals has attracted the attention of researchers; existing methods typically estimate an individual's depression severity from the Fourier or Mel spectrograms of the speech signal. However, studies on speech emotion recognition suggest that directly modeling the raw speech signal is more effective for extracting emotion-related information. Inspired by this finding, we develop WavDepressionNet, which models raw speech signals to improve prediction accuracy. In our method, a representation block is proposed to learn a set of basis vectors that construct an optimal transformation space and produce the transformation result of the speech signal (named the Depression Feature Map, DFM), facilitating the perception of depression cues. We further propose an assessment block that not only calibrates the DFM with a designed spatiotemporal self-calibration mechanism to highlight useful elements, but also aggregates the calibrated DFM across multiple temporal ranges with dilated convolutions. Experimental results on the AVEC 2013 and AVEC 2014 depression databases demonstrate the effectiveness of our approach over previous works.
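The abstract's two-stage design can be pictured with a minimal sketch. The code below is an illustrative assumption rather than the authors' implementation: the module names, channel widths, kernel sizes, and dilation rates are all hypothetical, chosen only to show how learned basis vectors over the raw waveform, a self-calibration gate, and dilated temporal convolutions could fit together.

```python
# Minimal sketch of the described pipeline, NOT the authors' released code.
# All hyperparameters (n_basis, kernel sizes, dilations) are assumptions.
import torch
import torch.nn as nn


class RepresentationBlock(nn.Module):
    """Learns basis vectors (here, 1-D conv filters) that project the raw
    waveform into a transformation space, yielding the Depression
    Feature Map (DFM)."""
    def __init__(self, n_basis=64, kernel_size=400, stride=160):
        super().__init__()
        # Each output channel corresponds to one learned basis vector.
        self.basis = nn.Conv1d(1, n_basis, kernel_size, stride=stride)

    def forward(self, wav):                 # wav: (batch, 1, samples)
        return torch.relu(self.basis(wav))  # DFM: (batch, n_basis, frames)


class AssessmentBlock(nn.Module):
    """Calibrates the DFM to highlight useful elements, then aggregates it
    across temporal ranges with dilated convolutions."""
    def __init__(self, channels=64, dilations=(1, 2, 4)):
        super().__init__()
        # The spatiotemporal self-calibration is sketched here as a simple
        # sigmoid gate computed from the DFM itself (an assumption).
        self.gate = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())
        # Increasing dilation rates widen the temporal receptive field.
        self.temporal = nn.ModuleList(
            [nn.Conv1d(channels, channels, 3, dilation=d, padding=d)
             for d in dilations])
        self.head = nn.Linear(channels, 1)  # scalar depression severity score

    def forward(self, dfm):
        x = dfm * self.gate(dfm)            # calibrate: highlight useful elements
        for conv in self.temporal:          # aggregate over temporal ranges
            x = torch.relu(conv(x))
        return self.head(x.mean(dim=-1))    # pool over time, regress severity


wav = torch.randn(2, 1, 16000)              # two 1-second clips at 16 kHz
score = AssessmentBlock()(RepresentationBlock()(wav))
print(score.shape)                          # torch.Size([2, 1])
```

In this sketch the gate plays the role the abstract assigns to self-calibration (re-weighting DFM elements), and the stacked dilated convolutions stand in for multi-range temporal aggregation; the paper itself defines the actual mechanisms.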
