Abstract

Estimating emotional states during music listening from the electroencephalogram (EEG) has captured the attention of researchers over the past decade. Although deep belief networks (DBNs) have achieved success in various domains, including early work on EEG-based emotion recognition, it remains unclear whether DBNs can improve emotion classification in the music domain, especially under a dynamic strategy that considers the time-varying characteristics of emotion. This paper presents an early study of applying DBNs to improve emotion recognition in music listening, where emotions were annotated continuously in time by the subjects. Our subject-dependent results, obtained with a stratified 10-fold cross-validation strategy, suggest that DBNs can improve valence classification with fractal dimension (FD), power spectral density (PSD), and discrete wavelet transform (DWT) features, and can improve arousal classification with FD and DWT features. Furthermore, we found that the size of the sliding window affected classification accuracy when using features in the time (FD) and time-frequency (DWT) domains, with smaller windows (1–4 seconds) achieving higher performance than larger windows (5–8 seconds).
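To make the evaluation pipeline concrete, the following Python sketch illustrates the kind of sliding-window feature extraction and stratified 10-fold cross-validation described above. It is not the authors' code: the Higuchi estimator stands in for the unspecified FD feature, synthetic signals replace the recorded EEG, and scikit-learn's MLPClassifier is used as a stand-in for the DBN; window length, sampling rate, and channel count are assumptions for illustration only.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier  # stand-in for a DBN classifier
from sklearn.metrics import accuracy_score

def higuchi_fd(x, k_max=8):
    # Higuchi fractal dimension: slope of log curve length vs. log(1/k)
    n = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Normalised curve length at scale k, starting at offset m
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, k_max + 1)
    return np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)[0]

def window_features(eeg, fs, win_sec):
    # Non-overlapping sliding windows; one FD value per channel per window
    win = int(win_sec * fs)
    n_win = eeg.shape[1] // win
    return np.array([[higuchi_fd(ch[i * win:(i + 1) * win]) for ch in eeg]
                     for i in range(n_win)])

# Illustrative data: 32-channel "EEG" at 128 Hz for 120 s, binary valence labels per window
rng = np.random.default_rng(0)
fs, n_ch, dur = 128, 32, 120
eeg = rng.standard_normal((n_ch, fs * dur))
X = window_features(eeg, fs, win_sec=2)      # 2 s windows, i.e. the "smaller" range
y = rng.integers(0, 2, size=len(X))          # placeholder valence labels

# Stratified 10-fold cross-validation of the per-window classifier
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
accs = []
for train_idx, test_idx in skf.split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"Mean 10-fold accuracy: {np.mean(accs):.3f}")

The same loop would apply to PSD or DWT features by swapping the per-window feature function; the window length (win_sec) is the parameter whose effect the abstract reports for the FD and DWT cases.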
