Abstract

Previous studies have used subjective questionnaires to evaluate the acoustic environment and soundscape. In this paper, the facial expressions of subjects were recorded by camera while they listened to 32 different sound events. Using a machine learning method, emotions were recognized from these facial expressions. The results showed significant differences in the subjects' disgust and pleasure responses across the 32 sound events. The change in subjects' emotions over time and the influence of age and gender are also discussed. This method provides a valuable reference for the subjective evaluation of acoustic environments and the study of soundscapes.
