Abstract

The flow state, in which individuals perform at the peak of their ability and are completely immersed in a task while experiencing a sense of elation, has recently become the subject of active research. We introduce a novel approach that uses convolutional neural networks to recognize flow in live-performing musicians by analyzing their facial expressions. A modified and partially re-trained version of the popular ResNet-50 architecture is employed for binary classification of flow, achieving a detection accuracy of 77.55%. Training and evaluation are performed on labeled YouTube videos of musicians, with a labeling strategy that was verified through a perception experiment. The maximum accuracy within a 5-fold cross-validation is 74.98%, with a mean accuracy of 65.10%. The results indicate that the state of flow is indeed recognizable from the facial expressions of musicians. In addition, the utility of the presented model is demonstrated in two exemplary applications: predicting the popularity of YouTube videos based on the flow recognized in performers' faces by our system, and correlating flow with six discrete emotions (neutral, happy, angry, fear, disgust, surprise).
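The abstract does not include code, but a minimal sketch of the kind of model it describes, a ResNet-50 with its classification head replaced for binary flow detection and only the later layers re-trained, could look as follows. PyTorch is an assumption here; the authors' actual framework, the layers they chose to re-train, and their hyperparameters are not stated in the abstract.

import torch
import torch.nn as nn
from torchvision import models

def build_flow_classifier() -> nn.Module:
    # Hypothetical sketch: start from an ImageNet-pretrained ResNet-50 backbone.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

    # Freeze all pretrained weights first ("partially re-trained").
    for param in model.parameters():
        param.requires_grad = False

    # Unfreeze only the last residual block so it can adapt to facial-expression data.
    for param in model.layer4.parameters():
        param.requires_grad = True

    # Replace the 1000-class ImageNet head with a 2-class flow / no-flow head.
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

if __name__ == "__main__":
    clf = build_flow_classifier()
    dummy_face = torch.randn(1, 3, 224, 224)  # one face crop at ImageNet input size
    logits = clf(dummy_face)
    print(logits.shape)  # torch.Size([1, 2]): flow vs. no-flow logits

In such a setup, the frozen backbone would supply generic visual features while the re-trained block and the new head learn the flow-specific facial cues; which layers are actually re-trained in the paper is not specified in this abstract.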
