Abstract

In intelligent environments, computer systems do not merely serve as passive input devices waiting for user interaction; they actively analyze their environment and adapt their behaviour to changes in environmental parameters. One essential capability for achieving this goal is analyzing the mood, emotions, and dispositions a user experiences while interacting with such systems. Features that allow inferring these parameters can be extracted from auditory as well as visual sensory input streams. In the visual domain, facial expressions in particular are known to contain rich information about a user's emotional state and can be detected using static and/or dynamic image features. During interaction, facial expressions are rarely performed in isolation; most of the time they co-occur with head movements. Optical-flow-based facial features are therefore often compromised by these additional motions: parts of the optical flow may be caused by rigid head motions, while other parts reflect deformations resulting from facial expressivity (non-rigid motions). In this work, we propose first steps towards an optical-flow-based separation of rigid head motions from non-rigid motions caused by facial expressions. We suggest that, after separation, both head movements and facial expressions can serve as a basis for recognizing a user's emotions and dispositions, thus allowing a technical system to adapt effectively to the user's state.
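
To make the decomposition idea concrete, the sketch below (not the authors' method, only an illustrative assumption) computes dense Farneback optical flow between two consecutive face frames with OpenCV, fits a global similarity transform via RANSAC as a stand-in for the rigid head motion, and treats the remaining residual flow as the non-rigid component attributable to facial expression. The function name, the subsampling stride, and the RANSAC threshold are all illustrative choices.

```python
# Illustrative sketch: split dense optical flow into a global rigid
# (similarity) component and a non-rigid residual. This is NOT the
# method proposed in the paper, only a plausible baseline.
import cv2
import numpy as np


def separate_rigid_nonrigid(prev_gray, next_gray):
    """Return (rigid_flow, nonrigid_flow) estimated from two grayscale frames."""
    h, w = prev_gray.shape

    # Dense optical flow (Farneback) between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Subsample the pixel grid to keep the robust model fit fast.
    step = 4
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    src = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float32)
    dst = src + flow[ys, xs].reshape(-1, 2)

    # Fit a global similarity transform (rotation, scale, translation) with
    # RANSAC; this approximates the rigid head motion shared by all pixels.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                       ransacReprojThreshold=2.0)

    # Rigid flow predicted by the global model at every pixel.
    all_xy = np.stack(np.meshgrid(np.arange(w), np.arange(h)),
                      axis=-1).reshape(-1, 2).astype(np.float32)
    rigid_dst = all_xy @ M[:, :2].T + M[:, 2]
    rigid_flow = (rigid_dst - all_xy).reshape(h, w, 2)

    # Residual flow attributed to non-rigid facial deformation.
    nonrigid_flow = flow - rigid_flow
    return rigid_flow, nonrigid_flow
```

Note that a 2D similarity (or affine) model is only a rough approximation of a 3D rigid head motion projected onto the image plane; out-of-plane rotations leave residuals that such a simple model cannot absorb, which is part of what makes the separation problem addressed in this work non-trivial.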
