Abstract

Multimodal communicative behaviours depend on numerous factors, such as the communicative situation, the task, the culture, the relationship between the people involved, and their role, age, and background. This paper addresses the identification of the producers of co-occurring communicative non-verbal behaviours in a manually annotated multimodal corpus of spontaneous conversations. The work builds upon a preceding study in which a support vector machine was trained to identify the producers of communicative body behaviours using the annotations of individual behaviour types. In the present work, we investigate to what extent classification results can be improved by adding the shape descriptions of co-occurring body behaviours and temporal information to the training data. The inclusion of co-occurring behaviours reflects the fact that people often use several body behaviours at the same time when they communicate. The results of the classification experiments show that the identification of the producers of communicative behaviours improves significantly when co-occurring behaviours are added to the training data. Classification performance improves further when temporal information is also used. Even though the results vary from one body behaviour type to another, they all show that the individual variation of communicative behaviours is large even in a very homogeneous group of people, and that this variation is better modelled using information on co-occurring behaviours than on individual behaviours. Being able to identify, and then react correctly to, the individual behaviours of people is extremely important in the field of social robotics, which involves the use of robots in private homes, where they must interact naturally with different types of people with varying needs.
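
As an illustration of the general approach described above, the sketch below shows how per-behaviour feature vectors might be augmented with the shapes of co-occurring behaviours and with temporal information before training a support vector machine to identify the producer. This is a minimal sketch, not the authors' setup: the feature names, shape labels, and toy data are hypothetical, and scikit-learn's SVC stands in for the classifier used in the study.

```python
# Illustrative sketch (not the authors' code): an SVM that identifies the
# producer of an annotated behaviour. Feature names and data are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Each sample describes one annotated behaviour: its own shape label, the
# shape labels of co-occurring behaviours, and temporal information
# (here, the start time and duration of the annotation).
samples = [
    {"shape": "HeadNod", "cooc_facial": "Smile", "cooc_hand": "None",
     "start": 12.4, "duration": 0.8},
    {"shape": "HeadShake", "cooc_facial": "None", "cooc_hand": "Beat",
     "start": 30.1, "duration": 1.2},
    {"shape": "HeadNod", "cooc_facial": "Smile", "cooc_hand": "Beat",
     "start": 45.6, "duration": 0.5},
    {"shape": "HeadTilt", "cooc_facial": "Frown", "cooc_hand": "None",
     "start": 51.0, "duration": 0.9},
]
producers = ["A", "B", "A", "B"]  # target labels: who produced the behaviour

# One-hot encode the categorical shape features, pass numeric features
# through, and train a support vector machine on the combined vectors.
model = make_pipeline(DictVectorizer(sparse=False), SVC(kernel="rbf"))
model.fit(samples, producers)

# Predict the producer of a new behaviour given its co-occurring context.
new_behaviour = {"shape": "HeadNod", "cooc_facial": "Smile",
                 "cooc_hand": "None", "start": 60.2, "duration": 0.7}
print(model.predict([new_behaviour]))
```

Dropping the `cooc_*` and temporal keys from the sample dictionaries would correspond to the individual-behaviour baseline of the preceding study, which is how the contribution of the added features can be compared.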
