Abstract

We propose a system that structures a meeting log by detecting and tagging participants' actions using acceleration sensors. The system detects head movements such as nodding, as well as motion during utterances, via acceleration sensors attached to the head of every participant in a meeting. In an evaluation of utterance detection and three types of head movement (nodding, looking around, and tilting one's head), the recognition accuracy was 44.4% for utterances, 38.9% for nodding, and 39.4% for looking around. In addition, we developed the Meeting Review Tree, an application that recognizes meeting participants' utterances and three kinds of actions using acceleration and angular-velocity sensors and tags them onto recorded video. The proposed system hierarchizes the structure of the meeting into three layers, each tagged with context: the first layer represents transitions between reporters during the meeting, the second layer represents changes of speaker within a report, and the third layer represents motions such as nodding. In an evaluation experiment, the recognition accuracy was 57.0% for the first layer and 61.0% for the second layer. A machine-learning approach improved the recognition accuracies of utterances and motions: the utterance recognition rate reached 97.5%, and the motion recognition rate was 52.4% for nodding and 53.6% for looking around.
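The abstract does not specify the detection algorithm, but a minimal sketch of the general idea, flagging head movements such as nods as short bursts of variance in a head-worn accelerometer's vertical-axis signal, might look like the following. The function name, window size, and threshold are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: flag candidate nod events as fixed-size windows
# of vertical head acceleration whose variance exceeds a threshold.
# window and threshold values are assumed for illustration only.

def detect_nods(accel_z, window=10, threshold=0.5):
    """Return start indices of non-overlapping windows with high variance."""
    events = []
    for start in range(0, len(accel_z) - window + 1, window):
        seg = accel_z[start:start + window]
        mean = sum(seg) / window
        var = sum((x - mean) ** 2 for x in seg) / window
        if var > threshold:
            events.append(start)
    return events

# Synthetic trace: flat baseline with one nod-like oscillation burst.
trace = ([0.0] * 30
         + [1.5, -1.2, 1.0, -0.8, 0.6, -0.4, 0.3, -0.2, 0.1, 0.0]
         + [0.0] * 30)
print(detect_nods(trace))  # → [30]
```

A real system would likely use overlapping windows, sensor fusion with the angular-velocity signal, and a trained classifier, consistent with the machine-learning improvement reported above.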
