Abstract

Detecting meaningful meeting events is important for cross-modal analysis of planning meetings, and many such events relate to a speaker's communication behavior. Visual-audio speaker detection requires mouth positions and movements as visual input. We present techniques to detect the mouth positions and movements of a talking person in meetings. First, we build a skin color model based on a Gaussian distribution; after training on skin color samples, we obtain the model parameters and create a skin color filter from the model with a threshold, which we use to detect face regions for all participants in the meeting. Second, we create a mouth template and perform image matching to find mouth candidates in each face region. Next, exploiting the fact that the skin color of the lip area differs from that of the rest of the face region, we select the mouth area from among the candidates by comparing the dissimilarity of each candidate's color to the original skin color model. Finally, we detect mouth movements by computing normalized cross-correlation coefficients of the mouth area between successive frames. A real-time system has been implemented to track a speaker's mouth position and detect mouth movements. Applications also include video conferencing and improving human-computer interaction (HCI). Examples from meeting environments and other settings are provided.
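The two numerical pieces of the pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes skin color is modeled as a single Gaussian over 2-D chromaticity values, classifies pixels by thresholding the Mahalanobis distance to the model mean, and measures mouth movement with the normalized cross-correlation coefficient between mouth patches in successive frames (a coefficient near 1 indicates little movement). Function names, the choice of color space, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def fit_skin_model(samples):
    """Fit a Gaussian skin color model to training samples
    (an N x 2 array of chromaticity values, e.g. normalized r, g).
    Returns the mean and the inverse covariance matrix."""
    mean = samples.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(samples, rowvar=False))
    return mean, inv_cov

def skin_mask(pixels, mean, inv_cov, threshold):
    """Classify pixels (an M x 2 array) as skin where the squared
    Mahalanobis distance to the model mean falls below `threshold`.
    This acts as the thresholded skin color filter."""
    d = pixels - mean
    mahal_sq = np.einsum('ij,jk,ik->i', d, inv_cov, d)
    return mahal_sq < threshold

def ncc(patch_a, patch_b):
    """Normalized cross-correlation coefficient of two equal-size
    gray-level patches (the mouth area in successive frames).
    A value well below 1 signals mouth movement."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 1.0
```

In practice the skin mask would be cleaned with morphological operations before face regions are extracted, and the movement decision would compare the NCC value against a tuned threshold; both details are omitted here.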
