Abstract

Recent advances in automated face tracking have made it possible for laboratory phonologists to measure lip motion using technology no more advanced than a common webcam. The present paper introduces a lip-tracking approach designed specifically for web-based research. The central innovation is a custom extension for jsPsych, an existing JavaScript framework for running behavioral experiments online. The extension gathers data from the participant’s webcam and processes them through FaceMesh, an open-source JavaScript face tracker. Face tracking happens on the fly inside the participant’s browser, and only time-stamped vertical or horizontal lip apertures are saved to the experiment server. The extension thus allows experiments implemented in jsPsych to collect de-identified lip kinematics from participants seated at their own home computers. After explaining the core functionality of the extension, the paper presents two validation experiments. The first establishes that utterances with different lip kinematics yield clearly distinct lip aperture trajectories, and that the timing of a key kinematic landmark agrees with the timing of acoustic landmarks obtained for the same utterances. The second replicates a vowel-priming result previously demonstrated with an earlier face-tracking system operating on recorded facial video. All associated scripts have been made publicly available.
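
To make the described pipeline concrete, the sketch below shows how in-browser lip-aperture tracking of this general kind can be wired up. It is not the authors’ extension: the use of MediaPipe’s JavaScript FaceMesh API (`FaceMesh`, `setOptions`, `onResults`, `send`) and the specific landmark indices (13/14 for the inner lip midpoints, 61/291 for the mouth corners, from MediaPipe’s 468-point face topology) are assumptions made for illustration. As in the abstract’s description, only derived apertures, never video frames, are retained.

```javascript
// Minimal sketch of in-browser lip-aperture tracking (illustrative only;
// assumes MediaPipe's JavaScript FaceMesh distribution, not the authors' code).
import { FaceMesh } from '@mediapipe/face_mesh';

const faceMesh = new FaceMesh({
  // Load the WASM/model assets from the public CDN.
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`,
});
faceMesh.setOptions({ maxNumFaces: 1, refineLandmarks: true });

// Only time-stamped apertures are kept; raw frames are never stored.
const samples = [];

faceMesh.onResults((results) => {
  const lm = results.multiFaceLandmarks && results.multiFaceLandmarks[0];
  if (!lm) return; // no face detected in this frame
  // Assumed landmark indices from the 468-point topology:
  // 13 = inner upper lip, 14 = inner lower lip, 61/291 = mouth corners.
  // Landmarks are in normalized image coordinates (0..1).
  const vertical = Math.hypot(lm[13].x - lm[14].x, lm[13].y - lm[14].y);
  const horizontal = Math.hypot(lm[61].x - lm[291].x, lm[61].y - lm[291].y);
  samples.push({ t: performance.now(), vertical, horizontal });
});

// Push webcam frames through the tracker once per animation frame.
async function track(video) {
  await faceMesh.send({ image: video });
  requestAnimationFrame(() => track(video));
}
```

At the end of a trial, a jsPsych extension along these lines would serialize `samples` into the experiment’s data store (plausibly in the extension’s `on_finish` hook), so that only the aperture time series ever leaves the participant’s machine.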
