Recent advances in automated face-tracking have made it possible for laboratory phonologists to measure lip motion using technology no more advanced than a common webcam. The present paper introduces a lip-tracking approach specifically designed for use in web-based research. The central innovation is a custom extension for jsPsych, an existing JavaScript framework for running behavioral experiments online. The extension gathers data from the participant’s webcam and processes them through FaceMesh, an open-source JavaScript face-tracker. Face-tracking happens on the fly inside the participant’s browser, and only time-stamped vertical or horizontal lip apertures are saved to the experiment server. In this way, the extension allows experiments implemented in jsPsych to collect de-identified lip kinematics from participants seated at their own home computers. After explaining the core functionality of the extension, this paper presents two validation experiments. The first establishes that utterances produced with distinct lip movements yield clearly distinguishable lip aperture trajectories, and that the timing of a key kinematic landmark agrees with the timing of acoustic landmarks obtained for the same utterances. The second replicates a vowel-priming effect previously demonstrated with an earlier face-tracking system that relied on saved facial video. All associated scripts have been made publicly available.
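To make the browser-side pipeline concrete, the following is a minimal sketch, not the published extension, of how per-frame lip apertures might be computed in the browser with MediaPipe’s FaceMesh JavaScript API. The `@mediapipe/face_mesh` import, `setOptions`/`onResults`/`send` calls, and the inner-lip landmark indices (13 and 14) follow MediaPipe’s documented conventions; the variable names and the sampling loop are illustrative assumptions only.

```javascript
// Hypothetical sketch (not the authors' extension): per-frame vertical lip
// aperture from MediaPipe FaceMesh, run entirely in the participant's browser.
import { FaceMesh } from '@mediapipe/face_mesh';

const video = document.querySelector('video'); // webcam stream element
const apertures = [];                          // time-stamped { t, aperture } samples

const faceMesh = new FaceMesh({
  locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${f}`,
});
faceMesh.setOptions({ maxNumFaces: 1, refineLandmarks: true });

faceMesh.onResults((results) => {
  const lm = results.multiFaceLandmarks?.[0];
  if (!lm) return; // no face detected in this frame
  // Euclidean distance between the inner-lip landmarks (13 = upper, 14 = lower),
  // in normalized image coordinates.
  const dx = lm[13].x - lm[14].x;
  const dy = lm[13].y - lm[14].y;
  apertures.push({ t: performance.now(), aperture: Math.hypot(dx, dy) });
});

// Feed a frame to the tracker on each animation tick.
async function tick() {
  await faceMesh.send({ image: video });
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```

Because only the `{ t, aperture }` pairs are retained, raw facial video never needs to leave the participant’s machine, which is the de-identification property the abstract describes.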