The biomechanical model of the tongue developed by Payan and Perrier [Speech Commun. 22, 185–205 (1997)] is used to generate tongue movements in VCV sequences, where V is [i], [a], or [u] and C is the stop consonant [k]. The elastic properties of tongue tissues are accounted for by finite-element modeling, and the mechanisms underlying the generation of muscle forces are modeled according to Feldman’s Equilibrium Point Hypothesis. Each elementary sound is associated with a target, described as a static equilibrium position of the tongue, which is likely to vary with the phonemic context. In vowel production the target is assumed to be actually reached, whereas the production of stop consonants consists of movements toward virtual targets located beyond the palate, which therefore cannot be reached. The collision of the tongue with the palate is modeled with a penalty method, based on a nonlinear relationship between the contact force and the position/velocity of points on the tongue surface [Marhefka and Orin, IEEE Conf. Robotics and Automation, 1662–1668 (1996)]. The acoustic signal is generated with a one-dimensional low-frequency approximation based on the Kelly–Lochbaum model. Synthetic kinematic and acoustic signals are compared with data from human speakers. [Work supported by CNRS, NSF, and NIH.]
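The cited Marhefka–Orin penalty method uses a nonlinear spring–damper (Hunt–Crossley-type) contact law in which the damping term scales with the penetration depth, so the contact force rises continuously from zero at first contact instead of jumping impulsively. A minimal sketch of such a contact law follows; the parameter values and the function name are illustrative assumptions, not values from the abstract or the cited papers:

```python
def contact_force(delta, delta_dot, k=1e4, lam=1e3, n=1.5):
    """Nonlinear-damping penalty contact law (Hunt-Crossley form):
        f = k * delta**n + lam * delta**n * delta_dot
    delta      penetration depth of a tongue-surface point into the palate (m)
    delta_dot  penetration velocity (m/s)
    k, lam, n  stiffness, damping, and exponent (illustrative values).
    Because the damping term is multiplied by delta**n, the force is zero
    at the instant of first contact, avoiding a discontinuous impact force."""
    if delta <= 0.0:
        return 0.0  # no interpenetration, no contact force
    f = k * delta**n + lam * delta**n * delta_dot
    return max(f, 0.0)  # contact can only push the tongue away, never pull
```

During withdrawal (negative `delta_dot`) the damping term can exceed the elastic term; clamping at zero prevents the model from producing an unphysical adhesive force.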
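The Kelly–Lochbaum model mentioned for acoustic synthesis represents the vocal tract as a chain of uniform tube sections; at each junction a reflection coefficient r_i = (A_i − A_{i+1}) / (A_i + A_{i+1}) scatters the forward- and backward-traveling pressure waves. The following is a simplified sketch under the assumption of one-sample delay per section and fixed glottis/lip reflection coefficients; all names and values are illustrative, not from the abstract:

```python
import numpy as np

def kelly_lochbaum(areas, excitation, r_lips=-0.9, r_glottis=0.9):
    """Sketch of a one-dimensional Kelly-Lochbaum ladder simulation.
    areas:      cross-sectional areas of the tube sections (glottis to lips)
    excitation: source samples injected at the glottis end
    Each junction i scatters the waves with
        r_i = (A_i - A_{i+1}) / (A_i + A_{i+1})."""
    n = len(areas)
    r = [(areas[i] - areas[i + 1]) / (areas[i] + areas[i + 1])
         for i in range(n - 1)]
    fwd = np.zeros(n)  # right-going (toward lips) wave in each section
    bwd = np.zeros(n)  # left-going (toward glottis) wave
    out = []
    for x in excitation:
        new_fwd = np.empty(n)
        new_bwd = np.empty(n)
        # glottis boundary: inject the source plus the reflected backward wave
        new_fwd[0] = x + r_glottis * bwd[0]
        # Kelly-Lochbaum scattering at each interior junction
        for i in range(n - 1):
            new_fwd[i + 1] = (1 + r[i]) * fwd[i] - r[i] * bwd[i + 1]
            new_bwd[i] = r[i] * fwd[i] + (1 - r[i]) * bwd[i + 1]
        # lip boundary: partial reflection; the transmitted part radiates
        new_bwd[n - 1] = r_lips * fwd[n - 1]
        out.append((1 + r_lips) * fwd[n - 1])
        fwd, bwd = new_fwd, new_bwd
    return np.array(out)
```

With a uniform area function all interior reflection coefficients are zero, so an impulse at the glottis propagates one section per sample and emerges at the lips attenuated only by the lip transmission factor, which is a convenient sanity check for the junction equations.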