An analysis technique is discussed whereby speech spectra are compared with spectra generated by a computer-simulated model of articulation. In this model, a spectrum is determined by specifying as input parameters a vocal-tract area function (or a simplified configurational description) and the source spectral envelope and location. The input parameters are adjusted to obtain a “spectral match” between the internally generated spectrum and the speech spectrum, thereby specifying the articulatory parameters that correspond to the given speech spectrum. Data relating to matches for several classes of speech sounds are presented. Questions pertaining to the uniqueness and applicability of the parameters obtained by this procedure are discussed, with particular reference to descriptions of vocal-tract dynamics. The possibility of using parameters derived by this technique to control a dynamic vocal-tract analog synthesizer is also considered. [This work was supported in part by the U. S. Army, the U. S. Air Force Office of Scientific Research, and the U. S. Office of Naval Research; in part by the U. S. Air Force (Electronic Systems Division) under Contract AF 19(604)-6102; in part by the National Science Foundation (Grant G-16526); and in part by the National Institutes of Health (Grant MH-04737-02).]
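
The analysis-by-synthesis loop described above lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes a lossless concatenated-tube model of the tract (chain-matrix frequency response with an ideal open lip termination), a log-magnitude spectral error criterion, and a Nelder-Mead parameter search. The function names (`tube_spectrum`, `spectral_match`), the ten-section area function, and all numerical constants are illustrative assumptions, and the "speech" spectrum is synthesized from a hidden area function as a stand-in for measured data, since the original spectra are not reproduced here.

```python
# Minimal analysis-by-synthesis sketch: adjust a vocal-tract area function
# until the model spectrum matches a target spectrum. All modeling choices
# here are illustrative assumptions, not the original implementation.
import numpy as np
from scipy.optimize import minimize

RHO_C = 415.0       # characteristic impedance of air, kg/(m^2 s) (approx.)
C_SOUND = 350.0     # speed of sound in warm, moist air, m/s (approx.)
TRACT_LEN = 0.17    # assumed vocal-tract length, m

def tube_spectrum(areas, freqs):
    """Log-magnitude transfer function (glottal to lip volume velocity) of a
    lossless tube built from equal-length sections with the given areas."""
    seg_len = TRACT_LEN / len(areas)
    spectrum = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        k = 2.0 * np.pi * f / C_SOUND            # wavenumber
        m = np.eye(2, dtype=complex)             # chain (ABCD) matrix
        for a in areas:                          # glottis-to-lips order
            cos_kl, sin_kl = np.cos(k * seg_len), np.sin(k * seg_len)
            section = np.array([[cos_kl, 1j * (RHO_C / a) * sin_kl],
                                [1j * (a / RHO_C) * sin_kl, cos_kl]])
            m = m @ section
        # Ideal open termination (zero lip impedance): U_lip/U_glottis = 1/D,
        # so resonances appear where |D| is small.
        spectrum[i] = -20.0 * np.log10(np.abs(m[1, 1]) + 1e-12)
    return spectrum

def spectral_match(target_db, freqs, n_sections=10):
    """Adjust the area function to minimize the spectral mismatch."""
    def error(log_areas):
        areas = np.exp(log_areas)                # keep areas positive
        return np.mean((tube_spectrum(areas, freqs) - target_db) ** 2)
    x0 = np.full(n_sections, np.log(1e-4))       # start from a uniform 1-cm^2 tube
    result = minimize(error, x0, method="Nelder-Mead",
                      options={"maxiter": 5000, "xatol": 1e-4})
    return np.exp(result.x), result.fun

if __name__ == "__main__":
    freqs = np.linspace(100.0, 4000.0, 80)
    # Stand-in "speech" spectrum, generated from a hidden area function (cm^2 -> m^2).
    true_areas = np.array([2.6, 1.8, 1.0, 0.8, 1.2, 2.0, 3.5, 5.0, 6.0, 5.5]) * 1e-4
    target_db = tube_spectrum(true_areas, freqs)
    fitted_areas, residual = spectral_match(target_db, freqs)
    print("residual spectral error (dB^2):", residual)
    print("fitted areas (cm^2):", np.round(fitted_areas * 1e4, 2))
```

One detail of this sketch bears directly on the uniqueness question raised in the abstract: for the lossless tube with ideal terminations, scaling every section area by the same constant leaves the lip-velocity transfer function unchanged, so a spectral match can determine the area function only up to such ambiguities, and the recovered areas need not coincide with the generating ones.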