We consider a class of identification algorithms for distributed parameter systems. Utilizing stochastic optimization techniques, sequences of estimators are constructed by minimizing appropriate functionals. The main effort is to develop weak and strong invariance principles for the underlying algorithms. By means of weak convergence methods, a functional central limit theorem is established. Using the Skorohod imbedding, a strong invariance principle is obtained. These invariance principles provide very precise rates of convergence for parameter estimates, yielding important information for experimental design.

Key words: identification, distributed parameter system, stochastic optimization, invariance principle.

1. Introduction. In a wide range of applications, various problems have been formulated by using partial differential equations with appropriate boundary and initial conditions. Quite frequently, the underlying systems involve some unknown parameters, typically in the form of coefficients in the equation. As a consequence, distributed parameter identification, in which parameters are estimated from observed data, has witnessed rapid progress in recent years. To illustrate, we consider the following examples.

1 Research of this author was supported in part by the National Science Foundation under grant DMS-9022139.
2 Research of this author was supported in part by the Air Force Office of Scientific Research under grant AFOSR-91-0021.

G. Yin and B. G. Fitzpatrick 99

EXAMPLE 1. The following differential equation models fluid transport in cat brain tissue: u_t = D u_xx - V u_x. Here u represents fluid concentration, and the parameters V and D are the convection and diffusion coefficients, respectively.
Banks and Kareiva (1983) (see also Banks and Fitzpatrick, 1989) used least squares techniques to fit this model to observed data, and went on to apply ANOVA-type hypothesis tests for V = 0, in order to verify conjectures concerning the role of convection in grey and white matter.

EXAMPLE 2. The determination of damping terms in flexible structures is crucial to modeling and control objectives. Banks et al. (1987) applied an Euler-Bernoulli model with viscous and Kelvin-Voigt damping terms: u_tt + γ u_t + (EI u_xx + c_D I u_xxt)_xx = f, where EI is the stiffness, f is the forcing function, and c_D I and γ are the Kelvin-Voigt and viscous damping coefficients, respectively. The function u represents displacement.

In Banks and Fitzpatrick (1989), Fitzpatrick (1988), and Banks and Fitzpatrick (1990), the effects of noisy observations on this class of stochastic optimization and parameter estimation procedures were analyzed. In particular, consistency and asymptotic normality were established, with a primary objective of developing appropriate statistics for hypothesis tests. This work complements the papers of Fitzpatrick (1988) and Banks and Fitzpatrick (1990) by developing weak and strong functional invariance principles for the least squares algorithms for distributed parameter identification. Our main concerns are to investigate further the asymptotic properties and to develop rate of convergence results. The importance of these results for applications is obvious: the amount of data required to achieve some specified estimation accuracy would be very helpful information for designing experiments.

Functional central limit theorems and functional laws of iterated logarithms have played important roles in statistical estimation theory involving large samples. In Heyde (1981), Heyde gives an extensive survey of the usefulness of and recent progress in these invariance theorems, which both illustrate and extend the interplay between statistical estimation and stochastic processes.
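Returning to Example 1, the least squares fitting used there can be sketched numerically. The following is only an illustration, not the authors' code: the grid sizes, the Gaussian initial profile, the noise level, and the parameter ranges are all assumptions made for the sketch, and a coarse grid search over (V, D) stands in for a proper nonlinear least squares solver. The transport model u_t = D u_xx - V u_x is solved by an explicit finite-difference scheme, synthetic noisy observations are generated from "true" parameter values, and the coefficients are recovered by minimizing the sum of squared residuals.

```python
import numpy as np

def solve_transport(V, D, nx=41, nt=400, L=1.0, T=0.05):
    """Explicit finite-difference solution of u_t = D u_xx - V u_x on [0, L]
    with a Gaussian initial concentration pulse and zero boundary values.
    Step sizes are chosen so the explicit scheme is stable for the D range used."""
    dx = L / (nx - 1)
    dt = T / nt
    x = np.linspace(0.0, L, nx)
    u = np.exp(-200.0 * (x - 0.3) ** 2)   # hypothetical initial profile
    for _ in range(nt):
        unew = u.copy()
        unew[1:-1] = (u[1:-1]
                      + dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
                      - dt * V * (u[2:] - u[:-2]) / (2 * dx))
        unew[0] = unew[-1] = 0.0
        u = unew
    return u

# synthetic "observed" data: true parameters plus measurement noise
rng = np.random.default_rng(0)
V_true, D_true = 1.0, 0.05
data = solve_transport(V_true, D_true) + 0.005 * rng.standard_normal(41)

# least squares fit by a coarse grid search over (V, D);
# each candidate's cost is the sum of squared residuals against the data
best = None
for V in np.linspace(0.5, 1.5, 11):
    for D in np.linspace(0.01, 0.1, 10):
        r = solve_transport(V, D) - data
        cost = float(np.sum(r**2))
        if best is None or cost < best[0]:
            best = (cost, V, D)
print("estimated V, D:", best[1], best[2])
```

In practice one would replace the grid search with a Gauss-Newton or Levenberg-Marquardt iteration, but the sketch shows the essential structure: a forward PDE solve embedded inside a sum-of-squares criterion.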
The results to be presented in the sequel deal with the convergence of functions constructed out of the sequence of least squares estimators (suitably scaled), and provide portmanteau forms from which other limit theorems may be obtained. A wide range of limit distribution results involving functionals of the sequence of estimators can be inferred by employing the weak invariance principle, and the with probability one convergence rate of the algorithm can be derived by virtue of the strong invariance theorem. These results provide us further insight into the behavior of the nonlinear least squares type of stochastic optimization and identification algorithms.

The rest of the paper is organized as follows. In the next section, we set up the notation and summarize some previous results. Section 3 is devoted to the weak convergence issue. Under suitable conditions, we show that an appropriately scaled sequence converges weakly to a Brownian motion. Exploiting this function space setting further, we derive an almost sure estimate on the error bound in Section 4. As a consequence, the functional law of the iterated logarithm holds.

To proceed, a brief explanation of the notation is in order. We shall use ' to denote the transpose of a matrix and K to denote a generic positive constant; its value may change from time to time. The shorthand notation w.p.1 means with probability one.

2. The general least squares problem. We begin this section by setting up the least squares identification problem. Let X be a compact subset of R^m, and let g : X → R be an unknown continuous function. We make a sequence of observations {Y_i}
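The weak convergence assertion above, that an appropriately scaled sequence converges to a Brownian motion, can be previewed with a Monte Carlo sketch (illustrative only, not from the paper). The simplest instance of the observation model is taken: Y_i = θ + ε_i with unit-variance noise, for which the least squares estimator is the running sample mean. The scaled error path t → √n · t · (θ̂_{[nt]} − θ) is then a Donsker-type partial-sum process, and at t = 1 its distribution should be approximately standard normal.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 2.0          # hypothetical true parameter
n, reps = 2000, 500       # sample size per path, number of replications

endpoints = np.empty(reps)
for r in range(reps):
    eps = rng.standard_normal(n)       # unit-variance observation noise
    y = theta_true + eps               # observations Y_i = theta + eps_i
    k = np.arange(1, n + 1)
    theta_hat = np.cumsum(y) / k       # running least squares estimates
    # scaled error path evaluated on the grid t = k/n; at t = 1 this
    # equals n^{-1/2} * sum of the noise terms
    path = np.sqrt(n) * (k / n) * (theta_hat - theta_true)
    endpoints[r] = path[-1]

# the functional CLT predicts an N(0, 1) limit at t = 1
print("mean, variance at t = 1:", endpoints.mean(), endpoints.var())
```

The same construction, with the sample mean replaced by the nonlinear least squares estimator of the PDE coefficients, is the object whose Brownian limit Section 3 establishes rigorously.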