In structural equation modeling the statistician needs assumptions in order (1) to guarantee that the estimates are consistent for the parameters of interest, and (2) to evaluate the precision of the estimates and the significance level of test statistics. With respect to purpose (1), the typical types of analysis (ML and WLS) are robust against violation of distributional assumptions; that is, estimates remain consistent for any type of WLS analysis and for any distribution of z. (It should be noted, however, that (1) is sensitive to structural misspecification.) A typical assumption used for purpose (2) is that the vector z of observable variables follows a multivariate normal distribution. In relation to purpose (2), distributional misspecification may have consequences for efficiency, as well as for the power of test statistics (see Satorra, 1989a); that is, some estimation methods may be more precise than others for a given specific distribution of z. For instance, ADF-WLS is asymptotically optimal under a variety of distributions of z, while the asymptotic optimality of NT-WLS may be lost when the data are non-normal.

Violation of a distributional assumption may thus have consequences for purpose (2). However, recent theory, such as that described in Sections 7 and 8, shows that asymptotic variances of estimates and asymptotic null distributions of test statistics derived under the normality assumption may be correct even when z is non-normal, provided certain model conditions hold (the conditions of Theorem 1). That is, in a specific application with z non-normally distributed, the assumption that z is normal plays the role of a “working device” that facilitates calculation of the correct distribution of the statistics of interest. This corresponds to what in Sections 7 and 8 has been called asymptotic robustness. For most of the models considered in practice, replacing the assumption of uncorrelation by the assumption of independence implies attaining the properties of asymptotic robustness; in that case, in order to evaluate the asymptotic behavior of the statistics of interest, an NT form for Γ produces correct results even for non-normal data. This robustness result applies regardless of the type of fitting criterion used.

The distinction between ‘uncorrelation’ and ‘independence’ becomes crucial when dealing with the asymptotic robustness issue. Statistical independence among variables of the model guarantees that the distributions of the statistics of interest are asymptotically free of the specific (non-normal) distributions of those variables; thus an NT form for Γ applies. As an example where this distinction is apparent, consider a simple regression model with a heteroskedastic disturbance term. Here the disturbance term is uncorrelated with the regressor, but its variance varies with the value of the regressor. For a study showing that ADF-WLS protects against heteroskedasticity of the errors, while ML will generally fail, see Mooijaart and Satorra (1987). In regression analysis the usual method for detecting heteroskedasticity is to inspect residual plots. Presumably, also in structural equation modeling, the need to distinguish between uncorrelation and independence will force the researcher to go back to the raw data in order to carry out a similar type of ‘residuals’ inspection.

In conclusion, an important consideration is to compute the sampling variability of estimates and test statistics using appropriate formulae, without requiring that the estimation procedure be the ‘best’ in some sense.
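To make the phrase ‘appropriate formulae’ concrete, the following sketch uses generic WLS notation; the symbols s, σ(θ), Δ, V, Σ and the duplication matrix D are standard in this literature but are not defined in the present section, and details may differ from the exact setup of Sections 7 and 8. For a WLS estimator \(\hat\theta\) minimizing \((s - \sigma(\theta))' V (s - \sigma(\theta))\),
\[
\sqrt{n}\,(\hat\theta - \theta_0) \xrightarrow{\;d\;} N\!\left(0,\; (\Delta' V \Delta)^{-1} \Delta' V \,\Gamma\, V \Delta\, (\Delta' V \Delta)^{-1}\right), \qquad \Delta = \frac{\partial \sigma(\theta)}{\partial \theta'},
\]
where Γ is the asymptotic covariance matrix of \(\sqrt{n}\,(s - \sigma_0)\). Under normality of z, Γ reduces to the NT form
\[
\Gamma_{\mathrm{NT}} = 2\, D^{+} (\Sigma \otimes \Sigma)\, D^{+\prime},
\]
with \(D^{+}\) the Moore–Penrose inverse of the duplication matrix. In this notation, asymptotic robustness means that, under the independence conditions referred to above, substituting the NT form for Γ in the first display still yields correct asymptotic variances for the parameters covered by Theorem 1, even when z is non-normal.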
We have seen that such computations can be carried out correctly using the wrong assumptions with respect to the distribution of the vector of observable variables, provided some additional model conditions hold. Roughly speaking, such additional model conditions amount to strengthening the usual assumption of uncorrelation among some random constituents of the model to the assumption of stochastic independence.
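To illustrate the distinction numerically, here is a small simulation sketch (in Python; the data-generating values are hypothetical and not taken from any study cited above). The disturbance of a simple regression is constructed to have conditional mean zero given the regressor, hence uncorrelated with it, while its variance depends on the regressor, so independence fails. The classical normal-theory standard error of the OLS slope then misstates the true sampling variability, whereas a heteroskedasticity-consistent (‘sandwich’) standard error, which plays a role analogous to using a distribution-free form of Γ, tracks it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 400, 2000, 1.0
slope_est, classical_se, sandwich_se = [], [], []

for _ in range(reps):
    x = rng.normal(size=n)
    # Disturbance: conditional mean zero given x (hence uncorrelated with x),
    # but conditional variance 0.5 + x**2 (hence NOT independent of x).
    e = rng.normal(size=n) * np.sqrt(0.5 + x**2)
    y = beta * x + e

    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ (X.T @ y)
    resid = y - X @ b

    # Classical (normal-theory, homoskedastic) covariance of b.
    s2 = resid @ resid / (n - X.shape[1])
    V_nt = s2 * XtX_inv
    # Heteroskedasticity-consistent "sandwich" (HC0) covariance of b.
    meat = X.T @ (X * (resid**2)[:, None])
    V_hc = XtX_inv @ meat @ XtX_inv

    slope_est.append(b[1])
    classical_se.append(np.sqrt(V_nt[1, 1]))
    sandwich_se.append(np.sqrt(V_hc[1, 1]))

print("empirical sd of slope estimates:", np.std(slope_est))
print("mean normal-theory SE          :", np.mean(classical_se))
print("mean sandwich (HC0) SE         :", np.mean(sandwich_se))
# The sandwich SE approximates the empirical sd; the normal-theory SE does not.
```

In this sketch the sandwich standard error is the single-equation counterpart of a distribution-free Γ; if the disturbance were instead fully independent of the regressor, the normal-theory and sandwich standard errors would agree asymptotically, in line with the asymptotic robustness discussion above.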