Abstract

We investigate the design of experiments for the nonparametric estimation of the root of an unknown regression function. Approaches to this problem include the Robbins-Monro (1951), Venter (1967), and Lai-Robbins (1979, 1981) stochastic approximation procedures and Wu's (1985, 1986) sequential maximum likelihood estimators. Because the regression function is not assumed to belong to a parametric family, only experimentation near the root is informative, and the sequential design should converge to the root. After the sequential design has been generated, there are at least two distinct methods for estimating the root: (a) estimate the root by the last design point, or (b) fit a parametric model. Except for Ruppert (1988) and simulation studies by Bodt (1985) and Bodt and Tingey (1990), all studies known to us of stochastic approximation procedures used the last design point as the estimator. Wu (1985, 1986) fit a generalized linear model. When the last design point is the estimator, then clearly the design should converge to the root as rapidly as possible. Wu also used designs converging as rapidly as possible. The main new idea in this article is that, when one fits a parametric model, it is not necessary for the design to converge rapidly to the root, and there is an important advantage to slow convergence: it improves the precision with which one can estimate the derivative of the regression function at the root. This derivative must be estimated, at least implicitly, to estimate the root efficiently, and the derivative is itself an important scale measure. Using techniques from Wei (1985), it is relatively easy to establish the asymptotic distribution of the least squares estimator of the root when one fits a linear model. We establish sufficient conditions for the least squares estimate of the root to be asymptotically efficient. We also show that sequential designs satisfying these conditions can be generated by the Robbins-Monro, Lai-Robbins, and Venter stochastic approximation procedures. The latter is especially convenient, since it allows the design to converge to the root at an easily controlled rate. Besides fitting a linear model by least squares, we discuss fitting generalized linear models by a one-step approximate maximum likelihood algorithm. These results differ from those of Wu (1985, 1986) in that a much wider class of designs is considered.
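The sketch below illustrates, under purely hypothetical assumptions, the two estimation strategies contrasted in the abstract: a Robbins-Monro recursion generates the sequential design, after which the root is estimated either by the last design point or by the fitted root of a least squares linear model. The regression function, noise level, gain sequence, and the choice of a slower gain exponent to spread the design around the root are illustrative assumptions, not details taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression function with unknown root theta = 1.5;
# only noisy responses m(x) + error are observable.
def noisy_response(x, sigma=0.5):
    m = 2.0 * (x - 1.5) + 0.3 * (x - 1.5) ** 2  # m(1.5) = 0
    return m + sigma * rng.standard_normal()

# Robbins-Monro recursion: x_{n+1} = x_n - a_n * y_n with gain a_n = c / n**gamma.
# Taking gamma < 1 slows convergence, spreading the design points around the
# root -- the idea the abstract argues helps estimate the derivative there.
def robbins_monro(x0=0.0, n=200, c=1.0, gamma=1.0):
    xs, ys = [], []
    x = x0
    for i in range(1, n + 1):
        y = noisy_response(x)
        xs.append(x)
        ys.append(y)
        x = x - (c / i ** gamma) * y
    return np.array(xs), np.array(ys)

xs, ys = robbins_monro(gamma=0.6)  # deliberately slow convergence

# Estimator (a): the last design point.
theta_last = xs[-1]

# Estimator (b): fit y = beta0 + beta1 * x by least squares and take the
# fitted root -beta0 / beta1; beta1 also estimates the derivative at the root.
beta1, beta0 = np.polyfit(xs, ys, 1)
theta_ls = -beta0 / beta1

print(f"last design point : {theta_last:.3f}")
print(f"least-squares root: {theta_ls:.3f}  (slope estimate {beta1:.3f})")
```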
