Abstract

Here $\theta_1, \theta_2, \ldots, \theta_p$ represent the $p$ parameters of the model. The advent of the high-speed digital computer has made the use of any one of a number of iterative algorithms practical. The Gauss-Newton method, the modified Gauss-Newton method due to Hartley, steepest descent, and the Marquardt compromise are perhaps the best known. These methods are discussed in detail in references [1], [2], [3], and [4]. In each of these iterative procedures, one must first supply a starting guess for the entire parameter vector $(\theta_1, \theta_2, \ldots, \theta_p)$. A correction vector is then derived and applied to this initial guess in order to produce an improved estimate of the parameter vector. This process is continued until the correction vector becomes sufficiently small. Under a suitable set of conditions, one can show that this sequence of corrected estimates converges to the least squares estimates of the $p$ parameters. That is, the parameter vector converges to $(\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_p)$, the vector of values which minimizes the residual sum of squares $\sum_{i=1}^{n}\bigl[y_i - f(x_i;\, \theta_1, \ldots, \theta_p)\bigr]^2$.
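To make the iteration concrete, the following is a minimal sketch of one such procedure, the Gauss-Newton method, written in Python with NumPy. The function names `gauss_newton`, `f`, and `jac`, their signatures, and the exponential-decay example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gauss_newton(f, jac, x, y, theta0, tol=1e-8, max_iter=100):
    """Gauss-Newton iteration for nonlinear least squares (illustrative sketch).

    f(x, theta)   -> vector of model predictions      (assumed signature)
    jac(x, theta) -> n x p Jacobian of f w.r.t. theta (assumed signature)
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = y - f(x, theta)              # residuals at the current estimate
        J = jac(x, theta)                # n x p Jacobian matrix
        # Correction vector: least squares solution of J @ delta ~ r,
        # equivalent to solving the normal equations (J^T J) delta = J^T r.
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta + delta            # apply the correction
        if np.linalg.norm(delta) < tol:  # stop when the correction is small
            break
    return theta

# Example: fit the two-parameter model y = theta_1 * exp(-theta_2 * x).
f = lambda x, th: th[0] * np.exp(-th[1] * x)
jac = lambda x, th: np.column_stack((np.exp(-th[1] * x),
                                     -th[0] * x * np.exp(-th[1] * x)))
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50)
y = f(x, [2.0, 1.3]) + 0.01 * rng.normal(size=x.size)
theta_hat = gauss_newton(f, jac, x, y, theta0=[1.0, 1.0])
```

The Marquardt compromise mentioned above modifies the same correction step, solving $(J^\top J + \lambda I)\,\delta = J^\top r$ so that the update interpolates between the Gauss-Newton step and steepest descent.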
