This paper examines key elements in computing the nonlinear full-information maximum-likelihood (NLFIML) estimator that produce substantial reductions (80 percent or more) in computational cost. It considers (i) the choice of optimization algorithm, (ii) the method of Hessian approximation, (iii) the choice of stopping criterion, and (iv) the exploitation of sparsity. We find that the Newton-Raphson algorithm employing an analytically computed Hessian is computationally much more efficient (up to 75 percent) in this context than its oft-employed competitors, such as DFP. Additional gains (up to 30 percent) result from using a weighted-gradient stopping criterion, and exploitation of matrix sparsity adds further gains.
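To make the comparison concrete, the sketch below shows a Newton-Raphson iteration that uses an analytic gradient and Hessian together with a weighted-gradient stopping rule of the form g'H⁻¹g. This is an illustrative sketch only, not the paper's implementation; the function name `newton_raphson`, the tolerance value, and the toy normal-likelihood example are assumptions introduced here for exposition.

```python
import numpy as np

def newton_raphson(grad, hess, theta0, tol=1e-8, max_iter=100):
    """Maximize a log-likelihood by Newton-Raphson with analytic derivatives.

    Stops when the weighted-gradient quantity g' H^{-1} g falls below tol,
    which scales the gradient by the curvature rather than relying on a raw
    gradient norm or a parameter-change criterion.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad(theta)               # analytic gradient (score)
        H = hess(theta)               # analytic Hessian
        step = np.linalg.solve(H, g)  # Newton direction: H^{-1} g
        # Weighted-gradient stopping criterion (in absolute value)
        if abs(g @ step) < tol:
            break
        theta = theta - step          # Newton-Raphson update
    return theta

# Toy usage (hypothetical): maximize the N(mu, 1) log-likelihood over mu.
data = np.array([1.2, 0.8, 1.5, 0.9])
g = lambda m: np.array([np.sum(data - m)])   # score for mu
h = lambda m: np.array([[-len(data)]])       # Hessian for mu
mu_hat = newton_raphson(g, h, theta0=[0.0])  # converges to data.mean()
```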