$$g_i(x_1^*, \ldots, x_n^*) = 0, \qquad i = 1, \ldots, p < n, \tag{2}$$
$$g_i(x_1^*, \ldots, x_n^*) \ge 0, \qquad i = p + 1, \ldots, m.$$

For an unconstrained function with continuous derivatives, a minimum occurs at a point where the partial derivatives of the function with respect to the independent variables are zero and its matrix of second derivatives is positive definite. A similar criterion can be stated for a constrained function by using Lagrange multipliers. We note that for a function with n independent variables, this approach requires the solution of at least n nonlinear equations and is thus not practical.

The purpose of this paper is to review the existing literature on minimization techniques. While it would be impossible to include all variations, the author has endeavored to include the basic methods and some of the more significant modifications. The emphasis will be on those methods which are primarily useful when a mathematical expression for the function can be obtained. The practical difficulties associated with the experimental and on-line use of these methods will not be investigated. To avoid confusion, all methods will be discussed in terms of locating a minimum; however, they can be applied to finding the maximum of a function with only minor changes.

A minimization procedure for a digital computer provides an algorithm by which the function is tested at a set of points. These points then provide information about the function and the location of its minimum. How the test points are chosen divides these procedures into two very general classes: sequential and nonsequential. In a sequential procedure the test points are determined by a fixed set of operations: the values of the independent variables at each new test point are completely determined by the previous measurements. In the first part of this paper we shall discuss in some detail those methods which in some way depend on measurements of the slope of the function to determine the next test point. In Part II, we discuss those methods which do not require such measurements.
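For concreteness, the criteria referred to above may be written out explicitly; the following is a sketch in standard notation, where $f$ denotes the function being minimized and the $\lambda_i$ are the Lagrange multipliers (both symbols are introduced here for illustration and do not appear in the passage itself). For the unconstrained case,

$$\frac{\partial f}{\partial x_j}(x_1^*, \ldots, x_n^*) = 0, \quad j = 1, \ldots, n, \qquad \left[\frac{\partial^2 f}{\partial x_j \, \partial x_k}\right]_{x^*} \text{ positive definite,}$$

while for the constrained case, stationarity of $f + \sum_i \lambda_i g_i$ requires

$$\frac{\partial f}{\partial x_j} + \sum_{i=1}^{m} \lambda_i \frac{\partial g_i}{\partial x_j} = 0, \qquad j = 1, \ldots, n,$$

which, taken together with the constraints (2), is a system of at least $n$ nonlinear equations in the unknowns $x_j^*$ and $\lambda_i$; this is the source of the impracticality noted above.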
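To make the notion of a sequential, slope-based procedure concrete, here is a minimal sketch (in Python, used purely for illustration; the test function, step size, and tolerance are invented for the example and no such program appears in the paper) in which each new test point is computed by a fixed rule from the slope measured at the previous point:

```python
import numpy as np

def sequential_slope_search(f, grad, x0, step=0.1, tol=1e-6, max_iter=1000):
    """Minimal sketch of a sequential minimization procedure.

    Each test point is computed from a measurement (here, the gradient)
    taken at the previous test point, so the sequence of points is
    completely determined by the previous measurements.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)                  # slope measurement at the current test point
        if np.linalg.norm(g) < tol:  # gradient near zero: first-order condition met
            break
        x = x - step * g             # fixed rule: step along the negative gradient
    return x, f(x)

# Example: minimize f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2
f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])
x_min, f_min = sequential_slope_search(f, grad, x0=[0.0, 0.0])
print(x_min, f_min)  # approaches (1, -0.5), where the minimum value is 0
```

The fixed rule here is a constant-step move along the negative gradient; the methods surveyed in Part I differ chiefly in how such a rule is constructed from the slope measurements.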