SUMMARY We examine a simple noniterative estimator for the parameters of a general moving average process. This non-maximum-likelihood estimator derives moving average model parameters directly from the coefficients of an approximating autoregressive model. The estimator is evaluated through asymptotic expansions and by simulation, and is also compared with maximum likelihood and the related estimator of Durbin (1959). The comparison with maximum likelihood by simulation suggests a variety of circumstances in which the simpler estimator may be appropriate despite the advantage of maximum likelihood for properly specified low-order models.

The difficulties involved in estimating a stationary autoregressive-moving average (ARMA) process with a nontrivial moving average part are well known. They include problems arising from nonlinearity, problems associated with nonuniqueness of the density function, and difficulties in deriving exact finite-sample properties. In addition, computational considerations which may be minor difficulties for a single estimation can become prominent in Monte Carlo experiments used, for example, to investigate finite-sample performance. In this paper we advance a simple estimator which may alleviate some of these difficulties, and we offer a comparison of this estimator with maximum likelihood.

Most algorithms for the estimation of moving-average models are based on nonlinear least squares, or on numerical optimization of the exact or approximate likelihood function for the sample. Important contributions to the latter class include those of Osborn (1977), Godolphin (1977) and Ansley (1979). Box & Jenkins (1976) describe approximate maximum likelihood methods; see Fuller (1976, Ch. 8) for a review of nonlinear least squares methods. Hannan & Rissanen (1982) and Koreisha & Pukkila (1990) discuss estimation using a sequence of long autoregressions. These methods may suffer from a number of drawbacks.
In general they require iteration, implying the potential for slow convergence or even nonconvergence. Results may be sensitive to starting values. Some methods are prone to generate noninvertible estimates or, where constrained to invertibility, estimates clustered near the boundary of the invertibility region (Ansley & Newbold, 1980). Finally, these algorithms tend to be computationally burdensome.

The new, non-maximum-likelihood estimator is related to the estimator proposed by Durbin (1959). Like Durbin's, it involves approximating a moving average process by an autoregressive model, and using the pattern of autoregressive coefficients to deduce estimates of the parameters of the underlying process. The resulting estimator is simple and
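The two-step idea just described can be illustrated in the simplest case. The following sketch is a hedged illustration of a Durbin-type procedure for an MA(1) process, not the paper's exact estimator: a long autoregression is fitted by ordinary least squares, and the moving average parameter is then recovered by a second regression on the estimated autoregressive coefficients. The sample size, lag length, and parameter value are illustrative choices.

```python
import numpy as np

# Hedged sketch of Durbin-type two-step estimation for an MA(1):
#   y_t = e_t + theta * e_{t-1}.
# All numerical settings (n, k, theta_true) are illustrative assumptions.
rng = np.random.default_rng(0)

theta_true = 0.5
n = 5000
e = rng.standard_normal(n + 1)
y = e[1:] + theta_true * e[:-1]          # simulated invertible MA(1)

# Step 1: fit a long AR(k) to the data by ordinary least squares.
k = 20
X = np.column_stack([y[k - j - 1 : n - j - 1] for j in range(k)])
Y = y[k:]
phi = np.linalg.lstsq(X, Y, rcond=None)[0]   # AR coefficients phi_1, ..., phi_k

# Step 2: for an MA(1), the AR(infinity) coefficients satisfy
#   c_j + theta * c_{j-1} = 0 for j >= 1, where c_0 = 1 and c_j = -phi_j,
# so theta can be estimated by regressing c_j on -c_{j-1}.
c = np.concatenate(([1.0], -phi))
theta_hat = -np.sum(c[1:] * c[:-1]) / np.sum(c[:-1] ** 2)
print(theta_hat)
```

The second step requires no iteration: both stages are closed-form least squares calculations, which is the source of the computational simplicity emphasized above.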