Abstract

In stochastic optimization and identification problems (Ermoliev and Wets 1988; Pflug 1996), it is not always possible to find the explicit extremum of the expectation of some random function. One of the methods for solving this problem is the method of empirical means, which consists in approximating the original cost function by its empirical estimate, for which the corresponding optimization problem can be solved. Moreover, many problems in mathematical statistics (for example, estimation of unknown parameters by the least squares, least moduli, or maximum likelihood methods) can be formulated as special stochastic programming problems with specific constraints on the unknown parameters, which underscores the close relation between stochastic programming and estimation theory. In such problems the distributions of the random variables or processes are often unknown, but their realizations are available. Therefore, one approach to solving such problems consists in replacing the unknown distributions with empirical distributions, and the corresponding mathematical expectations with their empirical means. The difficulty lies in finding conditions under which the approximating problem converges, in some probabilistic sense, to the initial one. We discussed this briefly in Sect. 2.1. Convergence conditions depend essentially on the cost function, the probabilistic properties of the random observations, the metric properties of the space in which convergence is investigated, a priori constraints on the unknown parameters, etc. In the terminology of statistical decision theory, the problems above are closely related to the asymptotic properties of estimates of unknown parameters, i.e., their consistency, asymptotic distribution, rate of convergence, etc.
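As a minimal sketch of this setup (the notation $F$, $F_n$, and $x_n^*$ is ours, not fixed by the text), the original problem

  \min_{x \in X} F(x), \qquad F(x) = \mathbb{E}\, f(x, \omega),

is approximated by the empirical-means problem

  \min_{x \in X} F_n(x), \qquad F_n(x) = \frac{1}{n} \sum_{i=1}^{n} f(x, \omega_i),

where $\omega_1, \dots, \omega_n$ are the observed realizations. The convergence question is then whether the minimizers $x_n^*$ of $F_n$ approach the minimizer of $F$ (e.g., almost surely or in probability) as $n \to \infty$.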
