Abstract
We present an algorithm for calculating a $\Gamma$-minimax decision rule, when the class $\Gamma$ is given by a finite number of generalized moment conditions. Such a decision rule minimizes the maximum of the integrals of the risk function with respect to all distributions in $\Gamma$. The inner maximization problem is approximated by a sequence of linear programs. This approximation is combined with an elimination technique which quickly reduces the domain of the variables of the outer minimization problem. To test for convergence in a final step, the inner maximization problem has to be completely solved once for the candidate $\Gamma$-minimax rule found by the algorithm. For an infinite, compact parameter space, this is done by semi-infinite programming. The algorithm is applied to calculate robustified Bayesian designs in a logistic regression model and $\Gamma$-minimax tests in monotone decision problems.
Highlights
Let us consider a class of statistical decision rules for a parameter θ which varies in a σ-compact subset of a Euclidean parameter space.
Assume that each decision rule can be represented by a pair $(k, y)$.
The risk function $R((k, y), \theta)$ of the decision rule $(k, y)$, given θ, can be obtained in the usual way; it describes the average loss associated with $(k, y)$ if θ is the true value of the parameter.
Summary
Let us consider a class of statistical decision rules for a parameter θ which varies in a σ-compact subset of a Euclidean parameter space. Assume that each decision rule can be represented by a pair $(k, y)$. The risk function $R((k, y), \theta)$ of the decision rule $(k, y)$, given θ, can be obtained in the usual way; it describes the average loss associated with $(k, y)$ if θ is the true value of the parameter. Let $\Gamma$ be a class of probability measures on the parameter space. For each distribution $\pi \in \Gamma$, the Bayes risk of $(k, y)$ with respect to $\pi$ is $r((k, y), \pi) = \int R((k, y), \theta)\, d\pi(\theta)$.
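The setting above can be illustrated with a toy sketch. We restrict θ to a finite grid and impose a single moment condition $E_\pi[g(\theta)] = m$; the risk function, moment function, constraint value, and grids below are illustrative choices, not taken from the paper. With one moment condition plus the normalization constraint, an optimal prior is supported on at most two grid points, so the inner maximization can be solved here by exhaustive search over point pairs (the paper instead approximates it by a sequence of linear programs and refines the outer minimization by an elimination technique).

```python
# Toy Gamma-minimax computation on finite grids for one moment condition
# E_pi[g(theta)] = m. All concrete choices below are illustrative.

def inner_max(risk, g, m, grid, tol=1e-9):
    """Maximize sum_i p_i * risk(theta_i) over priors on `grid` subject to
    sum_i p_i = 1 and sum_i p_i * g(theta_i) = m (two-point support suffices)."""
    best = float("-inf")
    for i, ti in enumerate(grid):
        for tj in grid[i:]:
            gi, gj = g(ti), g(tj)
            if abs(gi - gj) < tol:          # degenerate pair: equal moments
                if abs(gi - m) > tol:
                    continue
                p = 1.0                     # all mass on ti meets the constraint
            else:
                p = (m - gj) / (gi - gj)    # mixing weight solving the constraint
                if not 0.0 <= p <= 1.0:
                    continue
            best = max(best, p * risk(ti) + (1 - p) * risk(tj))
    return best

def gamma_minimax(decisions, risk_fn, g, m, grid):
    """Minimize the worst-case Bayes risk over a finite set of decisions."""
    worst = {d: inner_max(lambda th: risk_fn(d, th), g, m, grid) for d in decisions}
    d_star = min(worst, key=worst.get)
    return d_star, worst[d_star]

# Illustrative example: estimate theta in [0, 1] under squared-error risk,
# with the prior mean constrained to 0.5. The worst-case prior puts mass
# 1/2 on each endpoint of the grid.
grid = [i / 10 for i in range(11)]
d_star, value = gamma_minimax(grid, lambda d, th: (d - th) ** 2,
                              lambda th: th, 0.5, grid)
```

In this example the worst-case Bayes risk of a decision $d$ is $d^2 - d + 0.5$, so the sketch returns the $\Gamma$-minimax estimate $d = 0.5$ with value $0.25$, matching the closed-form calculation.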