Summary

Statistical decision theory can sometimes be used to find, via a least favourable prior distribution, a statistical procedure that attains the minimax risk. The same theory also provides, via an ‘unfavourable prior distribution’, a very useful lower bound on the minimax risk. In the late 1980s, Kempthorne showed how, using a least favourable prior distribution, a specified integrated risk can sometimes be minimised subject to an inequality constraint on a different risk. Specifically, he was concerned with the solution of a minimax–Bayes compromise problem (‘compromise decision theory’). Using an unfavourable prior distribution, Kabaila & Tuck () provided a very useful lower bound on an integrated risk, subject to an inequality constraint on a different risk. We extend this result to the case of multiple inequality constraints on specified risk functions and integrated risks. We also describe a new and very effective method for the computation of an unfavourable prior distribution that leads to a very useful lower bound: simply maximise the lower bound directly with respect to the unfavourable prior distribution. Not only does this method result in a relatively tight lower bound, it is also fast because it avoids the repeated computation of the global maximum of a function with multiple local maxima. The advantages of this computational method are illustrated using the problems of bounding the performance of a point estimator of (i) the multivariate normal mean and (ii) the univariate normal mean.
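The core idea of maximising a lower bound directly over an unfavourable prior can be illustrated with a minimal sketch for the simplest case mentioned above, the univariate normal mean. This is not the authors' algorithm; it is a generic illustration under stated assumptions: X ~ N(θ, 1) with θ restricted to a bounded interval, squared-error loss, a discrete prior on a fixed support grid, and simple quadrature over X. For any prior π, the Bayes risk r(π) is a valid lower bound on the minimax risk, so maximising r(π) over the prior weights yields the tightest bound within this family. All grid sizes, the support points and the optimiser choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def bayes_risk(weights, support, x_grid):
    """Bayes risk (expected posterior variance) under squared-error loss
    of the discrete prior placing `weights` on the points `support`."""
    # Likelihood matrix f(x | theta_j) on the quadrature grid for X.
    lik = np.exp(-0.5 * (x_grid[:, None] - support[None, :]) ** 2) / np.sqrt(2 * np.pi)
    marg = lik @ weights                      # marginal density of X
    post = lik * weights / marg[:, None]      # posterior over support points
    post_mean = post @ support
    post_var = post @ support**2 - post_mean**2
    dx = x_grid[1] - x_grid[0]
    return np.sum(post_var * marg) * dx       # E_X[ Var(theta | X) ]

m = 2.0                                       # assumed bound: theta in [-m, m]
support = np.linspace(-m, m, 7)               # illustrative support grid
x_grid = np.linspace(-m - 6.0, m + 6.0, 801)  # illustrative quadrature grid

def neg_risk(z):
    # Softmax parametrisation keeps the weights on the probability simplex.
    w = np.exp(z) / np.sum(np.exp(z))
    return -bayes_risk(w, support, x_grid)

# Maximise the lower bound directly with respect to the prior weights.
res = minimize(neg_risk, np.zeros(len(support)), method="Nelder-Mead")
w_star = np.exp(res.x) / np.sum(np.exp(res.x))
lower_bound = -res.fun                        # lower bound on the minimax risk
```

Any feasible weight vector already gives a valid bound here, so the optimisation only needs to make the bound tight, not certify a global maximum; this mirrors the speed advantage described above, since no repeated global maximisation over θ is required.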