Abstract

We study the classical ranking and selection problem, where the ultimate goal is to find the unknown best alternative in terms of the probability of correct selection or expected opportunity cost. This paper, however, adopts an alternative sampling approach to achieve this goal: sampling decisions are made with the objective of maximizing information about the unknown best alternative, or equivalently, minimizing its Shannon entropy. This adaptive learning is formulated as a Bayesian stochastic dynamic program, from which several properties of the learning problem are derived, including the monotonicity of the optimal value function in an information-seeking setting. Since the state space of the stochastic dynamic program is unbounded in the Gaussian setting, a one-step look-ahead approach is used to develop a policy. The proposed policy seeks to maximize the one-step information gain about the unknown best alternative and is therefore called the information gradient (IG) policy. The IG policy is also proved to be consistent: as the sampling budget grows to infinity, it identifies the true best alternative almost surely. A computationally efficient approximation of the proposed policy, called the approximated information gradient (AIG) policy, is then introduced, and its performance is tested against recent benchmarks in numerical experiments, alongside several sensitivity analyses. Results show that AIG performs competitively against other algorithms from the literature.
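To make the abstract's central quantity concrete, the sketch below estimates the Shannon entropy of the posterior distribution over which alternative is best, and the expected one-step entropy reduction (information gain) from sampling each alternative once. This is a minimal illustration only, assuming independent Gaussian posteriors with known sampling noise; the function names, parameters, and the nested Monte Carlo estimator are our own illustrative choices, not the paper's actual IG/AIG computations.

```python
import numpy as np

def best_arm_entropy(mu, sigma, n_samples=10_000, rng=None):
    """Shannon entropy (nats) of the posterior distribution of the best alternative.

    mu, sigma: posterior means and std devs of each alternative (Gaussian).
    Estimated by Monte Carlo: draw from each posterior, count argmax wins.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    draws = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    wins = np.bincount(draws.argmax(axis=1), minlength=len(mu)) / n_samples
    p = wins[wins > 0]
    return -(p * np.log(p)).sum()

def one_step_information_gain(mu, sigma, tau, n_outer=200, rng=None):
    """Expected entropy reduction from one more sample of each alternative.

    tau: known observation-noise std dev (an assumption of this sketch).
    For alternative i, a hypothetical observation y ~ N(mu[i],
    sigma[i]^2 + tau^2) triggers a conjugate Gaussian posterior update;
    the gain is the expected drop in best-arm entropy, averaged over
    simulated observations.
    """
    if rng is None:
        rng = np.random.default_rng(1)
    h0 = best_arm_entropy(mu, sigma, rng=rng)
    gains = np.zeros(len(mu))
    for i in range(len(mu)):
        post_var = 1.0 / (1.0 / sigma[i] ** 2 + 1.0 / tau ** 2)
        h_new = 0.0
        for _ in range(n_outer):
            y = rng.normal(mu[i], np.sqrt(sigma[i] ** 2 + tau ** 2))
            mu_new = mu.copy()
            mu_new[i] = post_var * (mu[i] / sigma[i] ** 2 + y / tau ** 2)
            sig_new = sigma.copy()
            sig_new[i] = np.sqrt(post_var)
            h_new += best_arm_entropy(mu_new, sig_new, n_samples=2_000, rng=rng)
        gains[i] = h0 - h_new / n_outer
    return gains

# A one-step look-ahead rule in this spirit samples the alternative
# with the largest estimated information gain.
mu = np.array([0.0, 0.2, 0.5])
sigma = np.array([1.0, 1.0, 1.0])
print(one_step_information_gain(mu, sigma, tau=1.0).argmax())
```

Note that the nested Monte Carlo loop makes this naive estimator expensive, which is consistent with the abstract's motivation for a cheaper approximation (AIG); how the paper actually approximates the gain is not specified in the abstract.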