This research introduces a new heuristic decision model, the myopic value of perfect information (myopic-VPI), to study multiarmed bandit (MAB) problems. The myopic-VPI approach involves only ranking the alternatives and computing a one-dimensional integral to obtain the expected future value of exploration. Because myopic-VPI is intuitive and does not require solving a dynamic programming problem, it has the potential to serve as a useful heuristic for modeling exploration-exploitation tradeoffs. We conduct a series of simulation experiments to study its performance relative to other heuristics under a wide range of parameterizations. We find that myopic-VPI provides significant savings in computational time and competitive, though not the strongest, performance in accumulated utility relative to other forward-looking heuristics; this suggests that it is a useful “fast-and-frugal” heuristic. Our simulation experiments also reveal the conditions under which myopic-VPI outperforms or underperforms other heuristics. An empirical application in the diaper category further shows that myopic-VPI substantially reduces estimation time while fitting the data on par with the index and near-optimal heuristics, suggesting that myopic-VPI could be a useful addition to the researcher’s or practitioner’s toolkit for MAB problems. This paper was accepted by Gui Liberali, marketing. Supplemental Material: The online appendices are available at https://doi.org/10.1287/mnsc.2019.00578.
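The abstract describes the heuristic only in words (rank the arms, then compute a one-dimensional integral giving each arm's expected value of exploration). The sketch below illustrates that general idea for a Gaussian-belief bandit; the Gaussian assumption, the closed-form integral, and all function names are illustrative assumptions, not the paper's exact specification.

```python
# Hedged sketch of a myopic-VPI arm-selection rule for a Gaussian multiarmed
# bandit: rank arms by posterior mean, then add a per-arm exploration bonus
# equal to the expected value of perfect information (a 1-D integral, which
# has a closed form under Gaussian beliefs). Illustrative only.
import numpy as np
from scipy.stats import norm


def myopic_vpi(means, stds):
    """Value of perfect information for each arm.

    means, stds: posterior means and standard deviations of each arm's
    expected reward (Gaussian beliefs assumed for illustration).
    """
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    order = np.argsort(means)[::-1]      # rank arms by posterior mean
    best, second = order[0], order[1]
    vpi = np.empty_like(means)
    for i, (m, s) in enumerate(zip(means, stds)):
        if i == best:
            # Information about the current best arm matters only if it turns
            # out worse than the runner-up: E[max(m_second - mu_i, 0)].
            c = means[second]
            z = (c - m) / s
            vpi[i] = (c - m) * norm.cdf(z) + s * norm.pdf(z)
        else:
            # Information about any other arm matters only if it turns out
            # better than the current best: E[max(mu_i - m_best, 0)].
            c = means[best]
            z = (m - c) / s
            vpi[i] = (m - c) * norm.cdf(z) + s * norm.pdf(z)
    return vpi


def choose_arm(means, stds):
    """Pick the arm maximizing immediate value plus the exploration bonus."""
    return int(np.argmax(np.asarray(means, dtype=float) + myopic_vpi(means, stds)))


# Example: the second arm has the highest posterior mean, but the third arm's
# high uncertainty makes it worth exploring under this rule (prints 2).
print(choose_arm(means=[0.4, 0.6, 0.5], stds=[0.05, 0.05, 0.5]))
```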