Rational inattention models characterize optimal decision-making in data-rich environments. In such environments, it can be costly to look carefully at all of the information. Some information is much more salient for the decision at hand and merits closer scrutiny. The rational inattention model formalizes this choice and prescribes how best to navigate the potentially vast array of available data. In the rational formulation, the decision-maker commits fully to a subjective prior distribution over the possible states of the world. We relax this assumption and look for a robustly optimal solution to the inattention problem by allowing the decision-maker to be ambiguity averse with respect to this prior. We keep the setup deliberately simple by a) assuming a discrete set of choices, b) using Shannon's mutual information to quantify attention costs, and c) using relative entropy with respect to a baseline probability distribution to quantify prior divergence. We provide necessary and sufficient conditions for the robust solution and develop numerical methods to compute it. In comparison to the rational solution with no prior uncertainty, our decision-maker slants the prior in more cautious or pessimistic directions when deciding how to allocate attention over the range of available information. This approach implements a form of robustness to prior misspecification or, equivalently, a form of ambiguity aversion. We explore examples that show how the robust solution differs from the rational solution with full commitment to a subjective prior distribution and how it differs from imposing risk aversion.
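As a concrete illustration of the ingredients described above, the following is a minimal numerical sketch, not the paper's algorithm or characterization: it assumes a multiplier (penalty) formulation in which `lam` prices Shannon mutual information and `theta` penalizes relative-entropy deviations of the prior from the baseline `mu_hat`. The alternating scheme, the function names, and the example numbers are all illustrative assumptions.

```python
import numpy as np

def rational_inattention(u, mu, lam, tol=1e-10, max_iter=10_000):
    """Blahut-Arimoto-style fixed point for the Shannon rational-inattention
    problem max E[u(a, x)] - lam * I(X; A), taking the prior mu over states
    as given. u has shape (n_states, n_actions). Returns p(a|x) and p(a)."""
    n_states, n_actions = u.shape
    p_a = np.full(n_actions, 1.0 / n_actions)            # initial action marginal
    for _ in range(max_iter):
        # Optimal conditional choice probabilities given the current marginal:
        # p(a|x) proportional to p(a) * exp(u(a, x) / lam)
        z = u / lam + np.log(p_a)[None, :]
        z -= z.max(axis=1, keepdims=True)                # numerical stability
        p_cond = np.exp(z)
        p_cond /= p_cond.sum(axis=1, keepdims=True)
        # Update the action marginal under the prior
        p_a_new = mu @ p_cond
        if np.max(np.abs(p_a_new - p_a)) < tol:
            return p_cond, p_a_new
        p_a = p_a_new
    return p_cond, p_a

def state_values(u, p_a, lam):
    """Value of the Shannon problem conditional on each state,
    V(x) = lam * log sum_a p(a) * exp(u(a, x) / lam)."""
    z = u / lam + np.log(p_a)[None, :]
    m = z.max(axis=1)
    return lam * (m + np.log(np.exp(z - m[:, None]).sum(axis=1)))

def robust_inattention(u, mu_hat, lam, theta, n_outer=500, tol=1e-10):
    """Heuristic alternating scheme (an assumption, not the paper's method):
    solve the inattention problem at the current prior, then tilt the prior
    pessimistically toward low-value states, mu(x) proportional to
    mu_hat(x) * exp(-V(x) / theta), which minimizes
    E_mu[V] + theta * KL(mu || mu_hat)."""
    mu = np.asarray(mu_hat, dtype=float).copy()
    for _ in range(n_outer):
        p_cond, p_a = rational_inattention(u, mu, lam)
        v = state_values(u, p_a, lam)
        tilt = mu_hat * np.exp(-(v - v.min()) / theta)   # shift by v.min() for stability
        mu_new = tilt / tilt.sum()
        if np.max(np.abs(mu_new - mu)) < tol:
            return p_cond, p_a, mu_new
        mu = mu_new
    return p_cond, p_a, mu

# Hypothetical example: two states, two actions, a mildly asymmetric payoff
# matrix and baseline prior; the worst-case prior shifts weight toward the
# state in which the decision-maker fares worse.
u = np.array([[2.0, 0.0],
              [0.0, 1.0]])
mu_hat = np.array([0.6, 0.4])
p_cond, p_a, worst_prior = robust_inattention(u, mu_hat, lam=0.5, theta=1.0)
print("worst-case prior:", worst_prior)
```

The pessimistic tilt mu(x) proportional to mu_hat(x) * exp(-V(x) / theta) is the standard exponential-tilting minimizer of an expected value plus a relative-entropy penalty; as theta grows large, the tilt vanishes and the sketch reduces to the rational solution under full commitment to the baseline prior.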