Abstract
We study whether some of the most important models of decision-making under uncertainty are uniformly learnable, in the sense of PAC (probably approximately correct) learnability. Many studies in economics rely on Savage's model of (subjective) expected utility. The expected utility model is known to predict behavior that runs counter to how many agents actually make decisions (the contradiction is usually illustrated by agents' choices in the Ellsberg paradox). As a consequence, economists have developed models of choice under uncertainty that seek to generalize the basic expected utility model. The resulting models are more general and therefore more flexible, but also more prone to overfitting. The purpose of our paper is to understand this added flexibility better. We focus on the classical expected utility (EU) model and its two most important generalizations: Choquet expected utility (CEU) and max-min expected utility (MEU).

Our setting involves an analyst whose task is to estimate or learn an agent's preference based on data available on the agent's choices. A model of preferences is PAC learnable if the analyst can construct a learning rule that learns the agent's preference to arbitrary precision given enough data. When a model is not learnable, we interpret this as the model being susceptible to overfitting. PAC learnability is known to be characterized by the model's VC dimension; thus our paper takes the form of a study of the VC dimension of economic models of choice under uncertainty. We show that EU and CEU have finite VC dimension and are consequently learnable. Moreover, the sample complexity of the former is linear, and of the latter is exponential, in the number of states of uncertainty. The MEU model is learnable when there are two states but is not learnable when there are at least three states, in which case the VC dimension is infinite. Our results also exhibit a close relationship between learnability and the underlying axioms which characterize the model.
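For readers less familiar with the learning-theoretic terminology, the standard definition of PAC learnability from statistical learning theory can be sketched as follows (the notation here is ours, not the paper's):

```latex
% A class of preferences \mathcal{P} is PAC learnable if there exist a
% learning rule A and a sample-complexity function m(\epsilon,\delta)
% such that, for all accuracy/confidence parameters and all sampling
% distributions D over choice data, n observations suffice:
\forall\, \epsilon,\delta \in (0,1),\ \forall\, D:\qquad
n \ge m(\epsilon,\delta)
\;\Longrightarrow\;
\Pr_{S \sim D^{\,n}}\!\bigl[\,\operatorname{err}_{D}\!\bigl(A(S)\bigr) \le \epsilon\,\bigr]
\;\ge\; 1-\delta .
```

The characterization invoked in the abstract is the fundamental theorem of statistical learning: a class is PAC learnable if and only if its VC dimension \(d\) is finite, in which case the sample complexity is of order \(m(\epsilon,\delta) = \Theta\bigl((d + \log(1/\delta))/\epsilon\bigr)\). This is why computing the VC dimension of EU, CEU, and MEU settles their learnability.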