As a schematic model of the complexity economic agents are confronted with, we introduce the "Sherrington-Kirkpatrick game," a discrete-time binary choice model inspired by mean-field spin glasses. We show that, even in a completely static environment, agents are unable to learn collectively optimal strategies. This is either because the learning process gets trapped in a suboptimal fixed point or because learning never converges and leads to a never-ending evolution of agent intentions. Contrary to the hope that learning might save the standard "rational expectations" framework in economics, we argue that complex situations are generically unlearnable and agents must make do with satisficing solutions, as argued long ago by Simon []. Only a centralized, omniscient agent endowed with enormous computing power could determine the optimal strategy of all agents. Using a mix of analytical arguments and numerical simulations, we find that (i) long memory of past rewards is beneficial to learning, whereas overreaction to the recent past is detrimental and leads to cycles or chaos; (ii) increased competition (nonreciprocity) destabilizes fixed points and leads first to chaos and, in the high-competition limit, to quasicycles; (iii) some amount of randomness in the learning process, perhaps paradoxically, allows the system to reach better collective decisions; (iv) nonstationary, "aging" behavior spontaneously emerges in a large swath of the parameter space of our complex but static world. On the positive side, we find that the learning process allows cooperative systems to coordinate around satisficing solutions with rather high (but markedly suboptimal) average reward. However, hypersensitivity to the game parameters makes it impossible to predict who will be better or worse off in our stylized economy. The statistical description of the space of satisficing solutions is an open problem.

Published by the American Physical Society 2024
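For concreteness, a minimal simulation sketch of this kind of learning dynamics is given below. It assumes a standard logit (softmax) choice rule driven by an exponential moving average of past incentives, with random Gaussian couplings of tunable reciprocity; the parameter names (alpha, beta, eta) and the exact update rule are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200       # number of agents (illustrative)
eta = 0.8     # reciprocity: eta=1 fully symmetric couplings, eta=0 fully antisymmetric
alpha = 0.1   # memory loss rate: small alpha = long memory of past rewards
beta = 2.0    # intensity of choice: large beta = nearly deterministic decisions
T = 2000      # number of learning steps

# Gaussian couplings with tunable reciprocity; variance scaled so rewards stay O(1)
A = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)
J_sym = (A + A.T) / np.sqrt(2)
J_asym = (A - A.T) / np.sqrt(2)
J = eta * J_sym + np.sqrt(1.0 - eta**2) * J_asym
np.fill_diagonal(J, 0.0)

Q = np.zeros(N)                   # each agent's running score of past incentives
s = rng.choice([-1.0, 1.0], N)    # initial binary choices

avg_reward = []
for t in range(T):
    field = J @ s                          # incentive each agent currently faces
    Q = (1.0 - alpha) * Q + alpha * field  # exponential moving average of past incentives
    m = np.tanh(beta * Q)                  # "intention": expected action under the logit rule
    s = np.where(rng.random(N) < 0.5 * (1.0 + m), 1.0, -1.0)  # sample binary choices
    avg_reward.append(np.mean(s * (J @ s)))

print(f"late-time average reward per agent: {np.mean(avg_reward[-200:]):.3f}")
```

In this sketch, small alpha corresponds to the long memory the abstract identifies as beneficial, large alpha to overreaction to the recent past, finite beta to randomness in learning, and lowering eta to increased competition (nonreciprocity).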