Abstract

Population learning in dynamic economies has traditionally been studied in contexts where payoff landscapes are smooth. Here, dynamic population games take place over "rugged" landscapes, where agents are uncertain about the payoffs from bilateral interactions. Specifically, individual payoffs from playing a binary action against each other agent are uniformly distributed over [0, 1]. In this random population game, the population adapts over time, with agents updating both their actions and their interaction partners. Agents evaluate the payoffs associated with candidate networks using simple statistics of the distribution of payoffs over all combinations of actions played by agents outside their interaction set. Simulations show that: (1) allowing for endogenous networks yields higher average payoffs than static networks; (2) the statistic used to evaluate payoffs affects convergence to a steady state; and (3) under the MIN or MAX statistic, the likelihood of efficient population learning depends strongly on whether agents are change-averse when discriminating between options that deliver the same expected payoff.
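
To make the setup concrete, below is a minimal, self-contained sketch of one plausible reading of the model, not the authors' implementation: binary actions, bilateral payoffs drawn i.i.d. uniformly on [0, 1], and agents that asynchronously revise their action and, one at a time, their partners, scoring candidate configurations with a MIN, MAX, or MEAN statistic wherever a partner's action is unknown. The population size, interaction-set size, number of rounds, tie-breaking rule for change-averse agents, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 20, 4          # hypothetical population size and interaction-set size
STAT = "min"          # evaluation statistic: "min", "max", or "mean"
CHANGE_AVERSE = True  # keep the status quo unless a candidate is strictly better

# Random payoff table: pi[i, a, b] is agent i's payoff from playing binary
# action a against a partner playing b, drawn i.i.d. uniform on [0, 1].
pi = rng.uniform(size=(N, 2, 2))

actions = rng.integers(0, 2, size=N)
partners = [set(rng.choice([j for j in range(N) if j != i], K, replace=False))
            for i in range(N)]

stat = {"min": min, "max": max, "mean": lambda v: sum(v) / len(v)}[STAT]

def evaluate(i, a, pset, current):
    """Score a candidate (action a, partner set pset) for agent i.

    Actions of current partners are observed; a prospective partner's action
    is unknown, so agent i summarizes the payoffs against that partner's two
    possible actions with the chosen statistic (MIN, MAX, or MEAN)."""
    total = 0.0
    for j in pset:
        if j in current:
            total += pi[i, a, actions[j]]              # observed action
        else:
            total += stat([pi[i, a, 0], pi[i, a, 1]])  # unknown action
    return total / len(pset)

def adapt(i):
    """Agent i reconsiders its action and, optionally, one partner swap."""
    best_a, best_p = actions[i], partners[i]
    best_v = evaluate(i, best_a, best_p, partners[i])
    outside = [j for j in range(N) if j != i and j not in partners[i]]
    for a in (0, 1):
        for drop in list(partners[i]) + [None]:
            for add in (outside if drop is not None else [None]):
                cand = partners[i] if drop is None else (partners[i] - {drop}) | {add}
                v = evaluate(i, a, cand, partners[i])
                # Change-averse agents move only on strict improvements;
                # otherwise, ties with the current best are also accepted.
                if v > best_v or (not CHANGE_AVERSE and v == best_v):
                    best_a, best_p, best_v = a, cand, v
    actions[i], partners[i] = best_a, best_p

for _ in range(100):                  # rounds of asynchronous adaptation
    for i in rng.permutation(N):
        adapt(i)

avg = np.mean([evaluate(i, actions[i], partners[i], partners[i]) for i in range(N)])
print(f"average realized payoff after adaptation ({STAT}): {avg:.3f}")
```

Switching STAT between "min", "max", and "mean", and toggling CHANGE_AVERSE, lets one explore, in this simplified setting, how the evaluation statistic and tie-breaking behaviour affect the payoffs reached by the adapting population.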
