Abstract

Emergent behavior in repeated collective decisions of minimally intelligent agents, who at each time step invoke majority rule to choose between a status quo and a random challenger, can manifest in the long-term stationary probability distributions of a Markov chain. We use this known technique to compare two kinds of voting agendas: a zero-intelligence agenda that chooses the challenger uniformly at random, and a minimally intelligent agenda that chooses the challenger from the union of the status quo and the set of winning challengers. We use Google Colab's GPU-accelerated computing environment to compute stationary distributions for some simple examples from spatial-voting and budget-allocation scenarios. We find that the voting model using the zero-intelligence agenda converges more slowly, but in some cases to better outcomes.
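The abstract's core technique, building a Markov chain over alternatives whose transitions are majority-rule contests and then computing its stationary distribution, can be sketched in a few lines. The following is a minimal illustrative example, not the paper's actual model: it assumes a one-dimensional spatial-voting setup with three hypothetical voters, a small grid of alternatives, and the zero-intelligence agenda (challenger drawn uniformly at random); all names and parameter values are invented for illustration.

```python
import numpy as np

# Hypothetical 1-D spatial-voting example (voters, grid size, and ideal
# points are assumptions, not taken from the paper).
ideals = np.array([2.0, 5.0, 8.0])   # three voters' ideal points
m = 11                               # alternatives are grid points 0..10

def majority_prefers(y, x):
    """True if a strict majority of voters is closer to challenger y than to status quo x."""
    votes = np.sum(np.abs(ideals - y) < np.abs(ideals - x))
    return votes > len(ideals) / 2

# Zero-intelligence agenda: the challenger is drawn uniformly from all
# m alternatives. A winning challenger becomes the new status quo;
# otherwise the status quo is retained.
P = np.zeros((m, m))
for x in range(m):
    for y in range(m):
        if y != x and majority_prefers(y, x):
            P[x, y] = 1.0 / m        # successful challenge moves the chain
        else:
            P[x, x] += 1.0 / m       # failed challenge (or y == x) keeps x

# Stationary distribution: a left fixed point of P, found here by
# power iteration from the uniform distribution.
pi = np.full(m, 1.0 / m)
for _ in range(10_000):
    pi = pi @ P
pi /= pi.sum()
print(np.round(pi, 3))
```

In this toy instance the median voter's ideal point (alternative 5) is a Condorcet winner, so the chain is eventually absorbed there and the stationary distribution concentrates on it; the paper's GPU-accelerated computations address much larger state spaces where such closed-form intuition is unavailable.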
