Abstract

Emergent behavior in repeated collective decisions of minimally intelligent agents, who at each time step invoke majority rule to choose between a status quo and a random challenger, can manifest through the long-run stationary probability distributions of a Markov chain. We use this established technique to compare two kinds of voting agendas: a zero-intelligence agenda that chooses the challenger uniformly at random, and a minimally intelligent agenda that chooses the challenger from the union of the status quo and the set of winning challengers. We use Google Colab's GPU-accelerated computing environment to compute stationary distributions for some simple examples from spatial-voting and budget-allocation scenarios. We find that the voting model using the zero-intelligence agenda converges more slowly, but in some cases to better outcomes.
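
To make the setup concrete, the following is a minimal sketch of the two agendas for a one-dimensional spatial-voting example with Euclidean preferences. It is not the authors' implementation: the grid of alternatives, the voter ideal points, and the power-iteration routine are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of the majority-rule Markov chain
# under the two agendas described in the abstract. All parameters below
# (policy grid, voter ideal points, iteration count) are assumptions.
import numpy as np

alternatives = np.linspace(0.0, 1.0, 11)      # finite 1-D policy space
voter_ideals = np.array([0.2, 0.5, 0.9])      # three voters' ideal points
n = len(alternatives)

def majority_prefers(challenger, status_quo):
    """True if a strict majority of voters sits closer to the challenger."""
    closer = np.abs(voter_ideals - challenger) < np.abs(voter_ideals - status_quo)
    return closer.sum() > len(voter_ideals) / 2

def transition_matrix(minimally_intelligent):
    """Row-stochastic transition matrix of the voting Markov chain."""
    P = np.zeros((n, n))
    for i in range(n):
        winners = [j for j in range(n) if j != i
                   and majority_prefers(alternatives[j], alternatives[i])]
        if minimally_intelligent:
            # Challenger drawn uniformly from {status quo} ∪ winning set;
            # any drawn winner defeats the status quo by construction.
            p = 1.0 / (len(winners) + 1)
            for j in winners:
                P[i, j] = p
            P[i, i] = p
        else:
            # Zero intelligence: challenger drawn uniformly from all
            # alternatives; losing challenges leave the status quo in place.
            for j in winners:
                P[i, j] = 1.0 / n
            P[i, i] = 1.0 - len(winners) / n
    return P

def stationary(P, iters=10_000):
    """Approximate the stationary distribution by power iteration."""
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ P
    return pi

for label, flag in [("zero-intelligence", False), ("minimally intelligent", True)]:
    print(label, np.round(stationary(transition_matrix(flag)), 3))
```

In this one-dimensional example both chains are eventually absorbed at the median voter's ideal point (0.5), the Condorcet winner, so both stationary distributions concentrate there; varying the number of power-iteration steps gives a crude way to probe the convergence-speed difference the abstract reports. On richer state spaces, such as the paper's budget-allocation scenarios, the transition matrices grow rapidly, which is presumably where GPU acceleration becomes useful.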
