Abstract

Coordination incentives (CI), such as the agglomeration bonus, reward the spatial agglomeration (or other spatial patterns) of biodiversity conservation measures and are gaining increasing attention. Experiments on CI, accompanied by statistical analyses, reveal insights into the behaviour of human subjects. However, the scope of statistical models is limited; as in other sciences such as physics or ecology, additional insights can be gained by fitting mechanistic process models to the experimental data. I present the first application of this type in the context of CI and fit a multi-agent simulation model to a seminal experiment on the agglomeration bonus. Comparing two basic approaches for the decision making of the model agents, reinforcement learning and using expectations about the future, reveals that the latter is much better able to replicate the observations of the experiment. Improved models of agent behaviour are indispensable in the model-based assessment of CI for the conservation of biodiversity.
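To make the two decision-making approaches concrete, the following is a minimal illustrative sketch, not the model fitted in the paper: a stylised agglomeration-bonus payoff (base payment plus a bonus per conserving neighbour, with made-up parameter values) and two toy agent types, a Roth-Erev-style reinforcement learner and a forward-looking agent that acts on its expectation of neighbours' behaviour. All names and parameters here are assumptions for illustration only.

```python
import random


def agglomeration_payoff(conserve, n_conserving_neighbours, base=1.0, bonus=0.5):
    """Stylised agglomeration bonus: a base payment for conserving plus a
    bonus per conserving neighbour. Parameters are illustrative, not taken
    from the experiment."""
    if not conserve:
        return 0.0
    return base + bonus * n_conserving_neighbours


class RothErevAgent:
    """Reinforcement learner: propensities for each action grow with the
    payoffs actually realised, so successful actions become more likely."""

    def __init__(self, recency=0.1):
        # Initial propensities for conserving (True) and not conserving (False).
        self.propensity = {True: 1.0, False: 1.0}
        self.recency = recency  # forgetting rate for old experience

    def choose(self, rng):
        total = sum(self.propensity.values())
        return rng.random() < self.propensity[True] / total

    def update(self, action, payoff):
        # Decay all propensities, then reinforce the action just taken.
        for a in self.propensity:
            self.propensity[a] *= (1.0 - self.recency)
        self.propensity[action] += payoff


class ExpectationAgent:
    """Forward-looking agent: conserves if the payoff expected from the
    anticipated number of conserving neighbours exceeds an opportunity cost."""

    def __init__(self, cost=1.2):
        self.cost = cost  # hypothetical opportunity cost of conserving

    def choose(self, expected_conserving_neighbours):
        expected = agglomeration_payoff(True, expected_conserving_neighbours)
        return expected > self.cost
```

The reinforcement learner reacts only to its own payoff history, whereas the expectation-based agent responds directly to anticipated neighbour behaviour; the paper's finding that the latter replicates the experiment better suggests subjects reason about coordination rather than merely reinforcing past successes.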
