Abstract

Coordination incentives (CI), such as the agglomeration bonus, reward the spatial agglomeration (or other spatial patterns) of biodiversity conservation measures and are gaining increasing attention. Experiments on CI, accompanied by statistical analyses, reveal insights into the behaviour of human subjects. However, the scope of statistical models is limited, and one may, as in other sciences such as physics or ecology, gain additional insights by fitting mechanistic process models to the experimental data. I present the first application of this type in the context of CI and fit a multi-agent simulation model to a seminal experiment on the agglomeration bonus. Comparing two basic approaches to the decision making of the model agents, reinforcement learning and the use of expectations about the future, reveals that the latter replicates the observations of the experiment much better. Improved models of agent behaviour are indispensable in the model-based assessment of CI for the conservation of biodiversity.
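The abstract does not specify the model, but the reinforcement-learning approach it mentions can be illustrated with a minimal sketch. The following is my own stylised example, not the paper's implementation: Roth-Erev reinforcement learning in a one-dimensional agglomeration-bonus game, where conserving pays a base subsidy plus a bonus per conserving neighbour. The payoff values, the 1-D layout, and all parameter names are assumptions for illustration only.

```python
import random

def payoff(action, left, right, subsidy=2.0, bonus=3.0, farm=4.0):
    """Illustrative payoff: farming pays a fixed return; conserving pays a
    subsidy plus an agglomeration bonus per conserving neighbour.
    All values are hypothetical."""
    if action == 0:          # 0 = farm, 1 = conserve
        return farm
    return subsidy + bonus * (left + right)

class RothErevAgent:
    """Roth-Erev reinforcement learner: keeps one propensity per action
    and chooses actions with probability proportional to propensity."""
    def __init__(self, init_propensity=1.0):
        self.q = [init_propensity, init_propensity]

    def choose(self, rng):
        total = self.q[0] + self.q[1]
        return 1 if rng.random() < self.q[1] / total else 0

    def learn(self, action, reward):
        # Reinforce the chosen action by the payoff it earned.
        self.q[action] += reward

def simulate(n_agents=10, rounds=200, seed=0):
    """Run the stylised game on a line of agents and return final actions."""
    rng = random.Random(seed)
    agents = [RothErevAgent() for _ in range(n_agents)]
    actions = [0] * n_agents
    for _ in range(rounds):
        actions = [a.choose(rng) for a in agents]
        for i, agent in enumerate(agents):
            left = actions[i - 1] if i > 0 else 0
            right = actions[i + 1] if i < n_agents - 1 else 0
            agent.learn(actions[i], payoff(actions[i], left, right))
    return actions
```

The expectation-based alternative discussed in the paper would instead have each agent forecast its neighbours' next choices and pick the action with the higher expected payoff, rather than reinforcing past successes.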
