Abstract

Across common agent-based modeling (ABM) frameworks (e.g., BDI, SOAR, ACT-R), errors in human perception are handled inconsistently, if at all. When key human behaviors are improperly integrated or absent, researchers in cognitive social simulation may overlook features that would have affected the overall model trajectory. We address this concern by developing a framework that illustrates the value of agents possessing internal models driven by machine learning. We demonstrate the impact of this framework on three well-known models (Schelling, Sugarscape, Axelrod) and on a COVID-19 simulation. Our work employs various machine learning models (e.g., decision tree classifier, logistic regression) to show how the inclusion of human error alters the overall model trajectory and may justify integrating imperfection and heterogeneity into individual decision-making processes. Our open-source framework can be integrated into existing and future models and used to examine the consequences of an agent making a decision without sufficient information (insufficient observation), by ignoring specific information (superficial observation), by inaccurately recording information (inaccurate perception), or due to a gap between environmental complexity and individual capacity (limited ability).
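The four error mechanisms named above can be sketched as perception filters applied to an agent's view of its neighborhood before a standard Schelling-style move rule. This is a minimal illustrative sketch, not the paper's actual framework: the function names, error rates, and the choice to model errors as filters are all assumptions made here for clarity.

```python
import random

# Hypothetical sketch of perception-error modes preceding a Schelling move
# decision. All names and parameters below are illustrative assumptions.

def perceive(neighbors, mode, rng):
    """Return the (possibly distorted) list of neighbor types the agent sees."""
    if mode == "insufficient":      # agent samples only half of its neighbors
        k = max(1, len(neighbors) // 2)
        return rng.sample(neighbors, k)
    if mode == "superficial":       # agent ignores one specific type entirely
        return [n for n in neighbors if n != "B"]
    if mode == "inaccurate":        # each observation is misrecorded 20% of the time
        flip = {"A": "B", "B": "A"}
        return [flip[n] if rng.random() < 0.2 else n for n in neighbors]
    if mode == "limited":           # agent can only attend to the first 3 neighbors
        return neighbors[:3]
    return list(neighbors)          # "none": perfect perception

def decide_to_move(agent_type, perceived, tolerance=0.5):
    """Schelling rule: move if the perceived share of like neighbors is too low."""
    if not perceived:
        return True
    like = sum(1 for n in perceived if n == agent_type) / len(perceived)
    return like < tolerance

rng = random.Random(42)
neighbors = ["A", "A", "B", "B", "B", "A", "B", "B"]  # true neighborhood
for mode in ("none", "insufficient", "superficial", "inaccurate", "limited"):
    seen = perceive(neighbors, mode, rng)
    print(mode, decide_to_move("A", seen))
```

Even in this toy form, the same true neighborhood can yield opposite move decisions under different perception filters, which is the kind of trajectory-altering effect the abstract describes; in the paper's setting the decision rule itself is replaced by a learned model rather than a fixed threshold.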
