Abstract
Public policies are designed to have an impact on particular societies, yet policy-oriented computer models and simulations often focus more on articulating the policies to be applied than on realistically rendering the cultural dynamics of the target society. This approach can lead to policy assessments that ignore crucial social contextual factors. For example, by leaving out distinctive moral and normative dimensions of cultural contexts in artificial societies, estimations of downstream policy effectiveness fail to account for dynamics that are fundamental in human life and central to many public policy challenges. In this paper, we supply evidence that incorporating morally salient dimensions of a culture is critically important for producing relevant and accurate evaluations of social policy when using multi-agent artificial intelligence models and simulations.
Highlights
Testing complex policies in the real world is difficult due to ethical considerations, cost of evaluation, and challenges in generalizing test outcomes
We argue that ethics and social norms are so central to real societies that ignoring how they affect the interactions among simulated agents effectively invalidates a computational policy model
Consistent with the treatment of social norms in the computational social science literature we review, we argue that ethical norms are so central to any society that they must be incorporated for a model of that society to be relevant to public-policy evaluation
Summary
Testing complex policies in the real world is difficult due to ethical considerations, the cost of evaluation, and challenges in generalizing test outcomes. It is understandable, then, that policy professionals would turn to computational policy modeling as an ethical and affordable way of generating cost–benefit estimates of policy proposals before they are implemented. The goal is to render one’s model as useful as possible while acknowledging the ways in which it is wrong, as well as its epistemological and hermeneutical limitations (Tolk et al. 2018; Tolk 2019). In this context, our goal is not to defend the assumptions or validate the specific outcomes of the particular simulation experiments outlined below, but to point out the extent to which including (or failing to include) morally salient features of a culture shapes the policy conclusions those simulations support.
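To make the general point concrete, the toy sketch below is a minimal agent-based illustration, not the authors' model: the update rule, the parameters, and the norm_salience weight are illustrative assumptions. It shows how adding a single normative feedback term to agents' compliance decisions can change the estimated effect of an otherwise identical policy.

import random

def simulate(n_agents=500, steps=50, policy_strength=0.3,
             norm_salience=0.0, seed=0):
    """Toy agent-based sketch: each agent complies with a policy based on a
    material incentive (policy_strength) plus, optionally, a normative pull
    toward the observed compliance of others (norm_salience). All parameter
    values are illustrative assumptions, not results from the paper."""
    rng = random.Random(seed)
    # A small share of early adopters starts out compliant.
    compliant = [rng.random() < 0.1 for _ in range(n_agents)]
    for _ in range(steps):
        share = sum(compliant) / n_agents  # descriptive-norm signal
        for i in range(n_agents):
            p = policy_strength + norm_salience * share
            compliant[i] = rng.random() < min(p, 1.0)
    return sum(compliant) / n_agents

# The same policy, evaluated with and without the normative dynamic:
print("no norms   :", simulate(norm_salience=0.0))
print("with norms :", simulate(norm_salience=0.6))

Under these assumed parameters the two runs report noticeably different compliance levels for the same policy, which is the kind of divergence a policy model can miss when normative dynamics are left out of the simulated society.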