Abstract

A great variety of theoretical models of cognition have been proposed to explain the inner workings of the human brain. Researchers from areas such as neuroscience, psychology, and physiology have proposed these models; nevertheless, most of them are based on empirical studies and on experiments with humans, primates, and rodents. In fields such as cognitive informatics and artificial intelligence, these cognitive models may be translated into computational implementations and incorporated into the architectures of intelligent autonomous agents (AAs). Thus, the main assumption in this work is that knowledge from those fields can be used as a design approach contributing to the development of intelligent systems capable of displaying highly believable, human-like behaviors. Decision-Making (DM) is one of the most investigated and most frequently implemented cognitive functions. The literature reports several computational models that enable AAs to make decisions that help achieve their personal goals and needs. However, most models disregard crucial aspects of human decision-making such as other agents' needs, ethical values, and social norms. In this paper, the authors present a set of criteria and mechanisms proposed to develop a biologically inspired computational model of Moral Decision-Making (MDM). To achieve a believable process of moral decision-making, the authors propose a cognitive function that determines the importance of each criterion based on the mood and emotional state of the AA. The main objective of the model is to enable AAs to make decisions based on ethical and moral judgment.
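
The weighting mechanism described in the abstract, in which the importance of each decision criterion is modulated by the agent's mood and emotional state, could be sketched roughly as follows. This is a minimal illustrative example, not the authors' implementation: the criteria names, the two-dimensional affective state, and the weighting formulas are all assumptions made here for illustration only.

```python
# Minimal illustrative sketch (not the authors' model): moral decision-making
# where the weight of each criterion (own needs, others' needs, ethical values,
# social norms) is modulated by the agent's mood and emotional state.
# All names and formulas below are assumptions.

from dataclasses import dataclass


@dataclass
class AffectiveState:
    mood: float     # hypothetical scale: -1.0 (negative) .. 1.0 (positive)
    arousal: float  # hypothetical scale: 0.0 (calm) .. 1.0 (agitated)


def criterion_weights(state: AffectiveState) -> dict:
    """Map the affective state to importance weights for each criterion.

    Assumption: a positive mood raises the weight of others' needs and
    social norms, while high arousal biases the agent toward its own
    goals. The exact mapping is purely illustrative.
    """
    empathy_bias = 0.5 + 0.5 * state.mood    # 0.0 .. 1.0
    self_bias = 0.5 + 0.5 * state.arousal    # 0.5 .. 1.0
    weights = {
        "own_needs": self_bias,
        "others_needs": empathy_bias,
        "ethical_values": 1.0,                # assumed constant baseline
        "social_norms": 0.5 + 0.5 * empathy_bias,
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}


def moral_utility(option_scores: dict, state: AffectiveState) -> float:
    """Weighted sum of an option's per-criterion scores (each in 0..1)."""
    weights = criterion_weights(state)
    return sum(weights[c] * option_scores.get(c, 0.0) for c in weights)


# Usage example with two hypothetical options.
state = AffectiveState(mood=0.3, arousal=0.7)
options = {
    "keep_resource": {"own_needs": 0.9, "others_needs": 0.1,
                      "ethical_values": 0.4, "social_norms": 0.3},
    "share_resource": {"own_needs": 0.4, "others_needs": 0.9,
                       "ethical_values": 0.8, "social_norms": 0.9},
}
best = max(options, key=lambda name: moral_utility(options[name], state))
print(best)
```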
