Abstract

An autonomous actor should decide on its own which goals and strategies to pursue in a new situation involving multiple actors. Humans in such cases typically rely on social factors, such as individual relationships and their ethical background. An artificial autonomous agent can be a more useful and efficient actor in a human team if its behavior is believable, i.e., similar to naturally motivated human behavior. This similarity can be achieved in a cognitive architecture by attributing characters to actors and by human-like reasoning in terms of ethical norms and moral schemas applied to developing individual relationships among characters. Whether the actor's behavior is sufficiently human-like and human-compatible in this sense can be judged with a Turing-like test, which is described and analyzed here in simplistic videogame settings. The challenge for an artificial actor is to be preferred, over its human rival, as a trustworthy partner of the human participant. Additional metrics include behavioral characteristics derived from the study of the cognitive architecture eBICA. The paradigm extends to other settings as well and can be useful for the evaluation of cognitive architectures that support near-human-level social emotionality.
