Abstract

Motivated by inconsistent, underspecified, or otherwise problematic theories and usages of social agency in the HRI literature, and leveraging philosophical work on moral agency, we present a theory of social agency wherein a social agent (a thing with social agency) is any agent capable of social action at some level of abstraction. Like previous theorists, we conceptualize agency as determined by the criteria of interactivity, autonomy, and adaptability. We use the concept of face from politeness theory to define social action as any action that threatens or affirms the face of a social patient. With these definitions in mind, we specify and examine the levels of abstraction most relevant to HRI research, compare notions of social agency and the surrounding concepts at each, and suggest new conventions for discussing social agency in our field.

Highlights

  • The terms “social agency” and “social agent” appear commonly within the human-robot interaction (HRI) research community.

  • Pollini opines that “social agency is rooted in fantasy and imagination.” Humans’ attribution of social agency may be tied to the development of imagination during childhood, leading Pollini to argue that people can “create temporary social agents” out of almost anything with which they have significant contact, including toys like dolls, tools like axes, and places like the home. This leads them to ask, “what happens when such ‘entities-by-imagination’ show autonomous behavior and contingent reactions, and when they exist as social agents with their own initiative?” We argue that axes, dolls, and places cannot be social agents, at least not in the sense that the typical HRI researcher means when calling a robot a social agent, since robots can conditionally engage in interactional behavior, which we believe is necessary for social agency.

  • We argue that the distinction between these two levels of abstraction (LoAs) explains why some scholars have suggested conceptualizing and measuring “perceived moral agency” in machines as distinct from moral agency itself.

Summary

A Theory of Social Agency for Human-Robot Interaction

MIRRORLab, Department of Computer Science, Colorado School of Mines, Golden, CO, United States
The Pennsylvania State University (PSU), United States
Carlos A Cifuentes, Escuela Colombiana de Ingenieria Julio Garavito, Colombia

INTRODUCTION
A Theory of Social Agency
Social Agency Outside Human-Robot Interaction
Theories of Social Agency in Human-Robot Interaction
Notions of Social Agency in Human-Robot Interaction
A THEORY OF SOCIAL AGENCY FOR HUMAN-ROBOT INTERACTION
Agency and Levels of Abstraction
Social Action Grounded in Face
Social Patiency as Having Face
Social and Moral Agencies as Independent
REVISITING RELATED WORK
CONCLUDING REMARKS
