Metaphors have long been used to describe and explain the roles of technology in human activity. Such descriptions embed an increasing level of ambience, where technology is expected to sense, interpret, and adapt to an individual’s needs and wishes, while, at the same time, demands for transparency and accountability are paving the way for new regulations and guidelines for systems based on artificial intelligence (AI). The purpose of this research is to explore the social roles of humans and AI systems, and to identify open research questions and challenges when designing for transparency and a sense of control. A socio-technical relationship framework was developed for assessing the social roles of AI systems and for designing for change in roles and relationships. The framework was developed based on activity theory, metaphors for human-technology interaction, and emergent research on human-AI collaboration. By focusing on meaningful shared activity, the situations in which technology is socially and personally relevant can be distinguished from the situations in which it is functionally relevant. The identified roles are partly overlapping and fluid depending on the situation, which increases the need for transparency and accountability, and consequently, for AI techniques that allow explainability, negotiation, and adaptation of the enacted roles. The framework is exemplified in two case studies that elicit role transformations in a work environment and a home environment, respectively, addressing an individual’s changing need for support in developing capabilities and autonomy through AI-based technology. We identify a number of open research questions and propose applying the framework to capture and design for the development of capabilities in humans and AI systems, collaborative capabilities in human-AI teaming, and for eliciting the ethical and moral consequences of AI systems operating within a person’s zone of development.