Traditional human-robot interactions involve robots following human instructions, i.e. 'I-mode'. But there is an alternative, 'we-mode', in which robots and humans collaborate. To collaborate with humans towards a shared goal, a robot needs to be socially aware, which requires the ability to process and respond to human cues. Assistant Professor Kotaro Hayashi, Department of Computer Science and Engineering, Toyohashi University of Technology, Aichi, Japan, is conducting research that seeks to advance progress towards the goal of humans and robots entering into we-mode. There are potential benefits for society, including applications in education and welfare. Hayashi's work focuses on social cues and social contingency. In one project, he investigated the dynamics of human-robot interaction through joint tasks using a robot with human-like eyes that he designed. He examined how participants' response times and fixation durations changed when working with robots versus humans, and confirmed that robots can effectively participate in shared tasks. In another study, he looked at how generative artificial intelligence (AI) affects English language learning, using ChatGPT to develop a robot for English conversation practice. The goal was to reduce the social anxiety experienced by Japanese learners of English. The research showed positive engagement, reduced anxiety and improved conversation skills, suggesting the potential of AI tools to enhance language education.