Towards a dialog strategy for handling miscommunication in human-robot dialog

Abstract

This paper presents a first theoretical framework for a dialog strategy that handles miscommunication in natural language Human-Robot Interaction (HRI). On the one hand, the dialog strategy is derived from findings about human-human communication patterns and coping strategies for miscommunication. On the other hand, relevant cognitive theories of human perception serve as its conceptual basis. The novelty of the approach lies first in combining these communication patterns and coping strategies with cognitive theories from human-human interaction (HHI), and second in transferring them to HRI as a general dialog strategy for handling miscommunication. The presented approach is applicable to any task-oriented dialog. As a first step, the conversational context is confined to route descriptions, since asking for directions is a restricted but nevertheless challenging example of task-oriented dialog between humans and a robot.

Similar Papers
  • Book Chapter
  • 10.3233/atde250932
A Task-Oriented Multi-Turn Dialogue Method Based on a New Strategy
  • Oct 1, 2025
  • Yajie Zhu + 2 more

With the advancement of deep learning, human-computer dialogue systems have become a research hotspot. However, the context-understanding ability of such systems in multi-turn dialogues remains relatively weak. This paper proposes a task-oriented multi-turn dialogue method based on a new strategy. It first constructs domain classification models, intent recognition models, semantic slot filling models, dialogue state tracking, dialogue strategy selection, and dialogue responses for task-oriented dialogues. In multi-turn dialogues, the user's text is fed simultaneously into the intent recognition model of the previous dialogue turn and into the domain classification model. If the intent recognition results of the current turn and the previous turn belong to the same domain, the dialogue state in the database is updated and a response is given according to the dialogue strategy. Otherwise, based on the domain classification of the current turn's text, the task is converted into a single-turn dialogue and a corresponding response is given. This method improves the accuracy and performance of task-oriented multi-turn dialogues by deciding whether the current turn should be handled as part of a multi-turn dialogue, based on the task-oriented dialogue domain classification model and the intent recognition model from the previous turn.
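The routing decision this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: `classify_domain` and `recognize_intent` stand in for the paper's trained domain-classification and intent-recognition models, and here they are toy keyword lookups.

```python
def classify_domain(text):
    """Toy stand-in for a trained domain classifier."""
    if "flight" in text or "ticket" in text:
        return "travel"
    if "restaurant" in text or "table" in text:
        return "dining"
    return "unknown"

def recognize_intent(text, domain):
    """Toy stand-in for a domain-scoped intent recognizer."""
    if domain == "travel":
        return "book_flight"
    if domain == "dining":
        return "book_table"
    return "fallback"

def route_turn(user_text, prev_domain, dialogue_state):
    """Decide whether the current turn continues the multi-turn dialogue
    or is handled as a fresh single-turn request."""
    current_domain = classify_domain(user_text)
    current_intent = recognize_intent(user_text, current_domain)
    if prev_domain is not None and current_domain == prev_domain:
        # Same domain as the previous turn: update the tracked state and
        # respond with the multi-turn dialogue policy.
        dialogue_state[current_domain] = current_intent
        return "multi_turn", current_domain
    # Domain switch (or first turn): reset and handle as single-turn.
    dialogue_state.clear()
    dialogue_state[current_domain] = current_intent
    return "single_turn", current_domain
```

For example, a first request about a flight ticket is handled as a single turn in the "travel" domain; a follow-up that stays in that domain is routed to the multi-turn policy, while switching to a restaurant booking resets to single-turn handling.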

  • Book Chapter
  • 10.5772/28323
User, Gesture and Robot Behaviour Adaptation for Human-Robot Interaction
  • Jan 20, 2012
  • Md. Hasanuzzaman + 1 more

Human-robot interaction has been an emerging research topic in recent years because robots are playing important roles in today's society, from factory automation to service applications to medical care and entertainment. The goal of human-robot interaction (HRI) research is to define a general human model that could lead to principles and algorithms allowing more natural and effective interaction between humans and robots. Ueno [Ueno, 2002] proposed the concept of Symbiotic Information Systems (SIS), as well as a symbiotic robotics system as one application of SIS, in which humans and robots communicate with each other in human-friendly ways using speech and gesture. A Symbiotic Information System is an information system that includes human beings as an element, blends into human daily life, and is designed on the concept of symbiosis [Ueno, 2001]. Research on SIS covers a broad area, including intelligent human-machine interaction with gesture, gaze, speech, text commands, etc. The objective of SIS is to allow non-expert users, who might not even be able to operate a computer keyboard, to control robots. It is therefore necessary that these robots be equipped with natural interfaces using speech and gesture. Several studies on human-robot interaction in recent years have focused in particular on assistance to humans. Severinson-Eklundh et al. developed a fetch-and-carry robot (Cero) for motion-impaired users in the office environment [Severinson-Eklundh, 2003]. King et al. [King, 1990] developed the 'Helpmate' robot, which has already been deployed at numerous hospitals as a caregiver. Endres et al. [Endres, 1998] developed a cleaning robot that has successfully served in a supermarket during opening hours. Siegwart et al. described the 'Robox' robot that worked as a tour guide during the Swiss National Exposition in 2002 [Siegwart, 2003]. Pineau et al. described a mobile robot, 'Pearl', that assists elderly people in daily living [Pineau, 2003].
Fong and Nourbakhsh [Fong, 2003] summarized some applications of socially interactive robots. The use of intelligent robots encourages the view of the machine as a partner in communication rather than as a tool. In the near future, robots will interact closely with groups of humans in their everyday environment in fields such as entertainment, recreation, health care, and nursing. Although there is no doubt that the fusion of gesture and speech allows more natural human-robot interaction, as a single modality gesture recognition can be considered more reliable than speech recognition. The human voice varies from person to person, and the system

  • Research Article
  • Cited by 14
  • 10.1177/1729881418773190
Calibrating intuitive and natural human–robot interaction and performance for power-assisted heavy object manipulation using cognition-based intelligent admittance control schemes
  • Jul 1, 2018
  • International Journal of Advanced Robotic Systems
  • S M Mizanoor Rahman + 1 more

In the first step, a one degree of freedom power assist robotic system is developed for lifting lightweight objects. Dynamics for human–robot co-manipulation is derived that includes human cognition, for example, weight perception. A novel admittance control scheme is derived using the weight perception–based dynamics. Human subjects lift a small-sized, lightweight object with the power assist robotic system. Human–robot interaction and system characteristics are analyzed. A comprehensive scheme is developed to evaluate the human–robot interaction and performance, and a constrained optimization algorithm is developed to determine the optimum human–robot interaction and performance. The results show that the inclusion of weight perception in the control helps achieve optimum human–robot interaction and performance for a set of hard constraints. In the second step, the same optimization algorithm and control scheme are used for lifting a heavy object with a multi-degree of freedom power assist robotic system. The results show that the human–robot interaction and performance for lifting the heavy object are not as good as that for lifting the lightweight object. Then, weight perception–based intelligent controls in the forms of model predictive control and vision-based variable admittance control are applied for lifting the heavy object. The results show that the intelligent controls enhance human–robot interaction and performance, help achieve optimum human–robot interaction and performance for a set of soft constraints, and produce similar human–robot interaction and performance as obtained for lifting the lightweight object. The human–robot interaction and performance for lifting the heavy object with power assist are treated as intuitive and natural because these are calibrated with those for lifting the lightweight object. The results also show that the variable admittance control outperforms the model predictive control. 
We also propose a method to adjust the variable admittance control for three degrees of freedom translational manipulation of heavy objects based on human intent recognition. The results are useful for developing controls of human friendly, high performance power assist robotic systems for heavy object manipulation in industries.

  • Research Article
  • Cited by 3
  • 10.1016/j.smhl.2022.100365
Touchless and nonverbal human-robot interfaces: An overview of the state-of-the-art
  • Mar 1, 2023
  • Smart Health
  • Addison Clark + 1 more


  • Conference Article
  • Cited by 7
  • 10.1109/robio.2017.8324818
Learning complex assembly skills from kinect based human robot interaction
  • Dec 1, 2017
  • Xiao Li + 3 more

Acquiring complex assembly skills is still a challenging task in robot programming. Because of differences in sensing and body structure, human knowledge has to be demonstrated, recorded, converted, and finally learned by the robot in an implicit and indirect way. During this process, "how to demonstrate", "how to convert", and "how to learn" are the key problems. In this paper, a Kinect sensor is used to capture the behavior of the human demonstrator. Through natural human-robot interaction, the body skeleton and 3D joint coordinates are provided in real time, which fully describe the human's intention and task-related skills. To overcome structural and individual differences, a Cartesian-level unified mapping method is proposed to convert the human motion and match it to the specified robot. The recorded data are modeled using a Gaussian mixture model (GMM) and Gaussian mixture regression (GMR), which extract redundancies across multiple demonstrations and build robust models to regenerate the dynamics of the recorded movements. The proposed methodologies are implemented on the imNEU humanoid robot platform. Experimental results verify their effectiveness.
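The GMM/GMR step mentioned above can be sketched in a few lines. This is a generic illustration of Gaussian mixture regression, not the paper's learned model: the mixture parameters below are hand-picked stand-ins for a model that would normally be fitted (e.g., by EM) to recorded demonstration data.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Univariate normal density."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmr(t, weights, means, covs):
    """Condition a joint GMM over (time, position) on time t and return
    the expected position: sum_k h_k(t) * (mu_x_k + S_xt/S_tt * (t - mu_t_k))."""
    # Responsibilities of each component for the query time t.
    h = np.array([w * gaussian_pdf(t, m[0], c[0, 0])
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()
    # Conditional mean of position given time, per component.
    cond = np.array([m[1] + c[1, 0] / c[0, 0] * (t - m[0])
                     for m, c in zip(means, covs)])
    return float(h @ cond)

# Illustrative two-component model "learned" from demonstrations:
# motion starts near (t=0, x=0) and ends near (t=1, x=1).
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
covs = np.array([[[0.05, 0.0], [0.0, 0.05]],
                 [[0.05, 0.0], [0.0, 0.05]]])
```

Querying `gmr` at successive time steps regenerates a smooth trajectory that blends the demonstrations, which is how GMR reproduces the dynamics of recorded movements.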

  • Conference Instance
  • Cited by 2
  • 10.1145/1228716
Proceedings of the ACM/IEEE international conference on Human-robot interaction
  • Mar 10, 2007

It is our great pleasure to welcome you to the 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI 2007). HRI is a highly selective annual conference that seeks to showcase the very best research and thinking in human-robot interaction. Human-robot interaction is inherently inter-disciplinary, and the conference sought papers from researchers in robotics, human factors, ergonomics, human-computer interaction, cognitive psychology, and other fields. The mission of the conference is to create a common venue for this broad set of researchers. This year's conference theme is "Robot as Team Member". Robots are used in such critical domains as search and rescue, military theater, mine and bomb detection, scientific exploration, law enforcement, and hospital care. Such robots must coordinate their behaviors with human team members; they are more than mere tools but rather quasi-team members whose tasks have to be integrated with those of humans. HRI 2007 is dedicated to these and other issues in human-robot interaction, highlighting the importance of building core science and understanding the social and technical issues in human-robot interaction in the context of teams and groups. Of the 93 submissions, the program committee accepted 22 papers and 26 posters that cover a variety of topics, among them field studies of robots in public spaces, operator-robot rescue teams, attributions of robot behavior, and human-robot dialogue. The program includes paper presentations, a video session, two interactive poster sessions, panels on robots in teams and the future of HRI research, and keynote speeches by human teamwork expert J. Richard Hackman of Harvard, and by Hiroshi Ishiguro of Osaka University and ATR. We hope that these proceedings will serve as a valuable reference for HRI researchers and students.

  • Conference Article
  • Cited by 6
  • 10.1109/icra.2013.6631184
Human awareness based robot performance learning in a social environment
  • May 1, 2013
  • Shih Huan Tseng + 3 more

In this paper, we develop a human-awareness Decision Network model for robot decision making. To accomplish more natural and intelligent human-robot interaction (HRI), a robot should not only be able to infer the user's intention by recognizing their actions, but also make appropriate decisions and learn from the user's feedback. In traditional approaches, user intention inference and feedback learning are dealt with separately. In this paper, we propose an integrated strategy of human-oriented perception, user modeling, and user sensitivity in a social environment. Through this strategy, the robot can analyze a user's feedback to adjust its decisions as the user expects. The experimental results show the effectiveness of the proposed approach, which enables autonomous adaptation of the robot's decisions to the user's desires. We also demonstrate satisfactory performance in terms of successful inference of human intentions, as well as adequacy of the decisions made by the robot for meeting user expectations.

  • Research Article
  • Cited by 26
  • 10.1016/j.sigpro.2017.06.001
One-shot learning based pattern transition map for action early recognition
  • Jun 2, 2017
  • Signal Processing
  • Yanli Ji + 3 more


  • Conference Article
  • Cited by 10
  • 10.1109/robot.2008.4543680
Natural hand posture recognition based on Zernike moments and hierarchical classifier
  • May 1, 2008
  • Lizhong Gu + 1 more

View independence and user independence are two fundamental requirements for hand posture recognition during natural human-robot interaction. However, only a few studies address both issues simultaneously. The difficulty of natural gesture-based human-robot interaction lies in the fact that appearances of the same hand posture vary across users and viewing directions. In this paper, we propose a systematic feature selection approach based on Zernike moments and Isomap dimensionality reduction. A hierarchical classifier based on a multivariate decision tree and piecewise linearization is developed to deal with the irregular distribution of the same hand postures. The proposed method is compared with other methods commonly used in hand posture recognition. Experimental results indicate that the proposed method can effectively identify different hand postures, irrespective of viewing direction and user.

  • Research Article
  • Cited by 68
  • 10.1016/j.isci.2020.101993
Are friends electric? The benefits and risks of human-robot relationships.
  • Dec 26, 2020
  • iScience
  • Tony J Prescott + 1 more

Social robots that can interact and communicate with people are growing in popularity for use at home and in customer-service, education, and healthcare settings. Although growing evidence suggests that co-operative and emotionally aligned social robots could benefit users across the lifespan, controversy continues about the ethical implications of these devices and their potential harms. In this perspective, we explore this balance between benefit and risk through the lens of human-robot relationships. We review the definitions and purposes of social robots, explore their philosophical and psychological status, and relate research on human-human and human-animal relationships to the emerging literature on human-robot relationships. Advocating a relational rather than essentialist view, we consider the balance of benefits and harms that can arise from different types of relationship with social robots and conclude by considering the role of researchers in understanding the ethical and societal impacts of social robotics.

  • Book Chapter
  • 10.1007/978-3-642-17319-6_2
Natural Human-Robot Interaction
  • Jan 1, 2010
  • Takayuki Kanda

Many human-like robots have been developed recently, and researchers have started to explore how to make such robots interact with humans in a similar way as humans do, that is, natural human-robot interaction (HRI). A central obstacle is the lack of knowledge about human behavior in interaction. There is existing literature modeling humans' conscious information processing, such as language understanding and the generation of utterances and gestures. However, when we try to build a model of natural interaction for robots, we face the difficulty that natural interaction involves a great deal of unconscious human information processing. Without much conscious effort, people communicate with each other naturally using body properties such as gaze and gestures; but without an explicit model of this information processing, we cannot build robots that engage in natural HRI. This talk introduces a series of studies on modeling human behavior for natural human-robot interaction. Building on existing knowledge in psychology and cognitive science, we have constructed models to which observation of people's behavior also contributed substantially. We have modeled behaviors for robots both for interacting with people and for behaving in an environment. Moreover, we have also explored a case in which behavior in interaction is inter-related with the environment.

  • Conference Article
  • Cited by 1
  • 10.1109/icip.2013.6738802
Maximally stable curvature regions for 3D hand tracking
  • Sep 1, 2013
  • Can Wang + 2 more

Fast and robust hand detection and tracking is in increasing demand in areas such as natural human-robot interaction (HRI) and surveillance systems. Previous works typically use skin color or contour models to detect hands. However, they often fail because hands exhibit drastic appearance changes due to illumination variation and their non-rigid nature, and because hands are hard to discriminate from cluttered backgrounds. In fact, the hand region has a specific property: its curvature is relatively higher than that of other body parts and remains stable regardless of pose and location, yet no previous work exploits this property for hand detection. In this work, we propose a novel algorithm, MSCR (Maximally Stable Curvature Regions), that detects hands based on this curvature property. It does not require manual initialization in the first frame; hands are located by MSCR and a skin color detector in the global image. A Kalman filter integrating 3D optical flow estimates the next location for the local detector. Extensive experiments demonstrate that robust 3D tracking of hand articulations can be achieved in real time with accurate results.
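The predict step that seeds the local detector can be sketched as a constant-velocity Kalman filter over 3D hand position. This is an illustrative assumption, not the paper's implementation: the MSCR detector and the optical-flow measurement model are not reproduced here, and the transition matrices and noise levels are placeholders.

```python
import numpy as np

def make_cv_model(dt=1.0, q=1e-3, r=1e-2):
    """State = [x, y, z, vx, vy, vz] with a constant-velocity transition.
    q and r are illustrative process/measurement noise levels."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)          # position += velocity * dt
    H = np.zeros((3, 6))
    H[:3, :3] = np.eye(3)               # observe position only
    Q = q * np.eye(6)
    R = r * np.eye(3)
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    """One predict + update cycle; z is the measured 3D hand position.
    The returned x_pred[:3] is where the local detector would search next."""
    # Predict where the hand will be in the next frame.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Fuse the new measurement (from the detector / optical flow).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new, x_pred[:3]
```

Fed with per-frame hand positions, the filter's one-step-ahead prediction gives the local detector a small search window, which is what makes real-time tracking feasible.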

  • Conference Article
  • 10.21427/cz90-ek55
Is Silence Golden in Human-Robot Dialogue?
  • Apr 25, 2014
  • Robert Ross

The physical actions performed by any robot can be used to convey meaning to a user in human-robot interaction. For example, successfully performing an action following a request may be viewed as an acceptance, while performing the wrong action may be construed as a misunderstanding. Even hesitating to perform a requested physical action may be viewed as a signal of non-understanding. Thus, unlike in the more orthodox domain of non-situated dialogue, natural human-robot dialogue must account for physical actions as a natural and effective implicit communication channel. Though physical actions have not always been explicitly accounted for in dialogue act annotation schemes and models, e.g., vanilla DAMSL lacks a direct mechanism for such implicit communication (Allen and Core 1997), the nature of physical actions as a type of communicative act has long been recognized within the dialogue community (see, for example, Coulthard & Brazil (1979) for an early account). Indeed, the physical performance of an action can be regarded as a variant of multi-modal interaction (Pfleger, Alexandersson, and Becker 2003). However, while the analysis of physical actions as communicative acts is not new, it is less clear how dialogue planning policies for human-robot interaction should be influenced by the co-occurrence of physical task actions. Addressing this issue successfully inevitably depends on knowing whether users consider verbal communication acts alongside physical acts to be superficial or unnatural, and on whether explicit verbal acts can be beneficial given the limitations of imperfect communication. With these questions in mind, in the following we report on a recently conducted study with an implemented human-robot dialogue system designed to assess the importance of compounded physical and verbal communicative acts in human-robot dialogue.

  • Research Article
  • Cited by 10
  • 10.1109/access.2023.3259325
IRWoZ: Constructing an Industrial Robot Wizard-of-OZ Dialoguing Dataset
  • Jan 1, 2023
  • IEEE Access
  • Chen Li + 3 more

Enabling flexible and natural human-robot interaction (HRI) for industrial robots is a critical yet challenging task that can be facilitated by conversational artificial intelligence (AI). Prior research has concentrated on strengthening interactions through the deployment of social robots, while disregarding the capabilities required to improve the flexibility and user experience of human-robot collaboration (HRC) on manufacturing tasks. One of the main challenges is the lack of publicly available industrial-oriented dialogue datasets for training conversational AI. In this work, we present an Industrial Robot Wizard-of-Oz Dialoguing Dataset (IRWoZ) focused on enabling HRC in manufacturing tasks. The dataset covers four domains: assembly, transportation, position, and relocation. It is created using the Wizard-of-Oz technique to reduce noise. We manually constructed, annotated, and validated the dialogue segments (e.g., intentions, slots, annotations) as well as the responses. Building upon the proposed dataset, we benchmark state-of-the-art (SoTA) generative pre-trained (GPT-2) language models on dialogue state tracking and response generation tasks. We expect that the IRWoZ dataset will facilitate ongoing dialogue research, and we make it freely accessible at <uri>https://github.com/lcroy/ToD4IR/tree/main/dataset</uri>.

  • Conference Article
  • Cited by 2
  • 10.1145/3197768.3203181
Touchless heart rate Recognition by Robots to support natural Human-Robot Communication
  • Jun 26, 2018
  • Gerald Bieber + 2 more

With the proliferation of robotic assistants such as robot vacuum cleaners, telepresence robots, and shopping assistance robots, human-robot interaction is becoming increasingly natural. The capabilities of robots are expanding, which leads to an increasing need for natural human-robot communication and interaction. Therefore, the modalities of text- or speech-based communication have to be extended by body language and direct feedback such as emotion or non-verbal communication. In this paper, we present a camera-based, non-contact optical heart rate recognition method that can be used in robots to identify humans' reactions during robot-human communication or interaction. For the purpose of heart rate and heart rate variability detection, we used standard cameras (webcams) located inside the robot's eye. Although camera-based vital sign identification has been discussed in previous research, we noticed that certain limitations with regard to real-world applications still exist. We identified artificial light sources as one of the main influencing factors. Therefore, we propose strategies with the aim of improving natural communication between social robots and humans.
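The core of camera-based heart rate recognition can be sketched as a spectral peak search on a skin-color signal. This is a generic remote-photoplethysmography illustration, not the authors' pipeline: it assumes a precomputed series of per-frame mean green-channel intensities and omits the lighting compensation that is the paper's main concern.

```python
import numpy as np

def estimate_heart_rate_bpm(green_means, fps, lo_hz=0.7, hi_hz=4.0):
    """Return the dominant frequency of the green-channel signal within a
    plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm) as beats per minute."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)      # restrict to the HR band
    peak = freqs[band][np.argmax(spectrum[band])]   # dominant pulsation
    return float(peak * 60.0)
```

Restricting the search to the physiological band is what makes the estimate robust to slow illumination drift and high-frequency sensor noise, though flicker from artificial light inside the band still requires the kind of compensation strategies the paper proposes.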
