Special Issue on Prediction and Anticipation for Reasoning in Human-Robot Interaction
- Research Article
60
- 10.1016/j.isci.2020.101993
- Dec 26, 2020
- iScience
Summary: Social robots that can interact and communicate with people are growing in popularity for use at home and in customer-service, education, and healthcare settings. Although growing evidence suggests that co-operative and emotionally aligned social robots could benefit users across the lifespan, controversy continues about the ethical implications of these devices and their potential harms. In this perspective, we explore this balance between benefit and risk through the lens of human-robot relationships. We review the definitions and purposes of social robots, explore their philosophical and psychological status, and relate research on human-human and human-animal relationships to the emerging literature on human-robot relationships. Advocating a relational rather than essentialist view, we consider the balance of benefits and harms that can arise from different types of relationship with social robots and conclude by considering the role of researchers in understanding the ethical and societal impacts of social robotics.
- Research Article
4
- 10.1080/0144929x.2023.2207668
- Apr 28, 2023
- Behaviour & Information Technology
One-on-one human–robot interaction has expanded to the group level; robot groups increasingly have psychosocial effects on human beings. However, how people interact with robot groups, and especially how human factors and robot group factors jointly influence people's responses to robot groups, is underexplored. To investigate this issue, the present study examined the interaction effect between individual differences in fixed and growth mindsets about the human mind and a fundamental characteristic of robot groups (i.e. entitativity) on responses to the robots during human–robot interaction. We induced mindsets (fixed or growth) about the human mind and manipulated the level of robot group entitativity (high or low) to capture responses to robots during human–robot interaction using virtual reality (VR) technology. The results revealed that a growth (versus fixed) mindset about the human mind promoted self-disclosure toward, and reduced behavioural anxiety with respect to, robot groups with high (versus low) entitativity. We found that increased psychological closeness with robots accounted for these effects. Our findings contribute to research on the determinants of human–robot relationships and present implications for human–robot interaction at the group level.
- Book Chapter
- 10.1007/978-981-13-8323-6_12
- Jun 16, 2019
Intelligent robots such as social robots and home service robots being developed today are required to communicate with users in natural language. Moving beyond simple data retrieval and simple dialogue, we propose a new Human-Robot Interaction (HRI) system that enables a robot to understand and reason about the environment around a user and present information about it in natural language whenever the user asks the robot a question. For this intelligent HRI, building on Dynamic Memory Networks (DMN), a neural network for Visual Question Answering (VQA), we propose a new full-sentence VQA network model called Full-Sentence Highway Memory Network (FSHMN). For the robot platform, a three-DOF robotic head was used, consisting of a neck with three motors and a tablet PC head. To verify the feasibility of the proposed system, an experiment was performed in which a user and a robot interacted with each other through question answering in a customized kitchen environment. Through the experiment, we not only demonstrated the effectiveness of applying deep learning to HRI applications in real environments but also presented a new insight into HRI.
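The abstract does not reproduce the FSHMN architecture, but the core mechanism underlying DMN-style models is a soft-attention pass over encoded input facts conditioned on the question. A minimal, self-contained sketch of that attention step (plain Python with toy vectors; the fact and question encodings are illustrative, not the authors' model):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(facts, question):
    """One episodic-memory pass: weight each encoded fact by its
    similarity to the question, then return the weighted sum."""
    weights = softmax([dot(f, question) for f in facts])
    dim = len(question)
    return [sum(w * f[i] for w, f in zip(weights, facts)) for i in range(dim)]

# Toy encoded facts (e.g. "cup on table", "knife in drawer") and a question vector.
facts = [[1.0, 0.0, 0.2], [0.0, 1.0, 0.1]]
question = [0.9, 0.1, 0.0]
episode = attend(facts, question)  # pulled mostly toward the first fact
```

In a full DMN this pass is repeated over several episodes with learned gating; the sketch above shows only the single attention step.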
- Research Article
- 10.1016/j.dib.2025.112234
- Nov 1, 2025
- Data in Brief
A multimodal dataset for human robot collaborative systems: Experimental data
- Research Article
2
- 10.1007/s13369-019-04009-z
- Jul 4, 2019
- Arabian Journal for Science and Engineering
One of the challenges of outdoor robots is developing effective portable Human-Robot-Interaction (HRI) frameworks. Hand-held devices offer a practical solution: by equipping these devices with robot software, they can be made to interact with outdoor robots. Android devices are ideal as they are open source and can be integrated with robots powered by the Robot Operating System (ROS), also open source. However, due to the limits of rosjava, the mechanism that links ROS with Android, and the conflicting modes of operation between ROS and Android, current ROS-Android implementations support only limited robot applications and not advanced operations such as autonomous navigation. This paper implements selective compartmentalization to overcome these limitations by combining ROS with Android through a number of ROS and Android bridges that facilitate the development of advanced robot applications. Through the proposed method, the authors developed a portable HRI framework that allowed human operators to supervise an outdoor mobile robot while it performed an autonomous task. From their mobile devices, users were able to initialize the robot, configure its motion, and monitor its progress. Users were also able to reprogram the robot to perform new tasks (not previously planned) through creative use of features offered in the developed HRI framework. In addition, user cognitive effort was low, as evidenced by positive scores on the NASA-TLX scale, corroborated by robot performance data. This paper presents the detailed development and implementation steps.
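The paper's specific bridge design is not reproduced in the abstract, but a common pattern for linking a mobile client to ROS is exchanging JSON operations in a rosbridge-style protocol over a socket. A hedged sketch of what such messages look like (the topic and message type are illustrative examples, not taken from the paper):

```python
import json

def advertise(topic, msg_type):
    """rosbridge-style 'advertise' operation announcing a topic."""
    return json.dumps({"op": "advertise", "topic": topic, "type": msg_type})

def publish(topic, msg):
    """rosbridge-style 'publish' operation carrying a message payload."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# e.g. an Android HRI client commanding the robot's velocity:
adv = advertise("/cmd_vel", "geometry_msgs/Twist")
cmd = publish("/cmd_vel", {"linear": {"x": 0.5}, "angular": {"z": 0.0}})
```

In practice these strings would be sent over a WebSocket to a bridge node on the robot; the sketch only shows the message construction.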
- Research Article
25
- 10.1109/mra.2011.943237
- Dec 1, 2011
- IEEE Robotics & Automation Magazine
The Technical Committee (TC) on Human–Robot Interaction (HRI) is 12 years old. The next ICRA in St. Paul, Minnesota, United States, on 14–18 May 2012 will be the time for the TC triennial review. We propose that, notwithstanding the many results achieved during the past years, the TC in HRI will continue to play an important role in coordinating activities related to HRI, consolidating HRI communities, and pushing research toward new and underexplored areas. We envision several possible directions for future research: 1) new research for developing smart and natural human–robot interfaces; 2) increasing the interdisciplinary nature of research in HRI by enlarging the HRI community, especially by linking and coordinating activities with sister groups such as human–computer interaction or human–machine interaction; 3) investigating further the social impact of HRI; 4) promoting field tests on HRI; and 5) new knowledge on performance evaluation and benchmarking. We invite all people with an interest in HRI to send us their ideas about future directions, activities, and proposals for renewing this TC.
- Conference Instance
2
- 10.1145/3371382
- Mar 23, 2020
It is our pleasure to welcome you to the 15th Annual ACM/IEEE International Conference on Human Robot Interaction - HRI 2020. HRI is a premier, highly-selective meeting presenting the latest advances in the field, with broad participation from a range of scholars, including roboticists, social scientists, designers, engineers, and many others. HRI presents the latest advancements in technical, design, behavioural, theoretical, methodological, and metrological ideas in HRI. The theme of this year's conference is "Real World Human-Robot Interaction," reflecting on recent trends in our community toward creating and deploying systems that can facilitate real-world, long-term interaction. This theme also reflects a new theme area we have introduced at HRI this year, "Reproducibility for Human Robot Interaction," which is key to realizing this vision and helping further our scientific endeavors. This trend was also reflected across our other four theme areas, including "Human-Robot Interaction User Studies," "Technical Advances in Human-Robot Interaction," "Human-Robot Interaction Design," and "Theory and Methods in Human-Robot Interaction." The conference attracted 279 full paper submissions from around the world, including Asia, Australia, the Middle East, North America, South America, and Europe. Each submission was overseen by a dedicated theme chair and reviewed by an expert group of program committee members, who worked together with the program chairs to define and apply review criteria appropriate to each of the five contribution types. All papers were reviewed by a strict double-blind review process, followed by a rebuttal period, and shepherding if deemed appropriate by the program committee. Ultimately the committee selected 66 papers (23.6%) for presentation as full papers at the conference. As the conference is jointly sponsored by ACM and IEEE, papers are archived in the ACM Digital Library and the IEEE Xplore. 
Along with the full papers, the conference program and proceedings include Late Breaking Reports, Videos, Demos, a Student Design Competition, and an alt.HRI section. Out of 183 total submissions, 161 (88%) Late Breaking Reports (LBRs) were accepted and will be presented as posters at the conference. A full peer-review and meta-review process ensured that authors of LBR submissions received detailed feedback on their work. Nine short videos were accepted for presentation during a dedicated video session. The program also includes 12 demos of robot systems that participants will have an opportunity to interact with during the conference. We continue to include an alt.HRI session in this year's program, consisting of 8 papers (selected out of 43 submissions, 19%) that push the boundaries of thought and practice in the field. We are also continuing the Student Design Competition with 11 contenders, to encourage student participation in the conference and enrich the program with design inspiration and insights developed by student teams. The conference will include 6 full-day and 6 half-day workshops on a wide array of topics, in addition to the selective Pioneers Workshop for burgeoning HRI students. To accommodate the growth of our conference, this year HRI is organized as a single-track conference for 2.5 days, and dual track on the final day. Our goal is to provide sufficient time for presentations, discussions, and informal meeting and networking. It also allowed us to continue to support student design presentations, demonstrations, and sponsor talks. Over the course of four full days, we will have a rich offering of keynote sessions, oral presentations, posters, and demos. Keynote speakers will reflect the interdisciplinary nature and vigour of our community. Ayanna Howard, the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology, will talk about 'Are We Trusting AI Too Much? 
Examining Human-Robot Interactions in the Real World'. Stephanie Dinkins, a transmedia artist who creates platforms for dialog about artificial intelligence (AI) as it intersects race, gender, aging, and our future histories, will also deliver a keynote; and Dr Lola Cañamero, Reader in Adaptive Systems and Head of the Embodied Emotion, Cognition and (Inter-)Action Lab in the School of Computer Science at the University of Hertfordshire in the UK, will talk about 'Embodied Affect for Real-World HRI'.
- Conference Article
6
- 10.1109/smc.2017.8122654
- Oct 1, 2017
One of the major functions of intelligent robots such as social or home service robots is to interact with users in natural language. Moving on from simple conversation or retrieval of data stored in computer memory, we present a new Human-Robot Interaction (HRI) system which can understand and reason about the environment around the user and provide information about it in natural language. For its intelligent interaction, we integrated Dynamic Memory Networks (DMN), a deep learning network for Visual Question Answering (VQA). For its hardware, we built a robotic head platform with a tablet PC and a 3-DOF neck. Through an experiment in which the user and the robot had a real-time question-answering interaction in our customized environment, the feasibility of our proposed system was validated, demonstrating the effectiveness of applying deep learning in the real world and offering a new insight into human-robot interaction.
- Book Chapter
6
- 10.1007/978-3-319-39513-5_8
- Jan 1, 2016
Behavioral design of robots is one of the concerns in human-robot interaction [1, 2]. For the design of human-robot communicative interaction, many approaches have been presented for finding preferable behaviors that are accepted by people. In these studies, users' impressions of robots during interactions have been examined with the initiative on the users' side, with users evaluating the robot's responses. Conversely, there have been fewer studies evaluating human impressions when a robot takes the initiative and performs active behavior towards a human. While creating events in which a robot explicitly performed active behavior, we reviewed human-robot interactions and presented our behavioral designs. Based on these, we implemented greeting functions for the robot. The objective of this study is to investigate users' impressions of the robot, especially with respect to the robot's activeness. We examined the differences in impressions depending on the presence or absence of the robot's active behavior. The results show significant differences in activity, affinity, and intentionality.
- Dissertation
1
- 10.25904/1912/4071
- Feb 2, 2021
Enhancing Humans' Trust in Robots through Explanations
- Research Article
- 10.55124/ijrml.v1i1.233
- Jan 1, 2025
- International Journal of Robotics and Machine Learning Technologies
The future of AI-powered robotics promises a major transformation across various industries, offering remarkable potential to automate complex processes, boost human capabilities, and redefine numerous sectors. By utilizing machine learning, computer vision, and natural language processing, AI-driven robots can interact with their surroundings, make informed decisions, and learn from their experiences, all without direct human oversight. This fusion of robotics and AI enables machines to tackle delicate tasks such as performing surgeries in healthcare, as well as advancing manufacturing and logistics operations. In the near future, AI-powered robots are expected to become increasingly autonomous, adaptable, and proficient in managing tasks within ever-changing environments. Breakthroughs in machine perception will allow robots to better comprehend and respond to the world around them, enhancing safety and effectiveness. Furthermore, advancements in human-robot collaboration may lead to robots working alongside humans in sectors like education, hospitality, and services, improving productivity and user interactions. However, the rise of AI-powered robotics also raises important ethical, legal, and social concerns, such as job loss and privacy issues. To ensure these technologies benefit society, careful integration will be essential. Ultimately, AI-powered robots are set to play a crucial role in shaping the future, transforming how we live and work. Research significance: The future of AI-driven robotics has the potential to revolutionize various industries, including healthcare, manufacturing, logistics, and agriculture. Research in this area concentrates on improving robot autonomy, decision-making, and collaboration with humans through advanced AI technologies. As robots become smarter, they will be able to carry out intricate tasks with greater accuracy, flexibility, and efficiency, boosting productivity and safety. 
This research also tackles issues such as ethical concerns, reliability, and human-robot emotional interaction. Progress in AI robotics promises innovative, practical, and sustainable solutions, transforming industries and enhancing overall quality of life. Methodology: The approach to studying the future of AI-driven robotics focuses on exploring how artificial intelligence integrates with robotic systems. This involves examining progress in machine learning, computer vision, and natural language processing, which allow robots to complete intricate tasks independently. Researchers investigate technological innovations such as enhanced sensors, edge computing, and human-robot interaction. They also address ethical, societal, and economic implications, such as the effects of automation, labor markets, and safety. To forecast trends, challenges, and opportunities in AI robotics, experts use case studies, simulations, and interviews, offering a well-rounded view of its future potential. Alternative: AI Algorithm Performance, Energy Efficiency, Human-Robot Interaction, Hardware Advancements, Regulation and Ethics. Evaluation preference: AI Algorithm Performance, Energy Efficiency, Human-Robot Interaction, Hardware Advancements, Regulation and Ethics. Results: Hardware advancements rank highest, while regulation and ethics rank lowest.
- Research Article
- 10.3389/conf.fnhum.2018.227.00108
- Jan 1, 2018
- Frontiers in Human Neuroscience
Perceived robot personality affects social attention in real-time human-robot interaction
- Book Chapter
- 10.5772/6997
- May 1, 2009
Interface design for Human-Robot Interaction (HRI) will soon become one of the toughest challenges that the field of robotics faces (Thrun 2004). As HRI interfaces mature it will become more common for humans and robots to work together in a collaborative manner. Although robotics is well established as a research field, there has been relatively little work on human-robot collaboration. There are many application domains that would benefit from effective human-robot collaborative interaction. For example, in space exploration, recent research has pointed out that to reduce human workload, costs, fatigue driven errors and risks, intelligent robotic systems will need to be a significant part of mission design (Fong and Nourbakhsh 2005). Fong and Nourbakhsh also observe that scant attention has been paid to joint human-robot teams, and that making human-robot collaboration natural and efficient is crucial to future space exploration. Effective human–robot collaboration will also be required for terrestrial applications such as Urban Search and Rescue (US&R) and tasks completed robotically in hazardous environments, such as removal of nuclear waste. There is a need for research on different types of HRI systems. This chapter reports on the development of the Augmented Reality Human-Robot Collaboration (AR-HRC) system (Green, Billinghurst et al. 2008). Fundamentally, this system enables humans to communicate with robotic systems in a natural manner through spoken dialog and gesture interaction, using Augmented Reality technology for visual feedback. This approach is in contrast to the typical reliance on a narrow communication link. Truly effective collaboration among any group can take place only when the participants are able to communicate in a natural and effective manner. Communicating in a natural manner for humans typically means using a combination of speech, gesture and non-verbal cues such as gaze. 
Grounding, the common understanding between conversational participants (Clark and Brennan 1991), shared spatial referencing and spatial awareness are well-known crucial components of communication and therefore collaboration.
- Conference Article
5
- 10.1145/3434074.3447144
- Mar 8, 2021
'Food', when mentioned in Human-Robot Interaction (HRI) research, is most often in the context of functional applications of automation, delivery, and assistance. Food has, however, not been explored as a medium for social expression or building relationships with social robots. Using web-based examples of robot food and our pilot collection of LOVOT and AIBO robot user's Tweets about their practices of feeding their robots, we show how food has the potential to sustain interactions, increase enjoyment, sociability and companionship in HRI, enhance life-likeness, autonomy, and agency for robots, and open up opportunities for community building among robot users. We present design implications of food for HRI, and urge HRI researchers to envision food as a facet of Human-Robot relationships and interaction as a celebratory, provocative, and promising domain for HRI and social robot design.
- Research Article
14
- 10.1177/1729881418773190
- Jul 1, 2018
- International Journal of Advanced Robotic Systems
In the first step, a one degree of freedom power assist robotic system is developed for lifting lightweight objects. Dynamics for human–robot co-manipulation is derived that includes human cognition, for example, weight perception. A novel admittance control scheme is derived using the weight perception–based dynamics. Human subjects lift a small-sized, lightweight object with the power assist robotic system. Human–robot interaction and system characteristics are analyzed. A comprehensive scheme is developed to evaluate the human–robot interaction and performance, and a constrained optimization algorithm is developed to determine the optimum human–robot interaction and performance. The results show that the inclusion of weight perception in the control helps achieve optimum human–robot interaction and performance for a set of hard constraints. In the second step, the same optimization algorithm and control scheme are used for lifting a heavy object with a multi-degree of freedom power assist robotic system. The results show that the human–robot interaction and performance for lifting the heavy object are not as good as that for lifting the lightweight object. Then, weight perception–based intelligent controls in the forms of model predictive control and vision-based variable admittance control are applied for lifting the heavy object. The results show that the intelligent controls enhance human–robot interaction and performance, help achieve optimum human–robot interaction and performance for a set of soft constraints, and produce similar human–robot interaction and performance as obtained for lifting the lightweight object. The human–robot interaction and performance for lifting the heavy object with power assist are treated as intuitive and natural because these are calibrated with those for lifting the lightweight object. The results also show that the variable admittance control outperforms the model predictive control. 
We also propose a method to adjust the variable admittance control for three degrees of freedom translational manipulation of heavy objects based on human intent recognition. The results are useful for developing controls of human friendly, high performance power assist robotic systems for heavy object manipulation in industries.
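As a rough illustration of the admittance-control idea described above, the robot can be made to yield to the measured human force through virtual mass–damper dynamics. The sketch below is a generic single-axis admittance loop; the weight-perception term is a hypothetical scaling of the virtual mass, not the authors' actual control law:

```python
def admittance_step(force, vel, m_virtual, c_damp, dt):
    """One Euler step of x_ddot = (F - c*x_dot) / m: the robot yields
    to the applied human force with virtual mass m and damping c."""
    acc = (force - c_damp * vel) / m_virtual
    return vel + acc * dt

# Hypothetical weight-perception scaling: the virtual mass the human
# "feels" is a fraction of the object's actual mass (assumed values).
actual_mass = 10.0       # kg, heavy object
perceived_ratio = 0.3    # assumed perception-based assist factor
m_virtual = actual_mass * perceived_ratio

vel = 0.0
for _ in range(100):                      # 1 s of simulation at dt = 0.01
    vel = admittance_step(force=6.0, vel=vel, m_virtual=m_virtual,
                          c_damp=12.0, dt=0.01)
# under constant force the velocity approaches the steady state F / c = 0.5 m/s
```

A variable-admittance scheme of the kind the abstract mentions would additionally adapt `m_virtual` and `c_damp` online, e.g. from vision or estimated human intent.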