Abstract

There is a confidence crisis in many scientific disciplines, particularly those researching human behavior, as many effects from original experiments have not been successfully replicated in large-scale replication studies. While human-robot interaction (HRI) is an interdisciplinary research field, the study of human behavior, cognition, and emotion also plays a vital part in HRI. Are HRI user studies facing the same problems as other fields, and if so, what can be done to overcome them? In this article, we first give a short overview of the replicability crisis in the behavioral sciences and its causes. In a second step, we estimate the replicability of HRI user studies 1) by structurally comparing HRI research processes and practices with those of other disciplines that have replicability issues, 2) by systematically reviewing meta-analyses of HRI user studies to identify parameters known to affect replicability, and 3) by summarizing the first replication studies in HRI as direct evidence. Our findings suggest that HRI user studies often exhibit the same problems that caused the replicability crisis in many behavioral sciences, such as small sample sizes, a lack of theory, and missing information in reported data. To improve the stability of future HRI research, we propose statistical, methodological, and social reforms. This article aims to provide a basis for further discussion and a potential outline for improvements in the field.


Introduction

The year 2011 hit psychology hard, as a series of events led to what would later become known as the "replicability crisis" (Świątkowski and Dompnier, 2017; Romero, 2019; Wiggins and Christopherson, 2019). A significant portion of quantitative studies that tried to replicate findings of classic psychological experiments from prestigious journals failed to find the effects reported in the original work. The conclusion is clear: the crisis goes beyond replicability; it is a crisis of confidence, and it is affecting many scientific disciplines, primarily those that are based on the study of human behavior and rely heavily on quantitative methods (Ioannidis, 2005). While HRI is a very heterogeneous research field with many different disciplines and perspectives concerning, for example, design processes and hardware and software aspects, the study of the interaction between human users and machine systems in user studies (and thus a social and behavioral perspective) is a significant part of the discipline (Sheridan, 2016; Bartneck et al., 2020). Since other disciplines that focus on human behavior and use quantitative methods have faced such replicability problems, the question arises whether quantitative HRI user studies could be similarly affected.

