Abstract

This article provides an overview of planning, designing, and executing human studies for Human-Robot Interaction (HRI), leading to ten recommendations for experimental design and study execution. Two improvements are described, drawing on insights from psychology and the social sciences. The first is to use large sample sizes that better represent the populations under investigation and increase the probability of obtaining statistically significant results. The second is to apply three or more methods of evaluation in order to obtain reliable and accurate results and to establish convergent validity. Five primary methods of evaluation exist: self-assessments, behavioral observations, psychophysiological measures, interviews, and task performance metrics. The article describes specific tools and procedures for operationalizing these improvements, as well as suggestions for recruiting participants. A recent large-scale, complex, controlled human study in HRI with 128 participants and four methods of evaluation is presented to illustrate planning, design, and execution choices.
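As a brief illustration of the sample-size recommendation, an a priori power analysis can estimate how many participants a between-subjects comparison needs before a study is run. The sketch below is not from the article; it assumes a two-group t-test design with a hypothetical medium effect size (Cohen's d = 0.5), alpha = 0.05, and 80% power, using the statsmodels library.

```python
# Minimal a priori power analysis sketch for a two-group HRI comparison.
# Effect size, alpha, and power below are illustrative assumptions,
# not values taken from the article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the required sample size per group.
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium effect (Cohen's d)
    alpha=0.05,        # significance level
    power=0.8,         # desired statistical power
)

print(f"Participants needed per group: {round(n_per_group)}")
# With two groups, the total is roughly double this figure,
# which is consistent with the scale of the 128-participant study cited above.
```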
