Abstract
Most people struggle to understand probability, which is an issue for Human-Robot Interaction (HRI) researchers who need to communicate risks and uncertainties to the participants in their studies, the media, and policy makers. Previous work has shown that even the use of numerical values to express probabilities does not guarantee that laypeople understand them accurately. We therefore investigate whether words such as "likely" and "almost certainly not" can be used to communicate probability. We embedded these phrases in the context of autonomous vehicle use. The results show that the association of phrases with percentages is not random and that there is a preferred order of phrases. The association is, however, not as consistent as hoped. Hence, it would be advisable to complement the use of words with numerical expressions of uncertainty. This study provides an empirically verified list of probability phrases that HRI researchers can use to complement numerical values.
Highlights
In order to model the dependency of the responses, we considered three models in each case: (i) the naive model, in which the most popular overall response category of phrase/percentage was chosen in response to the percentage/phrase prompt and no other variables were taken into account; (ii) a single classification/regression tree; and (iii) a random forest (see the sketch after this list)
In order to find out whether sex, age, and context had an effect on the perceived correspondence between percentages and phrases, we studied the importance of these variables in the random forests, as well as whether or not they were included in the single best tree
We identified four decision trees, one for each combination of direction and prompt (“uncertainty” and “likelihood”)
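The model comparison described above can be illustrated with a short, hypothetical sketch. The snippet below uses Python with scikit-learn (an assumption; the original analysis may have used different tooling) to fit a naive most-frequent-response baseline, a single classification tree, and a random forest that predict the phrase chosen for a percentage prompt, and then reads off the forest's variable importances for sex, age, and context. The column names and the synthetic data are illustrative placeholders, not the study's data.

```python
# Minimal sketch (not the authors' code) of the three-way model comparison:
# naive most-frequent baseline vs. single tree vs. random forest, predicting
# the probability phrase chosen for a percentage prompt from the prompt value
# plus sex, age, and context. All column names and data are placeholders.
import numpy as np
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "percentage": rng.choice([5, 25, 50, 75, 95], size=n),  # prompt value
    "sex":        rng.integers(0, 2, size=n),                # encoded covariate
    "age":        rng.integers(18, 80, size=n),
    "context":    rng.integers(0, 2, size=n),                # e.g. AV scenario vs. neutral
})
# Placeholder response: the probability phrase category a participant picked.
df["phrase"] = pd.cut(
    df["percentage"] + rng.normal(0, 15, size=n),
    bins=[-np.inf, 20, 40, 60, 80, np.inf],
    labels=["almost certainly not", "unlikely", "possible",
            "likely", "almost certain"],
)

X = df[["percentage", "sex", "age", "context"]]
y = df["phrase"]

models = {
    "naive (most frequent)": DummyClassifier(strategy="most_frequent"),
    "single tree":           DecisionTreeClassifier(max_depth=4, random_state=0),
    "random forest":         RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:24s} mean CV accuracy: {acc:.2f}")

# Variable importances indicate whether sex, age, and context carry signal
# beyond the percentage prompt itself.
forest = models["random forest"].fit(X, y)
for col, imp in zip(X.columns, forest.feature_importances_):
    print(f"{col:12s} importance: {imp:.3f}")
```

Comparing cross-validated accuracy against the naive baseline shows whether the tree-based models capture any dependency beyond the modal response, and the importance scores indicate whether the covariates add predictive signal over the prompt value alone.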
Summary
It is inherently difficult to communicate uncertainty and the associated risks to these audiences. Reporters may ask about the application and usefulness of HRI research on a societal level. At times they may even confront HRI researchers with predictions from popular science fiction movies. Responding to such questions with statistically accurate answers is difficult, both because the researcher might not have the answers to such high-level questions and because the audience might not be able to understand the statistics. The general public may even harbour an underlying fear of robots that influences their perception [3].