Abstract

Empirical validation and verification procedures require carefully developed research methodology. Researchers and practitioners in human–machine interaction and the automotive domain have therefore developed standardized test protocols for user studies. These protocols are used to evaluate human–machine interfaces (HMIs) for driver distraction or automated driving. A system or HMI is validated against certain criteria that it can either pass or fail. One important aspect is the number of participants to include in the study and the respective number of potential failures concerning the pass/fail criteria of the test protocol. By applying binomial tests, the present work provides recommendations on how many participants should be included in a user study. It sheds light on the degree to which inferences from a sample with specific pass/fail ratios to a population are permitted. The calculations take into account different sample sizes and different numbers of observations within a sample that fail the criterion of interest. The analyses show that required sample sizes rise steeply with the degree of controllability that is assumed for a population. The required sample size for verifying a specific controllability level (e.g., 85%) also increases if failures of the safety criteria are observed. In conclusion, the present work outlines potential sample sizes and the inferences about populations that are valid given the number of observed failures in a user study.
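The binomial logic behind these recommendations can be sketched as follows: to verify a controllability level C, one finds the smallest sample size n such that observing at most k uncontrollable events would still be unlikely (below a significance level α) if the true controllability were only C. The sketch below is illustrative, not the authors' exact procedure; the function names, the one-sided formulation of the null hypothesis, and α = 0.05 are assumptions.

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p), computed from the definition."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def required_sample_size(controllability: float, max_failures: int,
                         alpha: float = 0.05, n_max: int = 1000):
    """Smallest n such that observing at most `max_failures` uncontrollable
    events rejects H0 "true controllability <= `controllability`" at level
    alpha (one-sided binomial test). Returns None if no n <= n_max works."""
    fail_rate = 1.0 - controllability
    for n in range(max_failures + 1, n_max + 1):
        # p-value: chance of so few failures if controllability were only C
        if binom_cdf(max_failures, n, fail_rate) <= alpha:
            return n
    return None

# For 85% controllability: 19 participants suffice with zero observed
# failures, but the requirement grows once failures occur in the sample.
print(required_sample_size(0.85, 0))  # 19
print(required_sample_size(0.85, 1))  # 30
```

This illustrates the pattern reported in the abstract: each additional observed failure, and each increase in the assumed controllability level, pushes the required sample size upward.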

Highlights

  • Level 2 driving automation technology [1] is already widely available to consumers, and the first Level 3 system obtained legal permission in March 2021 in Japan [2]

  • Compared to the evaluation of human–machine interfaces (HMIs) in manual driving, which focuses on driver distraction [4,5], a commonly agreed upon methodological framework for validation and verification does not yet exist

  • The results showed that populations with lower levels of controllability (e.g., 80%, 85%) can be verified with a reasonable sample size of fewer than 50 participants, even if up to two uncontrollable events occur in the sample

Introduction

Level 2 driving automation technology [1] is already widely available to consumers, and the first Level 3 system obtained legal permission in March 2021 in Japan [2]. A vehicle’s human–machine interface (HMI) must communicate to the driver the respective responsibilities of the system and the driver. Compared to the evaluation of HMIs in manual driving, which focuses on driver distraction [4,5], a commonly agreed upon methodological framework for validation and verification does not yet exist. The Response Code of Practice [6] was first published in 2006 and was recently updated in the L3 Pilot project [7]. These reports combine a large body of research on driving automation systems and human–machine interfaces. Naujoks et al. [8]


