Theories and models are central to the Human Factors/Ergonomics (HFE) sciences: they produce new knowledge, push the boundaries of the field, and provide a basis for designing systems that improve human performance. Despite this key role, relatively little attention has been paid to what constitutes a good theory or model and how to assess the relative worth of competing theories and models. This study aims to bridge this gap by (1) proposing a set of criteria for evaluating models in HFE, (2) providing a methodological approach for applying the proposed criteria, and (3) evaluating existing models of trust in automation (TiA) against these criteria. The resulting work provides a reference guide for researchers to examine the performance of existing models and to make meaningful comparisons between TiA models. The results also shed light on how TiA models differ in satisfying the criteria. While conceptual models offer valuable insight into identifying causal factors, their limited operationalizability poses a major challenge for testability and empirical validity. Computational models, by contrast, are more readily testable and have greater predictive power, but they capture only a subset of causal factors and have reduced explanatory power. The study concludes with the recommendation that, in order to advance as a scientific discipline, HFE should adopt modelling approaches that help us understand the complexities of human performance in dynamic sociotechnical systems.