Abstract

A human reliability analysis (HRA) is defined as “any method by which human reliability is estimated” [1, p. 301] and generally consists of three parts: 1) identifying possible human errors and their contributors, 2) modelling human error, and 3) quantifying human error probabilities. Many HRA methods quantify human error probabilities through the use of performance shaping factors (PSFs) that increase or decrease these probabilities. One of the factors that is often evaluated is the quality of the human-machine interface (HMI) and how it affects performance. However, as evaluating the HMI can be a complicated task, the descriptions found in HRA methods are often not sufficient to perform the evaluation. This problem has grown recently, as most current HRA methods are based on classical non-computerized control rooms, creating a mismatch between the descriptions and the real world [2]. In this paper, different HMI evaluation methods are discussed in terms of their usability in situations where the descriptions provided in HRA methods are not sufficient.
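
To illustrate the PSF-based quantification the abstract refers to, the sketch below shows a multiplicative adjustment of a nominal human error probability, as used in SPAR-H-style methods; the specific PSF names and multiplier values are illustrative assumptions, not values taken from this paper.

```python
# Minimal sketch of PSF-based quantification, assuming a SPAR-H-style
# multiplicative model: HEP = nominal HEP x product of PSF multipliers.
# The factor names and multiplier values below are hypothetical.

def adjusted_hep(nominal_hep: float, psf_multipliers: dict[str, float]) -> float:
    """Adjust a nominal human error probability (HEP) by PSF multipliers.

    Multipliers > 1 degrade performance (e.g., a poor HMI);
    multipliers < 1 credit better-than-nominal conditions.
    """
    hep = nominal_hep
    for factor, multiplier in psf_multipliers.items():
        hep *= multiplier
    return min(hep, 1.0)  # a probability cannot exceed 1

# Example: a nominal HEP of 1e-2, degraded by a poor HMI and elevated
# stress (hypothetical multipliers) -> 0.2
print(adjusted_hep(1e-2, {"HMI quality": 10.0, "stress": 2.0}))
```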
