Abstract
Potential users of human reliability assessment techniques appear to be most sensitive to high and low predicted error probabilities. Their sensitivity to high error probabilities is heightened because there may be a need to build in more system diversity or redundancy, and their concern about low error probabilities seems to have its origin in the difficulties they experience in believing low error probability predictions. It appears that only limited attempts have so far been made to validate the predictive power of the published assessment techniques, and such attempts as have been made to correlate predicted and observed data have been confined to areas which are of minimal interest to assessors. In order to build confidence that human reliability assessment techniques will actually furnish credible predictions as a substitute for hard data, they need to be tested in task scenarios for which there are reliable data. This means not only that more extensive testing will have to be undertaken in the middle range, but also that valid low- and high-probability human error data will have to be collected as a matter of some urgency. This paper reviews the validation attempts to date and suggests an outline programme of research work which needs to be performed to determine whether human reliability assessment methods can be relied upon in the absence of robust human reliability data.