Abstract

The interdependency analysis of human factors usually requires identifying and labeling factors from accident/incident reports within a binary framework. The labeling procedure demands substantial domain knowledge and a large group of raters, whose opinions often conflict sharply with one another. Inter-rater reliability assessment is therefore needed to improve labeling consistency. However, the inter-rater reliability coefficient is commonly computed on a category-by-category basis, which is inefficient, especially when there are many raters. To overcome this deficiency, a feedback-autonomy-based consensus model is proposed to determine the inter-rater reliability of human-factors labeling results involving a large group of raters. The proposed model can manage various non-cooperative behaviors and provide feedback adjustment suggestions to raters during the labeling procedure. Notably, the autonomy mechanism allows raters to freely choose and adjust labels by referring to the reference opinions without causing over-adjustment issues. Inter-rater reliability and interdependency analyses are performed on 279 accident/incident reports. We apply a commercial tool to obtain the reference results, which are consistent with the unanimous group opinions. In addition, comparative studies demonstrate the advantages of the proposed consensus model. The proposed model is well suited to reducing the conflicts among raters that commonly arise in human-factors analysis.
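
To illustrate the category-by-category computation of the inter-rater reliability coefficient mentioned above, the following is a minimal Python sketch, assuming Fleiss' kappa as the coefficient and binary present/absent labels per human-factor category. The abstract does not specify the coefficient, the categories, or the number of raters; the category names, the 10 raters, and the synthetic label matrices here are hypothetical, and only the figure of 279 reports comes from the study.

import numpy as np

def fleiss_kappa(labels):
    # labels: (n_reports, n_raters) array of 0/1 judgments for one
    # human-factor category (1 = factor judged present in the report).
    n_reports, n_raters = labels.shape
    # Per-report counts of raters choosing "absent" (0) and "present" (1).
    counts = np.stack([(labels == 0).sum(axis=1),
                       (labels == 1).sum(axis=1)], axis=1)
    # Observed agreement: mean of the per-report agreement proportions.
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_reports * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1.0 - p_e)

# Category-by-category computation: one kappa value per human-factor category.
rng = np.random.default_rng(0)
categories = ["decision_error", "skill_based_error", "violation"]  # hypothetical names
for name in categories:
    # 279 reports (as in the study), 10 raters (assumed), synthetic labels.
    labels = rng.integers(0, 2, size=(279, 10))
    print(name, round(fleiss_kappa(labels), 3))

Because each category requires its own pass over all reports and raters, the cost grows with the number of categories and raters, which is the inefficiency the proposed consensus model aims to avoid.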
