Abstract

While automation systems bring efficiency improvements, operators' trust in automation has become an important factor affecting the safety of human-machine systems. An operator's miscalibrated trust in the automation (undertrust or overtrust) means the human-automation system is not always well matched. In this paper, we took the aircraft engine fire alarm system as the research scenario, carried out a human-in-the-loop simulation experiment by injecting aircraft engine fire alarms, and measured the subjects' trust levels with a subjective-report method. Based on the experimental data, we then studied the laws of human-machine trust, including trust anchoring (when anchored with a known false alarm rate, a subject's trust fluctuates over a narrower range than with an unknown false alarm rate), trust elasticity, and the primacy effect. We proposed a human-machine trust calibration method to prevent undertrust and overtrust during human-machine interaction, and verified different forms of the calibration method. We found that reminding subjects when the human error probability (HEP) ≥ 0.3, while also declaring whether the source of the human error is overtrust or undertrust, is the more effective calibration method and can generally reduce the human error probability.
