Abstract

In human-machine collaborative systems, human operators may disuse, misuse, or abuse the assistance provided by machines: they distrust systems with poor reliability and may become over-dependent on highly reliable ones. Human trust in machine assistance thus seems trapped in a dilemma that prevents harmonious human-machine collaboration. Considering the characteristics of the human cognitive process, this paper proposes providing extra cognitive cues to help operators tune their trust and decision making. The cues are expressed through variable multi-modal human-machine interfaces and serve as intelligent machines' human-like self-confidence, or confidence for short. To test this approach, a car-following driving experiment was conducted to investigate drivers' perception of unreliable visual/auditory assistance. Experimental results showed that communicating confidence significantly (α = 0.05) affected human drivers' cognitive processing, especially for decision-making-oriented auditory assistance. Human drivers found the machines more trustworthy and useful when they expressed variable self-confidence. Meanwhile, the rise in trust did not significantly induce a fall in self-reliance or over-dependence on the machines. Variable self-confidence continuously reminded human drivers that the assistance was not always reliable and that they should maintain sufficient awareness of their situation. With these extra cognitive cues, human operators can more comfortably tune their trust in real-time situations and build better human-machine collaborative relationships.
