Abstract

Decision aids based on artificial intelligence and machine learning can benefit human decisions and system performance, but they can also provide incorrect advice and invite operators to rely on automation inappropriately. This paper examined the extent to which example-based explanations could improve reliance on a machine learning-based decision aid. Participants engaged in a preventive maintenance task by providing their diagnosis of the conditions of three components of a hydraulic system. A decision aid based on machine learning provided advice but was not always reliable. Three explanation displays (baseline, normative, normative plus contrastive) were manipulated within participants. With the normative explanation display, we found improvements in participants' decision time and subjective workload. With the addition of contrastive explanations, we found improvements in participants' hit rate and sensitivity in discriminating between correct and incorrect ML advice. Implications for the design of explainable interfaces to support human-AI interaction in data-intensive environments are discussed.
