Abstract

User trust plays a key role in determining whether autonomous computer applications are relied upon, and it will be central to the acceptance of emerging AI applications such as optimisation. Two important factors known to affect trust are system transparency, i.e., how well the user understands how the system works, and system performance. In the case of optimisation, however, it is difficult for the end-user to understand the underlying algorithms or to judge the quality of the returned solution. Through two controlled user studies, we explore whether users can better calibrate their trust in the system when: (a) they are given feedback on the system's operation in the form of visualisations of intermediate solutions and their quality; (b) they can interactively explore the solution space by modifying the solution returned by the system. We found that showing intermediate solutions can lead to over-trust, while interactive exploration leads to more accurately calibrated trust.