Abstract

We discuss a model of human-AI collaboration in which an AI processes and summarizes information in a decision-making scenario to provide situation understanding to a human, who in turn uses that information to generate alternative action options and choose among them. The model raises questions about how best to represent uncertainty to humans for maximally robust decision-making. Two experiments investigated several representations of uncertainty: probability without evidence information, probability with total evidence, frequency with evidence counts, beta-distribution graphs, and subjective-logic triangles. The findings can guide AI developers in how best to represent situation understanding, with its attendant uncertainty, for human collaborators.
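
As a rough illustration (not code from the paper), the sketch below shows how a single body of evidence could feed each representation the experiments compared. The evidence-count inputs, function names, and example numbers are hypothetical; the mapping from counts to a binomial subjective-logic opinion follows Jøsang's standard construction with prior weight W = 2, and the beta parameters use the corresponding Beta(r + 1, s + 1) posterior.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    positive: int  # pieces of evidence supporting the hypothesis (r)
    negative: int  # pieces of evidence against the hypothesis (s)

def probability(e: Evidence) -> float:
    """Point probability with no evidence information shown:
    the mean of the Beta(r+1, s+1) posterior."""
    return (e.positive + 1) / (e.positive + e.negative + 2)

def frequency_format(e: Evidence) -> str:
    """Frequency with evidence counts, e.g. '8 of 10 reports support it'."""
    total = e.positive + e.negative
    return f"{e.positive} of {total} reports support the hypothesis"

def beta_parameters(e: Evidence) -> tuple[float, float]:
    """Parameters of the Beta(alpha, beta) distribution one would plot
    as a beta-distribution graph."""
    return e.positive + 1.0, e.negative + 1.0

def subjective_logic_opinion(e: Evidence, W: float = 2.0):
    """Binomial opinion (belief, disbelief, uncertainty) for a
    subjective-logic triangle; b + d + u = 1."""
    total = e.positive + e.negative + W
    return e.positive / total, e.negative / total, W / total

evid = Evidence(positive=8, negative=2)
print(probability(evid))               # 0.75
print(frequency_format(evid))          # 8 of 10 reports support the hypothesis
print(beta_parameters(evid))           # (9.0, 3.0)
print(subjective_logic_opinion(evid))  # (0.666..., 0.166..., 0.166...)
```

The point of the contrast is that the first format discards the amount of evidence entirely, while the last three preserve it in different forms: the same 0.75 point probability could arise from 8-of-10 reports or 800-of-1000, but the beta graph and the opinion's uncertainty component would differ sharply between those cases.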
