Whenever someone in a team tries to help others, it is crucial that they have some understanding of the other team members’ goals. In modern teams, this applies equally to human and artificial (“bot”) assistants. Recognizing when one does not know something is essential for stopping the execution of inappropriate behavior and, ideally, for attempting to learn more appropriate actions. From a statistical point of view, this can be translated to assessing whether none of the hypotheses in a considered set is correct. Here we investigate a novel approach for making this assessment based on monitoring the maximum a posteriori (MAP) probability of a set of candidate hypotheses as new observations arrive. Simulation studies suggest that this is a promising approach; however, we also caution that there may be cases where it is more challenging. The problem we study and the solution we propose are general, with applications well beyond human–bot teaming, including, for example, the scientific process of theory development.
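To make the monitoring idea concrete, the following is a minimal sketch of sequential Bayesian updating over a candidate hypothesis set, tracking the MAP probability as each observation arrives. The Gaussian observation model, the specific candidate means, and the idea of inspecting the MAP trajectory for atypical behavior are all illustrative assumptions; the abstract does not specify the paper’s actual models or decision rule.

```python
import numpy as np
from scipy.stats import norm

# Illustrative setup (assumptions, not the paper's models): candidate
# hypotheses are Gaussian observation models differing only in their
# means, and the true data-generating mean lies outside the candidate set.
candidate_means = np.array([0.0, 1.0, 2.0])
true_mean = 5.0  # none of the candidate hypotheses is correct
rng = np.random.default_rng(seed=1)

# Uniform prior over the candidate hypotheses, kept in log space
# for numerical stability.
log_post = np.full(len(candidate_means), -np.log(len(candidate_means)))

map_trace = []
for t in range(100):
    x = rng.normal(true_mean, 1.0)                   # new observation arrives
    log_post += norm.logpdf(x, loc=candidate_means)  # Bayes update: add log-likelihoods
    log_post -= np.logaddexp.reduce(log_post)        # renormalize the posterior
    map_trace.append(np.exp(log_post.max()))         # record the MAP probability

# Purely illustrative check (an assumption, not the paper's rule):
# inspect how the MAP probability evolves over time; an atypical
# trajectory may hint that none of the candidates is correct.
print(f"final MAP probability: {map_trace[-1]:.3f}")
```

Working in log space and renormalizing with `logaddexp` avoids underflow once many observations have been accumulated, which matters in any sequential-monitoring setting like the one the abstract describes.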