Scheduling is the task of assigning a set of scarce resources distributed over time to a set of agents, who typically have preferences over the assignments they would like to receive. Due to the constrained nature of these problems, satisfying all agents' preferences is often infeasible, which may leave some agents dissatisfied with the resulting schedule. Providing explanations has been shown to increase satisfaction and trust in solutions produced by AI tools. However, explaining schedules poses particular challenges, such as problem interpretability (i.e., generating explanations from large and dense amounts of information) and privacy preservation (i.e., generating explanations that respect the privacy of the other agents involved). In this paper, we introduce the EXPRES framework, which can explain why a given preference was unsatisfied in a given optimal schedule. The EXPRES framework consists of (i) an explanation generator that, based on a Mixed-Integer Linear Programming (MILP) model, finds the best set of reasons that can explain an unsatisfied preference; and (ii) an explanation parser, which translates the generated explanations into human-interpretable ones while preserving agents' privacy. Through simulations, we show that the explanation generator scales efficiently to large instances. Finally, through a set of user studies within J.P. Morgan, we show that employees preferred the explanations generated by EXPRES over human-generated ones when considering workforce scheduling scenarios.
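To make the explanation generator's role concrete, below is a minimal sketch of how selecting a "best set of reasons" for an unsatisfied preference could be cast as a MILP, here as a set-cover-style selection solved with PuLP in Python. The candidate reasons, conflict facts, coverage sets, and costs are hypothetical placeholders for illustration, not the paper's actual formulation.

```python
# A minimal sketch (not EXPRES's actual model): pick the cheapest set of
# candidate "reasons" that together cover every conflict fact blocking an
# unsatisfied preference. All data below is hypothetical.
import pulp

reasons = ["r1", "r2", "r3", "r4"]
conflicts = ["c1", "c2", "c3"]
covers = {  # covers[r] = conflict facts that reason r explains
    "r1": {"c1"},
    "r2": {"c1", "c2"},
    "r3": {"c3"},
    "r4": {"c2", "c3"},
}
cost = {"r1": 1, "r2": 2, "r3": 1, "r4": 3}  # e.g., explanation verbosity

model = pulp.LpProblem("explanation_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("use", reasons, cat="Binary")

# Objective: the cheapest (e.g., shortest) explanation.
model += pulp.lpSum(cost[r] * x[r] for r in reasons)

# Every conflict fact must be explained by at least one selected reason.
for c in conflicts:
    model += pulp.lpSum(x[r] for r in reasons if c in covers[r]) >= 1

model.solve(pulp.PULP_CBC_CMD(msg=False))
print([r for r in reasons if x[r].value() == 1])  # -> ['r2', 'r3']
```

The set-cover framing above is only one plausible way to encode "best set of reasons"; in EXPRES the objective could additionally weigh factors such as how much private information about other agents a reason reveals.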