Human-machine collaboration can lead to higher-quality and more informed data-driven decisions. Evaluating these decisions is necessary both to measure such benefits and to enable experiential learning and posterior rationalization of their results and consequences. However, the multiplicity of human-machine collaboration modes and the multi-faceted nature of data-driven decisions complicate evaluation, and evaluation solutions are lacking in both research and practice. This gap is further reflected in the difficulty of incorporating evaluation into the design of data-driven decision-making systems, since developers are left without theoretically grounded and practically feasible principles to guide implementation. In this paper, we propose a set of five design principles, explicated from theory and practice, for systems implementing data-driven decision evaluation as the output of design science research cycles. The design principles are: 1) multi-faceted evaluation criteria, 2) unified viewpoint, 3) collaborative rationality, 4) processual ex-post evaluation, and 5) adaptive feedback and learning loops. The principles are contextualized in the case of AI-enabled menu design at Antell, an innovative pioneer in the Finnish restaurant business, and subsequently evaluated by the project's development managers. Accordingly, the design principles contribute to the knowledge base on metahuman systems and data-driven decision evaluation by concretizing existing normative concepts into prescriptive knowledge, guiding future research, and generalizing towards a design theory. Furthermore, they provide implementable statements for designing and developing such systems in practice and can serve as a checklist for comparing and evaluating existing systems.