Abstract

In this article, we introduce a next-generation annotation tool called NOVA for emotional behaviour analysis, which implements a workflow that interactively incorporates the 'human in the loop'. A central feature of NOVA is its support for semi-supervised active learning, in which Machine Learning techniques are applied already during the annotation process to pre-label data automatically. Furthermore, NOVA implements recent eXplainable AI (XAI) techniques to provide users with both a confidence value for the automatically predicted annotations and visual explanations. In a user study with 53 participants, we investigate how such techniques can assist non-experts in terms of trust, perceived self-efficacy, and cognitive workload, as well as in forming correct mental models of the system. The results show that NOVA can easily be used by non-experts and leads to high computer self-efficacy. Furthermore, the results indicate that XAI visualisations help users build more accurate mental models of the machine learning system than the baseline condition does. Nevertheless, we suggest that explanations in the field of AI need to be focused more closely on user needs, as well as on the classification task and the model they are meant to explain.

Highlights

  • In this article, we propose a framework that allows non-experts in Machine Learning to apply AI techniques to their problem domain

  • Beyond building intuition, we provide so-called eXplainable AI (XAI) algorithms within the workflow that allow users to generate local post-hoc explanations for instances their model has predicted (see the sketch following this list). In this way, we combine interactive machine learning techniques and explainable AI algorithms to involve the human in the machine learning process, while at the same time giving control and transparency back to users

  • We show that interactive machine learning applications such as NOVA are helpful for tasks that involve non-experts in the process
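
To illustrate what such a local post-hoc explanation looks like in practice, here is a minimal sketch using the open-source lime package on tabular features with a generic scikit-learn classifier. This is not NOVA's actual XAI backend; the feature names, class names, and synthetic data are placeholder assumptions.

```python
# Minimal sketch of a local post-hoc explanation with LIME (assumptions:
# tabular features, a generic classifier; all names and data are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))           # stand-in feature vectors
y_train = (X_train[:, 0] > 0).astype(int)     # stand-in binary labels
feature_names = ["pitch", "energy", "mfcc_1", "mfcc_2"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["neutral", "emotional"],
    mode="classification",
)

# Explain a single predicted instance: which features pushed the model
# towards its decision, and by how much.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(exp.as_list())   # list of (feature condition, weight) pairs
```

The weighted feature conditions returned by `as_list()` are the kind of information a visual explanation can present to the annotator alongside the model's prediction.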


Introduction

In this article, we propose a framework that allows non-experts in Machine Learning to apply AI techniques to their problem domain. More precisely, we introduce a tool named NOVA that supports interdisciplinary researchers and end-users during the annotation of continuous multi-modal data by incorporating Machine Learning techniques already during the annotation process. In this way, users can interactively enhance their Machine Learning model by incrementally adding new data to the training set, while at the same time gaining a better understanding of the capabilities of their model. Beyond building intuition, we provide so-called eXplainable AI (XAI) algorithms within the workflow that allow users to generate local post-hoc explanations for instances their model has predicted. We thus combine interactive machine learning techniques and explainable AI algorithms to involve the human in the machine learning process, while at the same time giving control and transparency back to users. With this study we want to examine the following research questions: 1) How do people with little or no machine learning experience rate the interaction with the NOVA software?
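
To make the pre-labeling step of this workflow concrete, here is a minimal sketch (not the actual NOVA implementation) using scikit-learn. The function name `prelabel_session`, the confidence threshold, and the feature matrices are illustrative assumptions: a model trained on the segments a user has already annotated predicts labels and confidence values for the remaining segments, and only low-confidence segments are routed back to the human for review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def prelabel_session(X_labeled, y_labeled, X_unlabeled, threshold=0.8):
    """Pre-label unseen segments and flag uncertain ones for human review.

    Sketch only: NOVA's actual backend and feature extraction are not shown.
    """
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_labeled, y_labeled)

    proba = clf.predict_proba(X_unlabeled)
    y_pred = clf.classes_[np.argmax(proba, axis=1)]  # pre-labels shown to the user
    confidence = proba.max(axis=1)                   # per-segment confidence value

    needs_review = confidence < threshold            # the human corrects only these
    return y_pred, confidence, needs_review
```

After the user corrects the flagged segments, their labels move into the labeled pool and the model is retrained on the enlarged training set, closing the human-in-the-loop cycle described above.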
