Abstract

Machine Teaching (MT) is an emerging practice in which people without Machine Learning (ML) expertise provide rich information beyond labels in order to create ML models. MT promises to lower the barrier to creating ML models by demanding a softer set of skills from users than ML expertise. In this paper, we explore and show how end-users without MT experience successfully build ML models using the MT process, achieving results not far behind those of MT experts. We do this by conducting two studies. We first investigated how MT experts build models, from which we extracted expert teaching patterns. In our second study, we observed end-users without MT experience create ML models with and without guidance from expert patterns. We found that all users built models comparable to those built by MT experts. Further, we observed that users who received guidance perceived the task to require less effort and felt less mental demand than those who did not receive guidance.

Highlights

  • Over the past decades, the Machine Learning (ML) field has devoted its attention to the study of algorithms that extract knowledge from data

  • We focus on exploring and showing how the Machine Teaching (MT) process enables end-users (MT and ML novices) to construct ML models comparable to those built by MT experts through two studies in which participants construct binary classification models of text documents

Summary

Introduction

The Machine Learning (ML) field has devoted its attention to the study of algorithms that extract knowledge from data. It is common today to hear about solutions where machines make predictions with almost-human, or better-than-human, precision. This success comes at a cost: building these ML models requires an expert model builder with knowledge of the underlying learning algorithm. Our work takes place in the context of a supervised iML workflow where people build ML models by providing knowledge in an interactive loop [1, 12]. In this flow, it is worth noting three main activities through which people can actively express knowledge: choosing what to label (sampling), labeling, and featuring. Our work looks at the iML loop holistically, where the above three activities occur in concert, and with a deeper emphasis on human interaction. We do this within the framing of machine teaching, as defined by [34], a point of view we will expand on.
