Abstract

Training data creation is increasingly a key bottleneck for developing machine learning systems, especially deep learning systems. Active learning provides a cost-effective means of creating training data by selecting the most informative instances for labeling. Labels in real applications are often collected through crowdsourcing, which engages online crowds to label data at scale. Despite the importance of crowdsourced data in the active learning process, an analysis of how existing active learning approaches behave over crowdsourced data is currently missing. This paper aims to fill this gap by reviewing existing active learning approaches and then testing a set of benchmark approaches on crowdsourced datasets. We provide a comprehensive and systematic survey of recent research on active learning in the hybrid human–machine classification setting, where crowd workers contribute (often noisy) labels either to directly classify data instances or to train machine learning models. We identify three categories of state-of-the-art active learning methods according to whether and how predefined querying strategies are employed for data sampling, namely fixed-strategy approaches, dynamic-strategy approaches, and strategy-free approaches. We then conduct an empirical study of their cost-effectiveness, showing that the performance of existing active learning approaches is affected by many factors in hybrid classification contexts, such as the noise level of the data, the label fusion technique used, and the specific characteristics of the task. Finally, we discuss challenges and identify potential directions for designing active learning strategies for hybrid classification problems.
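
A minimal sketch of this hybrid setting is given below. It is our illustration, not the authors' code: it assumes uncertainty sampling as the query strategy, a simulated three-worker crowd with a 20% error rate, and majority vote as the label fusion step; the dataset, seed-set size, and number of rounds are placeholder choices.

```python
# Sketch: uncertainty-sampling active learning with noisy crowd labels fused by majority vote.
# All numbers (dataset, workers, noise rate, rounds) are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = list(range(20))                                  # small seed set with clean labels
pool = [i for i in range(len(X)) if i not in labeled]
y_obs = {i: int(y_true[i]) for i in labeled}               # labels observed so far

def crowd_labels(i, n_workers=3, noise=0.2):
    """Simulate crowd votes: each worker flips the true label with probability `noise`."""
    flips = rng.random(n_workers) < noise
    return np.where(flips, 1 - y_true[i], y_true[i])

for _ in range(10):                                        # 10 active learning rounds
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[labeled], [y_obs[i] for i in labeled])

    # Uncertainty sampling: query the pool item whose predicted probability is closest to 0.5.
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]

    # Label fusion: majority vote over the simulated crowd answers.
    votes = crowd_labels(query)
    y_obs[query] = int(votes.sum() > len(votes) / 2)
    labeled.append(query)
    pool.remove(query)

clf = LogisticRegression(max_iter=1000).fit(X[labeled], [y_obs[i] for i in labeled])
print("accuracy of the trained model on the remaining pool:",
      clf.score(X[pool], y_true[pool]))
```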

Highlights

  • Despite remarkable advances in machine learning (ML), training data remains a key bottleneck for the successful application of ML techniques

  • We observed that hybrid classification improves the performance of active learning (AL) approaches over crowdsourced datasets

  • We report the results of an extensive experimental evaluation, providing insights on the performance of existing AL strategies in hybrid human–machine classification contexts

Summary

Introduction

Despite remarkable advances in machine learning (ML), training data remains a key bottleneck for the successful application of ML techniques. This paper reviews existing AL approaches and investigates their performance in the hybrid human–machine classification setting, where crowd workers contribute (often noisy) labels either to directly classify data instances or to train an ML model for classification. Many problems we face have a finite pool, where the set of items to classify is finite, and there is a trade-off between spending our budget or effort to train an ML model (using AL methods) versus spending that budget to directly classify items in the pool via the crowd, or using a combination of crowd and ML. To run this comparison, we developed a library of AL approaches, collecting implementations provided by the authors when available and re-implementing them when we could not find existing code.
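
The toy sketch below illustrates this finite-pool trade-off in the simplest possible terms. It is not the paper's formulation: the crowd accuracy of 0.90 and the model's learning curve are assumed numbers chosen only to show how the budget split between crowd labeling and model training changes the expected accuracy over the whole pool.

```python
# Toy model of the finite-pool trade-off: crowd-labeled items are classified by the
# crowd and also train a model that classifies the rest of the pool.
# `crowd_acc` and the learning curve are illustrative assumptions, not measured values.

def expected_pool_accuracy(n_crowd, pool_size, crowd_acc=0.90):
    """Expected accuracy over the pool when `n_crowd` items are crowd-labeled
    and a model trained on them labels the remainder."""
    # Assumed learning curve: model accuracy grows with training data, capped at 0.95.
    model_acc = min(0.95, 0.50 + 0.01 * n_crowd)
    n_machine = pool_size - n_crowd
    return (n_crowd * crowd_acc + n_machine * model_acc) / pool_size

pool_size = 1000
for n_crowd in (0, 50, 100, 200, 400):   # candidate ways to spend the labeling budget
    print(f"{n_crowd:4d} crowd labels -> expected accuracy "
          f"{expected_pool_accuracy(n_crowd, pool_size):.3f}")
```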

Active learning strategies: a review
Fixed-strategy approaches
Dynamic-strategy approaches
Strategy-free approaches
Dealing with noisy labels
Experimental work
Problem formulation
A new approach
AL approaches and ML classifier
Crowdsourcing scenarios
Evaluation scenarios
Label fusion methods
Metrics
Datasets
Evaluation of AL approaches
Results
Further analysis
Conclusions and open issues