Abstract

Most pool-based active learning studies have focused on query strategies for active learning. In this paper, through an empirical analysis of the effect of passive learning performed before active learning begins, we reveal that the amount of data acquired by passive learning significantly affects the performance of active learning algorithms. In addition, we confirm that the optimal amount of data to acquire by passive learning depends on the given setting: network complexity, query strategy, and dataset. Inspired by these observations, we propose a method to automatically determine the starting point of active learning for a given setting. To this end, we suggest the entropy of sample-uncertainty as a measure of how well trained a target model is, and develop three empirical formulas to determine an appropriate entropy of sample-uncertainty that should be reached by passive learning before active learning starts. The effectiveness of the proposed method is validated by extensive experiments on popular image classification benchmarks and query strategies.
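The abstract does not define the entropy of sample-uncertainty precisely, but one plausible reading can be sketched as follows: compute a per-sample uncertainty (here, the predictive entropy of the model's softmax output) over the unlabeled pool, then take the entropy of the empirical distribution of those uncertainties. All names, the histogram binning, and the overall construction below are assumptions for illustration, not the paper's actual formulas.

```python
import numpy as np

def predictive_entropy(probs):
    # Per-sample uncertainty: entropy of the predicted class distribution.
    # probs has shape (N, C); returns shape (N,).
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def entropy_of_sample_uncertainty(probs, num_bins=10):
    # Hypothetical metric: histogram the per-sample uncertainties over the
    # pool, then take the entropy of that empirical distribution. A model
    # whose uncertainties are concentrated (e.g., uniformly confident)
    # yields low entropy; widely spread uncertainties yield high entropy.
    u = predictive_entropy(probs)
    hist, _ = np.histogram(u, bins=num_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return -np.sum(p * np.log(p))
```

Under this reading, a training loop could monitor `entropy_of_sample_uncertainty` on the pool after each passive (random-sampling) round and switch to the chosen query strategy once the metric crosses a threshold given by the paper's empirical formulas.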
