Abstract

Active learning is a useful tool for in-situ learning and adaptive classification systems. While traditional active learning focuses mostly on the single-sample mode, the batch mode of active learning is more interaction-efficient. This paper proposes a computationally efficient approach for maximizing the joint entropy of a batch of samples, thereby attaining the maximal information gain while minimizing information redundancy. Combined with an incremental random forest, an efficient active learning algorithm is developed. The algorithm is applied to adaptive classification of underwater mines and exhibits superior performance over the naive batch mode of active learning. Performance evaluation results on public machine learning datasets are also reported.
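The sketch below is a minimal illustration, not the paper's exact algorithm: it approximates batch selection by joint entropy with a greedy loop that scores pool samples by the predictive entropy of a random forest and penalizes redundancy with already-selected samples. The function names (select_batch, entropy) and the cosine-similarity redundancy proxy are assumptions introduced here for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def entropy(p, eps=1e-12):
    """Shannon entropy of each row of class probabilities."""
    return -np.sum(p * np.log(p + eps), axis=1)

def select_batch(model, X_pool, batch_size):
    """Greedily pick `batch_size` pool samples: start from the most
    uncertain one, then prefer samples that are uncertain and
    dissimilar (low redundancy) to those already chosen."""
    proba = model.predict_proba(X_pool)
    scores = entropy(proba)
    chosen = [int(np.argmax(scores))]
    while len(chosen) < batch_size:
        # Redundancy proxy: cosine similarity to already-selected samples.
        sel = X_pool[chosen]
        sims = X_pool @ sel.T
        norms = (np.linalg.norm(X_pool, axis=1, keepdims=True)
                 * np.linalg.norm(sel, axis=1))
        redundancy = (sims / (norms + 1e-12)).max(axis=1)
        gain = scores - redundancy
        gain[chosen] = -np.inf  # never re-pick a sample
        chosen.append(int(np.argmax(gain)))
    return chosen

# Usage: fit on a labelled seed set, query a batch of pool indices,
# have the batch labelled, then refit and repeat.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)
X_pool = rng.normal(size=(200, 5))
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_lab, y_lab)
batch_idx = select_batch(forest, X_pool, batch_size=5)
print(batch_idx)

The greedy penalty stands in for the joint-entropy computation described in the abstract; the paper's computationally efficient formulation and its incremental random forest are not reproduced here.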
