Abstract

Since real-world data sets usually contain a large number of instances, it is important to develop efficient and effective multiple instance learning (MIL) algorithms. As a learning paradigm, MIL differs from traditional supervised learning in that it handles the classification of bags comprising unlabeled instances. In this paper, a novel efficient method based on the extreme learning machine (ELM) is proposed to address the MIL problem. First, the most qualified instance in each bag is selected by a single hidden layer feedforward network (SLFN) whose input and output weights are both randomly initialized, and each bag is then represented by its selected instance. Second, the modified ELM model is trained on the selected instances to update the output weights. Experiments on several benchmark data sets and multiple instance regression data sets show that ELM-MIL achieves good performance; moreover, it runs several times or even hundreds of times faster than comparable MIL algorithms.
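The two-step procedure described in the abstract — score instances with a randomly initialized SLFN, keep one representative instance per bag, then solve the ELM output weights in closed form — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the selection criterion (the instance whose score is closest to the bag label), the sigmoid activation, and all function and parameter names are assumptions.

```python
import numpy as np

def elm_mil_sketch(bags, labels, n_hidden=50, seed=0):
    """Illustrative sketch of ELM-MIL (names and selection rule assumed).

    bags   : list of (n_i, d) arrays, one array of instances per bag
    labels : bag-level labels (scalars)
    """
    rng = np.random.default_rng(seed)
    d = bags[0].shape[1]
    # Random input weights and biases, fixed as in standard ELM.
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    beta0 = rng.standard_normal(n_hidden)  # random initial output weights

    def hidden(X):
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden layer

    # Step 1: score every instance with the randomly initialized SLFN and,
    # for each bag, keep the instance whose score is closest to the bag
    # label (one plausible reading of "most qualified instance").
    selected = []
    for X, y in zip(bags, labels):
        scores = hidden(X) @ beta0
        selected.append(X[np.argmin(np.abs(scores - y))])
    S = np.vstack(selected)

    # Step 2: update the output weights in closed form via the
    # Moore-Penrose pseudoinverse, as in standard ELM training.
    beta = np.linalg.pinv(hidden(S)) @ np.asarray(labels, dtype=float)

    def predict_bag(X):
        # Bag score: maximum instance score (a common MIL aggregation rule).
        return float(np.max(hidden(X) @ beta))

    return predict_bag

# Usage: two toy bags with well-separated instance distributions.
rng = np.random.default_rng(0)
bags = [rng.standard_normal((4, 3)) + 2.0, rng.standard_normal((5, 3)) - 2.0]
f = elm_mil_sketch(bags, [1.0, 0.0], n_hidden=20)
```

Because the output weights are obtained by a single pseudoinverse rather than iterative gradient descent, training cost is dominated by one least-squares solve over the selected instances, which is the source of the speedup the abstract reports.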

Highlights

  • Multiple instance learning (MIL) was first developed to solve the problem of drug activity prediction [1]

  • Compared with back-propagation (BP)-MIP and Diverse Density, extreme learning machine (ELM)-MIL is superior in terms of both performance and training time. These results indicate that ELM-MIL is an efficient and effective approach to multiple instance regression tasks

  • A novel multiple instance learning algorithm is proposed based on extreme learning machine

Introduction

Multiple instance learning (MIL) was first developed to solve the problem of drug activity prediction [1]. The well-known Diverse Density (DD) algorithm [13] measures the co-occurrence of similar instances from different positive bags. Andrews et al. [8] applied support vector machines (SVM) to the MIL problem in an approach called MI-SVM, which chooses a maximal margin hyperplane for the bags based on the margin of the most positive instance in each bag. Wang and Zucker [14] proposed two variants of the k-nearest neighbor algorithm, namely Bayesian-kNN and Citation-kNN, which exploit the k-neighbors at both the instance level and the bag level. However, most multiple instance learning algorithms take a long time to train.
