Abstract

Multiple-instance learning (MIL) is a variant of traditional supervised learning. In MIL, training examples are bags of instances, and labels are associated with bags rather than with individual instances. The standard MIL assumption states that a bag is labeled positive if at least one of its instances is positive, and negative otherwise. However, many MIL problems do not satisfy this assumption but rather a more general one: the class of a bag is jointly determined by multiple instances in the bag. To solve such problems, the authors of MILD proposed an efficient disambiguation method that identifies the most discriminative instances in training bags and then converts MIL into standard supervised learning. Nevertheless, MILD does not consider the generalization ability of its disambiguation method, which leads to inferior performance compared with other baselines. In this paper, we improve MILD by evaluating the discrimination of its disambiguation method on a validation set. We have performed extensive experiments on drug activity prediction and region-based image categorization tasks. The experimental results demonstrate that the improved MILD outperforms similar MIL algorithms by taking the generalization capability of its disambiguation method into account.
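The standard MIL assumption described above can be sketched as a small predicate (a minimal illustration; the bag data and function name here are hypothetical, not from the paper):

```python
from typing import List

def bag_label_standard(instance_labels: List[int]) -> int:
    """Standard MIL assumption: a bag is labeled positive (1) if at
    least one of its instances is positive; otherwise negative (0)."""
    return 1 if any(lbl == 1 for lbl in instance_labels) else 0

# Hypothetical toy bags: each bag is a list of instance labels.
positive_bag = [0, 0, 1]  # one positive instance -> bag is positive
negative_bag = [0, 0, 0]  # all instances negative -> bag is negative

print(bag_label_standard(positive_bag))  # 1
print(bag_label_standard(negative_bag))  # 0
```

Under the more general assumption the paper addresses, the bag label would instead depend jointly on multiple instances, so a simple existence check like this no longer suffices.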
