Abstract

Multiple instance learning is a variation of supervised learning that handles the classification of collections of instances, called bags. Each bag contains a number of instances from which features are extracted. Under the standard multiple instance assumption, a positive bag contains at least one positive instance, whereas a negative bag comprises only negative instances. The complexity of multiple instance learning depends heavily on the number of instances in the training datasets. Since the instance space is usually large, it is important to design efficient instance selection techniques that speed up the training process without compromising performance. Firstly, this paper proposes a multiple instance learning model for the support vector machine based on grey relational analysis, which reduces the data size and gives a preliminary judgment of the importance of the instances in each bag. Secondly, this paper introduces an algorithm with a bag-representative selector that trains the support vector machine on bag-level information. Finally, this paper shows how to generalize the algorithm from binary multiple instance learning to multi-class tasks. The experimental study evaluates and compares the performance of our method against 8 state-of-the-art multiple instance methods over 10 datasets, and demonstrates that the proposed approach is competitive with state-of-the-art multiple instance learning methods.
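
To make the instance-selection idea concrete, the sketch below shows one common way grey relational analysis can score the instances in a bag against a reference pattern so that low-scoring instances can be filtered before SVM training. This is only an illustrative assumption: the reference vector (here the bag mean), the distinguishing coefficient rho = 0.5, and the function name grey_relational_grades are hypothetical choices for the example and are not taken from the paper.

```python
import numpy as np

def grey_relational_grades(instances, reference, rho=0.5):
    """Score each instance in a bag by its grey relational grade to a
    reference vector. Higher grades indicate instances closer to the
    reference pattern; such scores can be used to rank or prune
    instances before training the SVM (illustrative sketch only)."""
    X = np.asarray(instances, dtype=float)    # shape (n_instances, n_features)
    ref = np.asarray(reference, dtype=float)  # shape (n_features,)
    delta = np.abs(X - ref)                   # absolute differences to the reference
    d_min, d_max = delta.min(), delta.max()   # global extrema over all instances/features
    # Grey relational coefficient per feature, averaged into one grade per instance.
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)

# Toy bag of three two-dimensional instances, scored against the bag mean.
bag = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
grades = grey_relational_grades(bag, reference=np.mean(bag, axis=0))
print(grades)  # instances with low grades could be dropped to shrink the training set
```

The same scores could also feed a bag-representative selector, e.g. by keeping only the top-ranked instance of each bag as its bag-level representative, though the paper's exact selection rule is not reproduced here.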
