Abstract

In multi-instance nonparallel plane learning (NPL), the training set consists of bags of instances, and the nonparallel planes are trained to classify the bags. Most existing multi-instance NPL methods are built on the twin support vector machine (TWSVM). Like TWSVM, they use only a single plane to characterize the data of each class and do not sufficiently consider the boundary information, which may limit their classification accuracy. In this article, we propose a multi-instance nonparallel tube learning (MINTL) method. Distinguished from the existing multi-instance NPL methods, MINTL embeds the boundary information into the classifier by learning a large-margin-based ϵ-tube for each class, so that the boundary information can be incorporated to refine the classifier and further improve performance. Specifically, given a K-class multi-instance dataset, MINTL seeks K ϵ-tubes, one for each class. In multi-instance learning, each positive bag contains at least one positive instance. To build the ϵ_k-tube of class k, we require that each bag of class k have at least one instance included in the ϵ_k-tube. Moreover, apart from the one instance required to lie in the ϵ_k-tube, the remaining instances in a positive bag may be positive or irrelevant, and their labels are unavailable. A large-margin constraint is introduced to assign each remaining instance either inside the ϵ_k-tube or outside it with a large margin. Extensive experiments on real-world datasets show that MINTL obtains significantly better classification accuracy than the existing multi-instance NPL methods.
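To make the abstract's description of the ϵ_k-tube concrete, the following is a minimal sketch of how the two kinds of constraints could be written for class k; it is an illustrative formulation, not the paper's actual model. The plane parameters w_k and b_k, the trade-off constants C_1, C_2, C_3, and the slack variables ξ_i, η_j are assumptions introduced only for this sketch.

\min_{w_k,\, b_k,\, \epsilon_k \ge 0}\;\; \tfrac{1}{2}\lVert w_k \rVert^2 + C_1\,\epsilon_k + C_2 \sum_i \xi_i + C_3 \sum_j \eta_j
\quad\text{s.t.}\quad \min_{x \in B_i} \lvert w_k^{\top} x + b_k \rvert \le \epsilon_k + \xi_i,\;\; \xi_i \ge 0
\qquad\text{(each bag } B_i \text{ of class } k \text{ places at least one instance inside the } \epsilon_k\text{-tube)}
\quad\phantom{\text{s.t.}}\quad \lvert w_k^{\top} x_j + b_k \rvert \le \epsilon_k \;\;\text{or}\;\; \lvert w_k^{\top} x_j + b_k \rvert \ge \epsilon_k + 1 - \eta_j,\;\; \eta_j \ge 0
\qquad\text{(each remaining unlabeled instance } x_j \text{ is assigned inside the tube or outside it with a margin)}

Under this reading, minimizing ϵ_k keeps the tube tight around the instances that represent class k, while the second constraint pushes the remaining instances to one side of the tube boundary with a unit margin, which is one way the boundary information described in the abstract could enter the classifier.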
