Abstract

In structured prediction problems, outputs are not confined to binary labels; they are often complex objects such as sequences, trees, or alignments. Support Vector Machine (SVM) methods have been successfully extended to such prediction problems. However, recent developments in large margin methods show that higher-order information can be exploited for even better generalization. This article first points out a shortcoming of the SVM approach to structured prediction; an efficient formulation is then presented to overcome the problem. The proposed algorithm exploits the fact that both the minimum and the maximum of the quantities of interest are often efficiently computable, even though quantities such as the mean, median, and variance may not be. The resulting formulation produces state-of-the-art performance on sequence learning problems. Dramatic improvements are also seen on multi-class problems.
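To make the tractability contrast concrete, the sketch below is a minimal illustration (not taken from the article; the chain-structured model, the score arrays, and the function name max_sequence_score are all assumptions made here for exposition). It computes the maximum score over all K^T label sequences of a simple chain model by dynamic programming; the minimum follows from the same recursion with max replaced by min, whereas an order statistic such as the median of the score over all sequences has no analogous decomposition and would, in general, require enumeration.

```python
import numpy as np

def max_sequence_score(node_scores, trans_scores):
    """Maximum total score over all label sequences of a chain model,
    computed by Viterbi-style dynamic programming in O(T * K^2) time.

    node_scores:  (T, K) array, score of assigning label k at position t
    trans_scores: (K, K) array, score of moving from label i to label j
    """
    T, K = node_scores.shape
    # best[j] = best score of any labeling of positions 0..t ending in label j
    best = node_scores[0].copy()
    for t in range(1, T):
        # maximize best[i] + trans_scores[i, j] over the previous label i
        best = (best[:, None] + trans_scores).max(axis=0) + node_scores[t]
    return best.max()

# The minimum is obtained by the same recursion with max replaced by min.
# By contrast, the median of the score over all K**T labelings admits no
# comparable dynamic program here and would in general require enumerating
# every sequence.
rng = np.random.default_rng(0)
T, K = 6, 3
node = rng.normal(size=(T, K))
trans = rng.normal(size=(K, K))
print(max_sequence_score(node, trans))
```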
