Abstract

This paper introduces a family of quasi-linear discriminants that outperform current large-margin methods in sliding-window visual object detection and open set recognition tasks. In these applications, the classification problems are both numerically imbalanced (positive, i.e., object-class, training and test windows are much rarer than negative, non-class ones) and geometrically asymmetric (the positive samples typically form compact, visually coherent groups, while the negatives are far more diverse, including anything at all that is not a well-centered sample from the target class). Such tasks call for discriminants whose decision regions tightly circumscribe the positive class while still accounting for negatives in zones where the two classes overlap. To this end, we propose a family of quasi-linear "polyhedral conic" discriminants whose positive regions are distorted L1 or L2 balls. In addition, we integrate the proposed classification loss into deep neural networks so that the features and the classifier can be learned simultaneously in an end-to-end fashion, improving classification accuracy. The methods have properties and run-time complexities comparable to linear Support Vector Machines (SVMs), and they can be trained from either binary or positive-only samples using constrained quadratic programs related to SVMs. Our experiments show that they significantly outperform linear SVMs, deep neural networks trained with the softmax loss, and existing one-class discriminants on a wide range of object detection, face verification, open set recognition, and conventional closed-set classification tasks.
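To make the geometry concrete, the sketch below evaluates a polyhedral conic decision function of the general form f(x) = w^T(x - c) + gamma * ||x - c||_1 - b, whose sublevel set {x : f(x) < 0} is a distorted L1 ball around a center c. This is a minimal illustration of the function family the abstract describes, not the paper's trained classifier; all parameter values and names here are assumptions for demonstration.

```python
import numpy as np

def polyhedral_conic_score(x, w, gamma, c, b):
    """Quasi-linear 'polyhedral conic' discriminant (illustrative form):
        f(x) = w^T (x - c) + gamma * ||x - c||_1 - b
    Negative scores fall inside a distorted L1 ball centered at c,
    so the positive class is tightly circumscribed while the decision
    boundary remains piecewise linear (SVM-like run-time cost)."""
    d = x - c
    return float(w @ d + gamma * np.sum(np.abs(d)) - b)

# With w = 0, gamma = 1, c = 0, b = 1 the acceptance region is the
# unit L1 ball: points inside score negative, points outside positive.
w = np.zeros(2)
c = np.zeros(2)
inside = polyhedral_conic_score(np.array([0.2, 0.2]), w, 1.0, c, 1.0)
outside = polyhedral_conic_score(np.array([2.0, 0.0]), w, 1.0, c, 1.0)
```

A nonzero w tilts the ball, and per-coordinate weights on the L1 term (as in the paper's distorted-ball variants) would stretch it along individual axes.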
