Abstract

There has been growing interest in reducing the test-time complexity of multi-class classification problems with large numbers of classes. The key idea is to reduce the number of classifier evaluations used to predict labels. State-of-the-art methods typically employ a label-tree approach, which suffers from the well-known error-propagation problem and is difficult to parallelize for further speedup. We propose another practical approach with the same goal: using a small number of classifiers to achieve a good trade-off between testing efficiency and classification accuracy. The proposed method analyzes the correlation among classes, suppresses redundancy, and generates a small number of classifiers that best approximate the prediction scores of the original large number of classes. Unlike label-tree methods, in which each test example follows a different traversal path from the root to a leaf node and therefore invokes a different set of classifiers each time, the proposed method applies the same set of classifiers to all test examples. As a result, it is much more efficient in practice, even when using the same number of classifier evaluations as label-tree methods. Experiments on several large datasets, including ILSVRC2010-1K, SUN-397, and Caltech-256, demonstrate the efficiency of our method.
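To make the core idea concrete, below is a minimal sketch of one plausible instantiation, assuming the method reduces to a low-rank factorization of the learned classifier weight matrix. Here a truncated SVD stands in for the paper's redundancy-suppressing step; the names W, U, C and the dimensions D, K, k are illustrative assumptions, not the authors' actual algorithm. The key property it demonstrates is that the same k shared classifiers are evaluated for every test example, with per-class scores recovered by a cheap recombination.

    import numpy as np

    # Hypothetical setup: W is the D x K weight matrix of K linear
    # classifiers (one column per class), learned beforehand.
    rng = np.random.default_rng(0)
    D, K, k = 512, 1000, 50          # feature dim, #classes, #shared classifiers (k << K)
    W = rng.standard_normal((D, K))
    x = rng.standard_normal(D)       # a test feature vector

    # Truncated SVD gives the rank-k factorization W ~= U @ C that minimizes
    # Frobenius-norm error: U holds k shared classifiers, C holds the
    # per-class recombination weights.
    U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
    U = U_full[:, :k] * s[:k]        # D x k shared classifiers (singular values folded in)
    C = Vt[:k, :]                    # k x K recombination weights

    # Test time: k classifier evaluations instead of K, the same set for
    # every test example, followed by an O(kK) recombination.
    shared_scores = U.T @ x          # k dot products of dimension D
    approx_scores = C.T @ shared_scores
    exact_scores = W.T @ x

    print(np.argmax(approx_scores), np.argmax(exact_scores))

Because every test example reuses the same U, the k evaluations can be batched or parallelized trivially, which is the contrast with label-tree methods the abstract draws: a tree picks a different classifier subset per example, so its evaluations cannot be shared across the test set.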
