Abstract

In object classification, feature combination is often used to exploit the strengths of multiple complementary features and produce better classification results than any single feature alone. While multiple kernel learning (MKL) is a popular approach to feature combination in object classification, it does not always perform well in practical applications. On the one hand, the optimization process in MKL usually consumes a large amount of computation and memory. On the other hand, MKL is sometimes found to perform no better than baseline combination methods. These observations motivate us to investigate the underlying mechanism of feature combination through average combination and weighted average combination. We empirically find that in average combination it is better to use a subset of the most powerful features rather than all of them, whereas in one type of weighted average combination the best classification accuracy comes from a nearly sparse combination. We integrate these observations into the k-nearest neighbors (kNN) framework, based on which we further discuss issues related to sparse solutions and MKL. Finally, using the kNN framework, we present a new weighted average combination method that experiments show to outperform MKL in both accuracy and efficiency. We believe this work is helpful in exploring the mechanism underlying feature combination.
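To make the two combination schemes concrete, below is a minimal Python sketch of average and weighted average feature combination inside a kNN classifier. This is not the paper's actual method: the assumption that each feature induces its own Euclidean distance matrix, the mean-based normalization, the function names, the toy data, and the 0.9/0.1 weighting are all illustrative choices made here for exposition.

```python
import numpy as np

def pairwise_dist(X_test, X_train):
    """Euclidean distances between test and train samples for one feature."""
    diff = X_test[:, None, :] - X_train[None, :, :]   # (n_test, n_train, dim)
    return np.sqrt((diff ** 2).sum(axis=2))

def combine_distances(dist_list, weights=None):
    """Average (weights=None) or weighted average combination of per-feature
    distance matrices, each rescaled so features contribute on a comparable scale."""
    normed = [D / (D.mean() + 1e-12) for D in dist_list]
    if weights is None:                                # plain average combination
        weights = np.ones(len(normed)) / len(normed)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * D for w, D in zip(weights, normed))

def knn_classify(combined_dist, y_train, k=5):
    """Majority vote over the k nearest training samples."""
    idx = np.argsort(combined_dist, axis=1)[:, :k]     # k nearest per test sample
    votes = y_train[idx]
    return np.array([np.bincount(row).argmax() for row in votes])

# Toy usage: two synthetic "features" per sample (all values illustrative).
rng = np.random.default_rng(0)
Xtr_f1, Xtr_f2 = rng.normal(size=(40, 8)), rng.normal(size=(40, 16))
Xte_f1, Xte_f2 = rng.normal(size=(10, 8)), rng.normal(size=(10, 16))
y_train = rng.integers(0, 3, size=40)

dists = [pairwise_dist(Xte_f1, Xtr_f1), pairwise_dist(Xte_f2, Xtr_f2)]
avg_pred = knn_classify(combine_distances(dists), y_train)
# A nearly sparse weighting concentrates mass on the strongest feature,
# mirroring the abstract's observation about weighted average combination.
wavg_pred = knn_classify(combine_distances(dists, weights=[0.9, 0.1]), y_train)
```

In this sketch, dropping a weak feature's distance matrix from `dist_list` corresponds to the abstract's finding that averaging only a subset of the most powerful features can beat averaging all of them.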
