Abstract

In this paper, we review existing radius-incorporated Multiple Kernel Learning (MKL) algorithms, exploring their similarities and differences to provide a deeper understanding of them. Our analysis reveals that traditional margin-based MKL algorithms also take an approximate radius into account implicitly, through base kernel normalization. We perform experiments to systematically compare a number of recently developed MKL algorithms, including radius-incorporated, margin-based, and discriminative variants, on four MKL benchmark data sets: Protein Subcellular Localization, Protein Fold Prediction, Oxford Flower17, and Caltech101. Classification performance is measured by classification accuracy and mean average precision. Overall, we observe that radius-incorporated MKL algorithms achieve significant improvements over their counterparts in classification performance.
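
To illustrate the base kernel normalization referred to above, here is a minimal Python sketch of two standard normalizations (the function names and the toy usage are our own illustrative choices, not taken from the paper): spherical (unit-diagonal) normalization, which places every mapped point on the unit sphere so the enclosing-ball radius is bounded by 1, and trace normalization, which fixes a trace-based radius proxy across base kernels.

```python
import numpy as np

def spherical_normalize(K):
    """Unit-diagonal ("spherical") normalization: K_ij / sqrt(K_ii * K_jj).

    All mapped points end up on the unit sphere in feature space, so the
    radius of their minimum enclosing ball is bounded by 1 -- one way base
    kernel normalization implicitly controls the radius.
    """
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def trace_normalize(K):
    """Scale the kernel matrix to unit trace.

    tr(K)/n is the mean squared feature-space norm, sometimes used as a
    rough proxy for the squared enclosing-ball radius, so fixing the trace
    roughly equalizes this proxy across base kernels.
    """
    return K / np.trace(K)

# Usage sketch: normalize each base kernel before combining them in MKL.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 3))
    K = X @ X.T                              # a toy linear base kernel
    print(np.diag(spherical_normalize(K)))   # all ones
    print(np.trace(trace_normalize(K)))      # 1.0
```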
