Abstract

Real-time prediction problems pose a challenge to machine learning algorithms because learning must be fast, the set of classes may be changing, and the relevance of some features to each class may be changing. To learn robust classifiers in such nonstationary environments, it is essential not to assign too much weight to any single feature. We address this problem by combining regularization mechanisms with online large-margin learning algorithms. We prove bounds on their error and show that removing features with small weights has little influence on prediction accuracy, suggesting that these methods exhibit feature selection ability. We show that such regularized learning algorithms automatically decrease the influence of older training instances and focus on the more recent ones. This makes them especially attractive in dynamic environments. We evaluate our algorithms through experimental results on real data sets and through experiments with an online activity recognition system. The results show that these regularized large-margin methods adapt more rapidly to changing distributions and achieve lower overall error rates than state-of-the-art methods. Copyright © 2009 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 2: 328-345, 2009.
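The abstract does not spell out an update rule, but the combination it describes can be illustrated with a minimal sketch: an online large-margin correction, a regularization step that shrinks all weights, and truncation of near-zero weights as a form of feature selection. The passive-aggressive-style step size, the multiplicative shrink factor, and the truncation threshold eps below are illustrative assumptions, not the paper's exact algorithms.

import numpy as np

def regularized_large_margin_update(w, x, y, C=1.0, shrink=0.999, eps=1e-3):
    # Hypothetical single online update combining the abstract's three
    # ingredients: L2-style multiplicative shrinkage, a large-margin
    # (passive-aggressive-style) correction, and small-weight truncation.
    w = shrink * w                                  # decay all weights toward zero
    loss = max(0.0, 1.0 - y * np.dot(w, x))         # hinge loss on example (x, y)
    if loss > 0.0:
        step = min(C, loss / (np.dot(x, x) + 1e-12))  # PA-I-style clipped step size
        w = w + step * y * x                        # large-margin correction
    w[np.abs(w) < eps] = 0.0                        # drop near-zero features
    return w

# Toy usage: only the first two of twenty features carry signal.
rng = np.random.default_rng(0)
w = np.zeros(20)
for _ in range(2000):
    x = rng.normal(size=20)
    y = 1.0 if x[0] + x[1] > 0.0 else -1.0
    w = regularized_large_margin_update(w, x, y)

Because the shrinkage is multiplicative, the contribution of each past update decays geometrically, which is one way a regularized online learner ends up focusing on recent instances as the abstract claims; the truncation step is what gives the sketch its feature-selection behavior.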


