Abstract

To adapt linear discriminant analysis (LDA) to real-world applications, it must be equipped with an incremental learning ability to integrate knowledge arriving in one-pass data streams, a merging capability that joins multiple LDA models so that knowledge can be shared efficiently between independent learning agents, and a forgetting mechanism that avoids rebuilding the entire discriminant eigenspace when irregular changes occur. To this end, we introduce two adaptive LDA learning methods, LDA merging and LDA splitting, which offer the following merits: online learning from one-pass data streams, class separability identical to that of batch learning, efficient knowledge sharing owing to the condensed eigenspace-model representation, and lower time and storage costs than traditional approaches under common application conditions. These properties are validated by experiments on a benchmark face image dataset. A case study applying the proposed methods to multi-agent cooperative learning and to system alternation of a face recognition system further demonstrates their adaptability to complex dynamic learning tasks.
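The abstract does not describe the merging procedure itself, and the paper's method operates on condensed eigenspace models rather than full scatter matrices. As a rough, hedged illustration of the general idea of combining two independently trained LDA models without revisiting raw data, the sketch below merges per-class sufficient statistics (counts, means, within-class scatters) and recomputes the discriminant directions; all function names are hypothetical and this is not the authors' algorithm.

```python
import numpy as np


def class_stats(X, y):
    """Per-class sufficient statistics: count, mean, within-class scatter."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        m = Xc.mean(axis=0)
        S = (Xc - m).T @ (Xc - m)  # within-class scatter of this data chunk
        stats[c] = (len(Xc), m, S)
    return stats


def merge_stats(A, B):
    """Merge two sets of per-class statistics without revisiting raw data."""
    merged = dict(A)
    for c, (nb, mb, Sb) in B.items():
        if c not in merged:
            merged[c] = (nb, mb, Sb)
            continue
        na, ma, Sa = merged[c]
        n = na + nb
        m = (na * ma + nb * mb) / n
        d = (ma - mb).reshape(-1, 1)
        # exact pooled within-class scatter of the combined chunks
        S = Sa + Sb + (na * nb / n) * (d @ d.T)
        merged[c] = (n, m, S)
    return merged


def lda_from_stats(stats, n_components):
    """Discriminant directions as leading eigenvectors of Sw^{-1} Sb."""
    dims = next(iter(stats.values()))[1].shape[0]
    n_total = sum(n for n, _, _ in stats.values())
    global_mean = sum(n * m for n, m, _ in stats.values()) / n_total
    Sw = sum(S for _, _, S in stats.values())
    Sb = np.zeros((dims, dims))
    for n, m, _ in stats.values():
        d = (m - global_mean).reshape(-1, 1)
        Sb += n * (d @ d.T)
    # small ridge keeps Sw invertible for high-dimensional data
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(dims), Sb))
    order = np.argsort(-evals.real)[:n_components]
    return evecs[:, order].real
```

Under this simplified formulation, `lda_from_stats(merge_stats(class_stats(X1, y1), class_stats(X2, y2)), k)` yields the same discriminant subspace as batch LDA on the concatenation of the two chunks, which mirrors the "class separability identical to batch learning" property claimed in the abstract, though the eigenspace-model representation used in the paper is more compact than the full scatter matrices kept here.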
