Abstract

Classification systems are often designed using a limited amount of data from complex and changing pattern recognition environments. In applications where new reference samples become available over time, adaptive multi-classifier systems (AMCSs) are desirable for updating class models. In this paper, an incremental learning strategy based on an aggregated dynamical niching particle swarm optimization (ADNPSO) algorithm is proposed to efficiently evolve heterogeneous classifier ensembles in response to new reference data. This strategy is applied to an AMCS where all parameters of a pool of fuzzy ARTMAP (FAM) neural network classifiers, each one corresponding to a PSO particle, are co-optimized such that both error rate and network size are minimized. To sustain a high level of accuracy while minimizing the computational complexity, the AMCS integrates information from multiple diverse classifiers, where learning is guided by the ADNPSO algorithm that optimizes networks according to both these objectives. Moreover, FAM networks are evolved to maintain (1) genotype diversity of solutions around local optima in the optimization search space, and (2) phenotype diversity in the objective space. Using local Pareto optimality, networks are then stored in an archive to create a pool of base classifiers among which cost-effective ensembles are selected on the basis of accuracy, and both genotype and phenotype diversity. Performance of the ADNPSO strategy is compared against AMCSs where learning of FAM networks is guided through mono- and multi-objective optimization, and assessed under different incremental learning scenarios, where new data is extracted from real-world video streams for face recognition. Simulation results indicate that the proposed strategy provides a level of accuracy that is comparable to that of using mono-objective optimization, yet requires only a fraction of its resources.
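To make the bi-objective setup concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how a swarm of candidate FAM configurations might be evaluated on the two objectives named above (error rate and network size), filtered into a Pareto archive of base classifiers, and then used for ensemble selection. The names `Particle`, `evaluate_fam`, `dominates`, `update_archive`, and `select_ensemble` are hypothetical; the stub `evaluate_fam` stands in for actually training a fuzzy ARTMAP network, and the ADNPSO swarm dynamics, niching, and genotype/phenotype diversity measures from the paper are deliberately omitted.

```python
# Illustrative sketch of bi-objective evaluation, Pareto archiving, and
# ensemble selection in the spirit of the strategy described in the abstract.
# The FAM training step is replaced by a hypothetical stub.

import random
from dataclasses import dataclass


@dataclass
class Particle:
    """One PSO particle encoding candidate FAM hyperparameters."""
    position: list            # candidate hyperparameter vector
    error_rate: float = 1.0   # objective 1: classification error (minimize)
    network_size: int = 0     # objective 2: number of network nodes (minimize)


def evaluate_fam(position):
    """Hypothetical stand-in for training/validating a fuzzy ARTMAP network.

    A real implementation would train a FAM classifier with the
    hyperparameters in `position` on the new data block and measure both
    objectives on validation data; here both values are synthetic.
    """
    error = sum(abs(x - 0.5) for x in position) / len(position)
    size = int(10 + 50 * sum(position) / len(position))
    return error, size


def dominates(a, b):
    """True if particle `a` Pareto-dominates particle `b` (both objectives minimized)."""
    no_worse = a.error_rate <= b.error_rate and a.network_size <= b.network_size
    better = a.error_rate < b.error_rate or a.network_size < b.network_size
    return no_worse and better


def update_archive(archive, candidate, capacity=20):
    """Keep only non-dominated solutions; these form the pool of base classifiers."""
    if any(dominates(other, candidate) for other in archive):
        return archive
    archive = [p for p in archive if not dominates(candidate, p)]
    archive.append(candidate)
    return archive[:capacity]


def select_ensemble(archive, size=5):
    """Greedy accuracy-first selection; the paper also weighs genotype and
    phenotype diversity, which this sketch omits."""
    return sorted(archive, key=lambda p: p.error_rate)[:size]


if __name__ == "__main__":
    random.seed(0)
    swarm = [Particle([random.random() for _ in range(4)]) for _ in range(12)]
    archive = []
    for particle in swarm:  # one evaluation pass over the swarm
        particle.error_rate, particle.network_size = evaluate_fam(particle.position)
        archive = update_archive(archive, particle)
    ensemble = select_ensemble(archive)
    print(f"archive size: {len(archive)}, ensemble error rates: "
          f"{[round(p.error_rate, 3) for p in ensemble]}")
```

In the actual strategy, this evaluation and archiving cycle would be repeated each time a new block of reference data becomes available, with the ADNPSO dynamics moving particles toward multiple local optima rather than a single global one.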
