In recent developments in image classification, deep neural networks (DNNs) have attracted considerable scholarly interest and are typically trained on data in closed environments. Such training contrasts sharply with the inherently open, progressive, and adaptive processes of the natural visual system, giving rise to emergent challenges. Notable among these is catastrophic forgetting, where the network's acquisition of new class information erodes previously established knowledge. The network also faces the stability-plasticity dilemma, requiring a delicate equilibrium between assimilating novel classes and retaining existing ones. To address these issues, we propose a novel incremental learning model, termed Adaptive Parameter Multiplexing (APM), which incorporates a cross-class parameter-adaptive incremental strategy. Central to our methodology is the formulation of the choice between parameter multiplexing and parameter increment as a learnable optimization problem, enabling the model to autonomously evaluate and decide on the necessity of parameter adjustment throughout its training lifecycle. This framework is designed to enhance the network's ability to extract features for new classes through incremental parameters, while simultaneously employing parameter multiplexing to improve storage efficiency. Our model is underpinned by a dual strategy of coarse-grained and fine-grained parameter multiplexing, guided by a learnable score that dynamically assesses the appropriateness of parameter multiplexing versus incremental updates, facilitating an optimized balance between incremental model performance and storage. In addition, we integrate a novel regularization loss on the learnable score to further optimize storage efficiency.
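As a minimal, illustrative sketch of the core idea (not the paper's actual implementation — all names, shapes, and the blending form are assumptions), a layer can hold frozen reused weights and candidate incremental weights, combine them through a sigmoid gate over a learnable score, and penalize the gate so that new parameters are allocated only when reuse is insufficient:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AdaptiveLayer:
    """Toy layer blending reused (frozen) and incremental (new) weights
    via a learnable multiplexing score. Hypothetical sketch of the
    multiplex-vs-increment decision, not the authors' implementation."""

    def __init__(self, w_old, score_init=0.0, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.w_old = w_old  # frozen weights reused from previous classes
        self.w_new = 0.01 * rng.standard_normal(w_old.shape)  # incremental weights
        self.score = score_init  # learnable score deciding reuse vs. increment

    def forward(self, x):
        g = sigmoid(self.score)  # gate in (0, 1): 0 -> pure reuse, 1 -> pure increment
        w = (1.0 - g) * self.w_old + g * self.w_new
        return x @ w

    def storage_penalty(self, lam=1e-2):
        # Regularization on the score: pushes the gate toward reuse (g -> 0),
        # so incremental parameters are only kept when training favors them.
        return lam * sigmoid(self.score)

# Usage: with score_init=0.0 the gate starts at 0.5, i.e. an even blend.
layer = AdaptiveLayer(np.eye(3))
y = layer.forward(np.ones((1, 3)))  # shape (1, 3)
loss_extra = layer.storage_penalty()
```

In a real incremental step, `score` and `w_new` would be trained jointly with the classification loss; a coarse-grained variant would share one score per layer, a fine-grained one a score per channel or weight group.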
The effectiveness of APM is empirically validated through rigorous testing on benchmark datasets, including ImageNet100, CIFAR100, CIFAR10, and CUB200. The experimental results indicate that, with only a marginal increase in parameters, our model achieves significant gains in classification performance on both new and previously established classes, surpassing existing state-of-the-art algorithms in the field.