Abstract
Deep learning-based methods have reached state-of-the-art performance, relying on large quantities of available data and computational power. Such methods still remain ill-suited to a major open machine learning problem: learning new classes and examples incrementally over time. Combining the outstanding performance of Deep Neural Networks (DNNs) with the flexibility of incremental learning techniques is a promising avenue of research. In this contribution, we introduce Transfer Incremental Learning using Data Augmentation (TILDA). TILDA is based on pre-trained DNNs as feature extractors, robust selection of feature vectors in subspaces using a nearest-class-mean-based technique, majority votes, and data augmentation at both the training and the prediction stages. Experiments on challenging vision datasets demonstrate the ability of the proposed method to perform low-complexity incremental learning while achieving significantly better accuracy than existing incremental counterparts.
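The nearest-class-mean component described above lends itself to incremental updates: adding a training example only adjusts one class mean, so no retraining over past data is required. The following is a minimal, hypothetical sketch (not the authors' implementation) of such a classifier operating on pre-extracted feature vectors; the class name and methods are illustrative assumptions.

```python
import numpy as np

class NCMClassifier:
    """Hypothetical sketch of a nearest-class-mean classifier over
    feature vectors produced by a pre-trained DNN. Each class keeps a
    running sum and count, so means update incrementally per example."""

    def __init__(self, dim):
        self.dim = dim
        self.sums = {}    # per-class running sum of feature vectors
        self.counts = {}  # per-class number of examples seen

    def partial_fit(self, x, label):
        """Incrementally incorporate one feature vector for a class."""
        if label not in self.sums:
            self.sums[label] = np.zeros(self.dim)
            self.counts[label] = 0
        self.sums[label] += x
        self.counts[label] += 1

    def predict(self, x):
        """Return the class whose mean is closest (Euclidean) to x."""
        return min(
            self.sums,
            key=lambda c: np.linalg.norm(x - self.sums[c] / self.counts[c]),
        )
```

In the full method, this decision would additionally be combined with subspace division and majority votes over augmented versions of the input, which are omitted here for brevity.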
Highlights
Humans have the ability to incrementally learn new pieces of information through time, building over previously acquired knowledge
We introduce Transfer Incremental Learning using Data Augmentation (TILDA) that builds upon previously proposed work, attempting to cover all three criteria for efficient incremental learning
We show the contribution of data augmentation, Nearest Class Mean (NCM)-inspired classification, and subspace division to classification accuracy
Summary
Humans have the ability to incrementally learn new pieces of information through time, building over previously acquired knowledge. Learning novel data using the same set of parameters inevitably leads to the loss of the previously acquired knowledge. This is why many techniques have proposed learning distinct deep learning systems over the course of time, letting another algorithm decide which one to use at the prediction stage [7,8]. Such methods can quickly result in very complex systems that are likely to fail in adversarial conditions [9].