Abstract

Continually capturing novel concepts without forgetting is one of the most critical capabilities sought in artificial intelligence systems. However, even the most advanced deep learning networks are prone to quickly forgetting previously learned knowledge after training on new data. The proposed lifelong dual generative adversarial networks (LD-GANs) consist of two generative adversarial networks (GANs), a Teacher and an Assistant, that teach each other in tandem while successively learning a series of tasks. A single discriminator decides the realism of the images generated by the dual GANs. A new training algorithm, called lifelong self-knowledge distillation (LSKD), is proposed for training the LD-GAN while learning each new task during lifelong learning (LLL). LSKD transfers knowledge from the more knowledgeable player to the other while jointly learning the information from a newly given dataset, within an adversarial game setting. In contrast to other LLL models, LD-GANs are memory efficient and do not require freezing any parameters after learning each task. Furthermore, we extend LD-GANs to serve as the Teacher module in a Teacher-Student network for assimilating data representations across several domains during LLL. Experimental results indicate better performance for the proposed framework in unsupervised lifelong representation learning compared to other methods.
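
The abstract does not include code, but the described setup (two generators, one shared discriminator, and a distillation term between the two players) can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' exact formulation: the network sizes, the choice of an L2 distillation loss, and the weighting between the adversarial and distillation terms are all hypothetical.

```python
# Minimal sketch of one LSKD-style update step (assumptions throughout:
# tiny MLP generators/discriminator, L2 distillation, BCE adversarial loss).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # hypothetical sizes

def make_generator():
    return nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                         nn.Linear(256, image_dim), nn.Tanh())

teacher = make_generator()    # the more knowledgeable player
assistant = make_generator()  # absorbs the current task
# A single shared discriminator judges samples from both generators.
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(teacher.parameters()) +
                         list(assistant.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def lskd_step(real_batch, distill_weight=1.0):
    """One adversarial update with a self-distillation term that pulls
    the two generators' outputs toward each other (assumed L2 form)."""
    n = real_batch.size(0)
    z = torch.randn(n, latent_dim)
    fake_t, fake_a = teacher(z), assistant(z)

    # Discriminator update: real data vs. samples from both generators.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real_batch), torch.ones(n, 1)) +
              bce(discriminator(fake_t.detach()), torch.zeros(n, 1)) +
              bce(discriminator(fake_a.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: fool the shared discriminator, plus the
    # distillation term aligning the Teacher and Assistant outputs.
    opt_g.zero_grad()
    g_adv = (bce(discriminator(fake_t), torch.ones(n, 1)) +
             bce(discriminator(fake_a), torch.ones(n, 1)))
    g_distill = ((fake_t - fake_a) ** 2).mean()
    (g_adv + distill_weight * g_distill).backward()
    opt_g.step()

# Example: one update on a dummy batch for the current task.
lskd_step(torch.randn(32, image_dim))
```

In this reading, the shared discriminator is what lets the two generators play against a common adversary, while the distillation term carries knowledge between them without storing past data or freezing parameters.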
