Abstract

Catastrophic forgetting in neural networks during incremental learning remains a challenging problem. Previous research investigated catastrophic forgetting in fully connected networks, with some earlier work exploring activation functions and learning algorithms. Applications of neural networks have since been extended to include similarity and metric learning. It is therefore of significant interest to understand how metric learning loss functions are affected by catastrophic forgetting. Our research investigates catastrophic forgetting for four well-known metric-based loss functions during incremental class learning: angular, contrastive, center, and triplet loss. Our results show that the rate of forgetting differs across loss functions and across multiple datasets. Triplet loss was the least affected, followed by contrastive, center, and angular loss. Center and angular loss produce better embeddings on difficult tasks when trained on all available training data; however, they are the least robust to forgetting during incremental class learning. We argue that triplet loss provides the ideal middle ground for future improvements.
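
For reference, the two losses found most robust here are commonly stated as follows; this is a sketch of their standard textbook formulations, where the embedding function f, margin symbols \alpha and m, and the squared Euclidean distance convention are assumptions that may differ from the exact definitions used in the paper.

% Triplet loss for an anchor a, positive p, and negative n, with margin \alpha:
\[
\mathcal{L}_{\mathrm{triplet}} = \max\!\left(0,\ \lVert f(a) - f(p) \rVert_2^{2} - \lVert f(a) - f(n) \rVert_2^{2} + \alpha\right)
\]

% Contrastive loss for a pair (x_1, x_2) with y = 1 for similar pairs,
% y = 0 for dissimilar pairs, and margin m:
\[
\mathcal{L}_{\mathrm{contrastive}} = y\,\lVert f(x_1) - f(x_2) \rVert_2^{2} + (1-y)\,\max\!\left(0,\ m - \lVert f(x_1) - f(x_2) \rVert_2\right)^{2}
\]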
