Abstract

State-of-the-art deep learning models for food recognition do not support data incremental learning and often suffer from catastrophic interference during class incremental learning. This is an important issue in food recognition because real-world food datasets are open-ended and dynamic: food samples and food classes increase continuously. Model retraining is often carried out to cope with the dynamic nature of the data, but it demands high-end computational resources and significant time. This paper proposes a new open-ended continual learning framework that employs transfer learning on deep models for feature extraction, ReliefF for feature selection, and a novel adaptive reduced class incremental kernel extreme learning machine (ARCIKELM) for classification. Transfer learning is beneficial because deep learning features generalize well. ReliefF reduces computational complexity by ranking and selecting the extracted features. The novel ARCIKELM classifier dynamically adjusts its network architecture to reduce catastrophic forgetting, and it addresses the domain adaptation problem that arises when new samples of existing classes arrive. To conduct comprehensive experiments, we evaluated the model on four standard food benchmarks and a recently collected Pakistani food dataset. Experimental results show that the proposed framework learns new classes incrementally with less catastrophic forgetting and adapts to domain changes while maintaining competitive classification performance.
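To make the classification stage concrete, the following is a minimal sketch of a batch kernel extreme learning machine, the family of classifier that ARCIKELM extends. This is an illustrative baseline only, not the paper's ARCIKELM (it has no adaptive reduction or incremental updates); the RBF kernel choice, the regularization parameter C, and the class names are assumptions for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel between rows of A and rows of B.
    d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d)

class KELM:
    """Batch kernel extreme learning machine (illustrative baseline,
    not the paper's ARCIKELM)."""

    def __init__(self, C=100.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        n = X.shape[0]
        self.classes = np.unique(y)
        # One-hot target matrix T, one column per class.
        T = (y[:, None] == self.classes[None, :]).astype(float)
        # Output weights: beta = (K + I/C)^-1 T
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(n) / self.C, T)
        return self

    def predict(self, Xq):
        # Score each query against all classes, pick the argmax.
        scores = rbf_kernel(Xq, self.X, self.gamma) @ self.beta
        return self.classes[np.argmax(scores, axis=1)]
```

Retraining this batch formulation from scratch whenever new samples or classes arrive is exactly the costly step the proposed incremental framework is designed to avoid.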

Highlights

  • In open-ended continual learning, new images of existing classes arrive continuously and novel classes keep appearing

  • This matters for many real-world recognition problems, including food recognition, since the dataset is dynamic and new concepts of interest emerge over time

  • Existing deep learning models for food recognition assume that all food classes, and the variations within them, are available at the outset


Introduction

In open-ended continual learning, new images of existing classes arrive continuously, and novel classes keep appearing. Two types of incremental learning are therefore essential components: 1) data incremental learning and 2) class incremental learning. Data incremental learning uses newly available images to improve recognition performance on existing classes and to adapt to domain changes, while class incremental learning extends the model with new classes. Mirroring these two aspects of learning in humans, both types of incremental learning are necessary for acquiring new concepts and improving the classification performance of existing classes. This underlines the importance of open-ended continual learning in many real-world recognition problems, including food recognition, where the dataset is dynamic and new concepts of interest emerge over time.
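The distinction between the two update modes can be sketched with a deliberately simple incremental learner. The nearest-class-mean model below is a hypothetical stand-in chosen only because it supports both modes in a few lines; it is not the paper's ARCIKELM. A batch whose labels are already known refines the existing class statistics (data incremental), while a batch containing an unseen label expands the model (class incremental).

```python
import numpy as np

class NearestClassMean:
    """Toy incremental learner (illustrative stand-in, not ARCIKELM):
    keeps a running mean per class, so it supports both update modes."""

    def __init__(self):
        self.means, self.counts = {}, {}

    def partial_fit(self, X, y):
        for xi, yi in zip(X, y):
            if yi not in self.means:
                # Class incremental: a novel class appears -> expand the model.
                self.means[yi] = xi.astype(float)
                self.counts[yi] = 1
            else:
                # Data incremental: refine the existing class with a running mean.
                self.counts[yi] += 1
                self.means[yi] += (xi - self.means[yi]) / self.counts[yi]

    def predict(self, X):
        labels = list(self.means)
        M = np.stack([self.means[l] for l in labels])
        d = ((X[:, None, :] - M[None, :, :]) ** 2).sum(-1)
        return np.array(labels)[d.argmin(1)]
```

The point of the sketch is the control flow, not the classifier: an open-ended stream interleaves both branch types, so a practical model must handle each without retraining from scratch.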
