Abstract

Few-shot classification is classification performed from only a handful of labeled samples, and meta-learning methods (also called “learning to learn”) are often employed to accomplish it. Poisoning attacks against meta-learning-based few-shot classifiers have only recently begun to be investigated. While poisoning attacks aimed at disrupting the availability of the classifier during meta-testing have been studied by Xu et al. [1] and Oldewage et al. [2], backdoor poisoning at meta-testing time has only been briefly explored by Oldewage et al. [2] under limited conditions. In this study, we formulate a backdoor poisoning attack on meta-learning-based few-shot classification. Through experiments, we show that the proposed attack is effective against few-shot classification with model-agnostic meta-learning (MAML) [3].
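
As a rough, hypothetical illustration of the attack setting (not the authors' implementation), a backdoor poisoner might stamp a small trigger patch onto a fraction of the meta-testing support images and relabel them with an attacker-chosen target class, so that a MAML-adapted classifier learns to associate the trigger with that class. The function name, patch placement, and parameters below are assumptions made for the sketch:

```python
import numpy as np

def poison_support_set(images, labels, target_class, poison_frac=0.3,
                       trigger_size=3, trigger_value=1.0):
    """Sketch: stamp a trigger patch onto a fraction of support images
    and relabel them as the attacker's target class.

    images: array of shape (N, H, W, C); labels: int array of shape (N,).
    All names and defaults here are hypothetical, not from the paper.
    """
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    # Choose which support examples to poison.
    idx = np.random.choice(n, size=max(1, int(poison_frac * n)), replace=False)
    for i in idx:
        # Place a solid trigger patch in the bottom-right corner.
        images[i, -trigger_size:, -trigger_size:, :] = trigger_value
        # Mislabel the poisoned example as the target class.
        labels[i] = target_class
    return images, labels
```

At meta-test time, the victim adapts on the poisoned support set; the attack succeeds if clean inputs bearing the same trigger are then classified as the target class while accuracy on trigger-free inputs stays largely intact.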
