Abstract

Few-shot learning aims to train a model that can recognize novel classes from extremely limited training examples. Meta-learning approaches improve performance on such novel tasks by leveraging previously acquired knowledge as a prior. However, most existing few-shot learning methods involve parameter transfer, which usually requires sharing models trained on task-specific examples and thus poses a potential threat to the privacy of data owners. To tackle this, we design a novel secure collaborative few-shot learning framework. Specifically, we incorporate differential privacy into few-shot learning by adding calibrated Gaussian noise to the optimization process, preventing sensitive information in the training set from being leaked. To prevent potential privacy disclosure to other participants and to the central server, homomorphic encryption is integrated into the computation of the global loss function and the interaction with the central server. Furthermore, we implement our framework on classical few-shot learning methods such as MAML and Reptile, and evaluate it extensively on the Omniglot, Mini-ImageNet, and Fewshot-CIFAR100 datasets. The experimental results demonstrate the effectiveness of our framework in terms of both utility and privacy.
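
The abstract names two mechanisms: calibrated Gaussian noise on the optimization signal (differential privacy) and homomorphic encryption for aggregation at the central server. The sketch below is a minimal illustration of how those two pieces could fit together, assuming a DP-SGD-style clip-and-noise step on the per-client meta-gradient and the additively homomorphic Paillier scheme from the open-source `phe` library; the function names, hyperparameters, and choice of cryptosystem are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: a DP-SGD-style Gaussian mechanism plus Paillier
# aggregation. All names and hyperparameters here are assumptions, not the
# paper's actual design.
import numpy as np
from phe import paillier  # pip install phe


def privatize_meta_gradient(grad, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip a per-task meta-gradient to L2 norm `clip_norm`, then add
    Gaussian noise with std sigma * clip_norm (the Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad + rng.normal(0.0, sigma * clip_norm, size=grad.shape)


# --- Secure aggregation of per-client losses with Paillier ---
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each participant encrypts its (already noised) local loss before upload.
local_losses = [0.83, 1.27, 0.64]  # toy values
ciphertexts = [public_key.encrypt(x) for x in local_losses]

# The server sums ciphertexts without ever seeing a plaintext loss:
# Paillier is additively homomorphic, so Enc(a) + Enc(b) = Enc(a + b).
encrypted_global_loss = sum(ciphertexts[1:], ciphertexts[0])

# Only the key holder can recover the aggregate global loss.
global_loss = private_key.decrypt(encrypted_global_loss)
print(global_loss)  # ~2.74
```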
