Abstract

The threat of data-poisoning backdoor attacks on learning algorithms typically comes from the labeled data. In deep semi-supervised learning (SSL), however, unknown threats mainly stem from the unlabeled data. In this paper, we propose a novel deep hidden backdoor (DeHiB) attack scheme for SSL-based systems. In contrast to conventional attack methods, DeHiB injects malicious unlabeled training data into the semi-supervised learner so that the trained SSL model outputs attacker-specified results. In particular, a robust adversarial perturbation generator regularized by a unified objective function is proposed to generate the poisoned data. To alleviate the negative impact of the trigger patterns on model accuracy and to improve the attack success rate, a novel contrastive data poisoning strategy is designed. Using the proposed data poisoning scheme, one can implant the backdoor into the SSL model using raw data without hand-crafted labels. Extensive experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate the effectiveness and stealthiness of the proposed scheme.
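The abstract only outlines the mechanism, but the core idea, poisoning unlabeled data so that an SSL learner's pseudo-labeling assigns the attacker's target label to trigger-bearing inputs, can be illustrated. The following is a minimal PyTorch sketch, assuming a PGD-style targeted perturbation as a stand-in for the paper's "robust adversarial perturbation generator" and "unified objective function" (neither is specified here); the function names, trigger/mask representation, and hyper-parameters are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def apply_trigger(x, trigger, mask):
    # Stamp a small trigger pattern onto a batch of images in [0, 1];
    # `mask` is 1 where the trigger pixels go, 0 elsewhere.
    return x * (1 - mask) + trigger * mask

def poison_unlabeled(model, x, target_class, trigger, mask,
                     eps=8 / 255, alpha=2 / 255, steps=40):
    # Craft poisoned unlabeled samples: trigger pattern + targeted
    # adversarial perturbation. The perturbation drives the victim
    # model's prediction toward `target_class`, so an SSL learner that
    # pseudo-labels confident unlabeled data will associate the trigger
    # with the attacker's label -- no hand-crafted labels required.
    model.eval()
    x_trig = apply_trigger(x, trigger, mask)
    delta = torch.zeros_like(x_trig, requires_grad=True)
    y_tgt = torch.full((x.size(0),), target_class,
                       dtype=torch.long, device=x.device)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_trig + delta), y_tgt)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # targeted: descend the loss
            delta.clamp_(-eps, eps)             # keep perturbation imperceptible
        delta.grad = None
    return (x_trig + delta).clamp(0, 1).detach()
```

The poisoned batch returned here would simply be mixed into the victim's unlabeled training pool; the paper's contrastive poisoning strategy, which this sketch does not attempt to reproduce, further reduces the trigger's impact on clean accuracy.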
