Abstract

Backdoor attacks are an insidious security threat to deep neural networks (DNNs): they inject triggers into a model so that a malicious attacker can create a link between a customized trigger and a target label, causing the poisoned model's prediction to be manipulated whenever the input contains the predetermined trigger. However, most existing backdoor attacks define a conspicuous trigger (e.g., a visible pigment block) and must modify the labels of the poisoned images, making those images appear mislabeled and therefore unable to pass human inspection. In addition, designing the trigger typically requires information about the entire training dataset, an extremely stringent experimental setting. These constraints severely restrict the practicality of backdoor attacks in the real world. In this paper, the proposed algorithm removes these restrictions of existing backdoor attacks. Our label-specific backdoor attack designs a unique trigger for each label while accessing only the images of the target label. A victim model trained on our poisoned training dataset maliciously outputs attacker-chosen predictions whenever the trigger activates the backdoor, yet still maintains good performance on benign samples. Our proposed backdoor attack is therefore considerably more practical.

Keywords: Deep neural network; Backdoor attack; Label-specific trigger
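The abstract does not specify how the label-specific trigger is constructed, so the following is only a minimal sketch of the poisoning setting it describes: a trigger derived from the target label is stamped onto a fraction of that class's images, with labels left unchanged so the poisoned samples still look correctly labeled to a human inspector. The names (`make_label_specific_trigger`, `poison_target_class`, `poison_rate`) and the seeded random patch are illustrative assumptions, not the paper's actual trigger design.

```python
import numpy as np

def make_label_specific_trigger(label: int, size: int = 4, channels: int = 3,
                                seed: int = 0) -> np.ndarray:
    """Derive a small trigger patch that is unique to one label.

    Hypothetical stand-in: a patch seeded by the label index, used here
    purely to illustrate "one trigger per label".
    """
    rng = np.random.default_rng(seed + label)
    return rng.uniform(0.0, 1.0, size=(size, size, channels)).astype(np.float32)

def poison_target_class(images: np.ndarray, labels: np.ndarray,
                        target_label: int, poison_rate: float = 0.1,
                        seed: int = 0) -> np.ndarray:
    """Stamp the target label's trigger onto a fraction of that class's
    images, leaving every label unchanged (no mislabeled samples).

    images: float array in [0, 1] with shape (N, H, W, C).
    """
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    trigger = make_label_specific_trigger(target_label, seed=seed)
    ts = trigger.shape[0]

    # Only images of the target label are touched, matching the
    # "access only the target class" threat model in the abstract.
    target_idx = np.flatnonzero(labels == target_label)
    if target_idx.size == 0:
        return poisoned
    chosen = rng.choice(target_idx,
                        size=max(1, int(poison_rate * target_idx.size)),
                        replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    poisoned[chosen, -ts:, -ts:, :] = trigger
    return poisoned

# Example: poison 10% of class-3 images in a CIFAR-shaped array.
x = np.random.rand(100, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned = poison_target_class(x, y, target_label=3)
```

At test time, stamping the same patch onto any input would steer a successfully backdoored model toward the target label, while trigger-free inputs are classified normally; the actual trigger construction and training procedure are those of the paper, not this sketch.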
