Abstract

With further research into neural networks, their scope of application has become increasingly broad. In particular, many neural network models are now used in text classification tasks and have achieved excellent results. However, adversarial examples critically affect the stability and robustness of neural network models, and this issue limits their further adoption, especially in security-sensitive tasks. For the text classification task, we propose the DAT-LP (Defence with Adversarial Training Based on Local Perturbation) algorithm to address the adversarial example issue; it uses local perturbation, built on adversarial training, to enhance model performance. Furthermore, the SW-CStart (Cold-start Algorithm Based on Sliding Window) algorithm is designed to realise adversarial training in the model's initialisation stage. DAT-LP is evaluated against three baselines: the base models (BiLSTM, TextCNN), Dropout (a regularisation method), and ADT (an adversarial training method). The results show that DAT-LP performs best and demonstrates the strongest generalisation ability.
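The abstract does not specify DAT-LP's exact update rule, but the core idea it names, perturbing only a local region of the input embeddings rather than the whole sequence, can be sketched as follows. All names (`local_adversarial_perturbation`, the window bounds, `epsilon`) are illustrative assumptions, not the paper's API; the gradient step shown is a generic FGSM-style normalised step, not necessarily the one DAT-LP uses.

```python
import numpy as np

def local_adversarial_perturbation(embeddings, grad, window, epsilon=0.1):
    """Illustrative sketch: perturb only the token positions inside
    `window` (start, end) in the direction of the loss gradient,
    scaled so the perturbation lies on an epsilon-norm ball.
    This is a generic local-perturbation step, not the paper's
    exact DAT-LP update."""
    start, end = window
    perturbed = embeddings.copy()
    g = grad[start:end]
    norm = np.linalg.norm(g)
    if norm > 0:
        # FGSM-style normalised step, applied only inside the window
        perturbed[start:end] += epsilon * g / norm
    return perturbed

# Toy example: 5 tokens with 4-dimensional embeddings; only
# positions 1 and 2 (the "local" window) are perturbed.
emb = np.zeros((5, 4))
grad = np.ones((5, 4))
adv = local_adversarial_perturbation(emb, grad, window=(1, 3), epsilon=0.1)
```

In a full training loop, the perturbed embeddings would be fed back through the model and the loss on the adversarial input added to the clean-input loss, which is the standard adversarial-training recipe the abstract builds on.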
