Abstract

Deep neural networks with attention mechanisms have driven recent success in reading comprehension (RC). However, most current RC models perform poorly on adversarial examples: their effectiveness drops drastically when distracting sentences are inserted into the context. Motivated by Robust Adversarial Reinforcement Learning (RARL) from the reinforcement learning literature, we propose a Robust Adversarial Reinforcement Framework (RARF) to address this problem. In this framework, an RC model is trained in the presence of extra disturbances applied by an adversary agent. Through joint training, the adversary agent learns an effective destabilization policy while the robustness of the RC model improves. In our experiments, we integrate the framework with a classical model (BiDAF) and a state-of-the-art model (BERT). The results show that the framework improves the evaluation metrics by about 15–36 points on four adversarial evaluations (AddSent, AddOneSent, AddAny and AddCommon).
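The abstract does not specify the training procedure in detail; the following Python sketch only illustrates the general shape of such joint adversarial training, under our own simplifying assumptions. Every name here (DISTRACTORS, rc_model_loss, train_rc_step, adversary_policy, joint_training) is hypothetical, the RC model is a stub, and a crude bandit-style weight update stands in for the policy-gradient method an actual RARL-style adversary would use.

```python
import random

# Hypothetical pool of candidate distractor sentences the adversary may insert.
DISTRACTORS = [
    "The famous river was renamed in 1923.",
    "The first telescope was built in Antarctica.",
    "The capital of the region moved twice last century.",
]

def rc_model_loss(context, question, answer, params):
    """Stub for the RC model's loss on one (context, question, answer)
    triple; a real implementation would run BiDAF or BERT here."""
    # Toy proxy: loss grows with context length (stands in for a real loss).
    return len(context.split()) * params["scale"]

def train_rc_step(context, question, answer, params):
    """Stub gradient step for the RC model (placeholder update)."""
    params["scale"] *= 0.999  # pretend the model improves slightly
    return rc_model_loss(context, question, answer, params)

def adversary_policy(weights):
    """Sample a distractor index with probability proportional to its
    weight (a simple categorical policy, for illustration only)."""
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def joint_training(dataset, epochs=3, lr=0.1):
    rc_params = {"scale": 1.0}
    adv_weights = [1.0] * len(DISTRACTORS)  # adversary's policy parameters
    for _ in range(epochs):
        for context, question, answer in dataset:
            # Adversary perturbs the context by inserting a distractor.
            a = adversary_policy(adv_weights)
            perturbed = context + " " + DISTRACTORS[a]
            # RC model trains on the perturbed example.
            loss = train_rc_step(perturbed, question, answer, rc_params)
            # Bandit-style credit assignment: the adversary is rewarded by
            # the RC model's loss, so it favors maximally disruptive inserts.
            adv_weights[a] += lr * loss
    return rc_params, adv_weights

if __name__ == "__main__":
    toy_data = [("The river flows north through the valley.",
                 "Which way does the river flow?", "north")]
    rc_params, adv_weights = joint_training(toy_data)
    print("adversary weights after training:", adv_weights)
```

In a real instantiation, rc_model_loss and train_rc_step would wrap BiDAF or BERT, and the adversary would typically generate or select distractors conditioned on the question rather than drawing from a fixed pool.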
