Abstract

Deep neural networks, despite their remarkable success in various language understanding tasks, have been found vulnerable to adversarial attacks and subtle input perturbations, revealing a shortfall in robustness. To explore this gap, this paper presents Robustness-Eva-MRC, an interactive platform for assessing and analyzing the robustness of pre-trained and large-scale language models on extractive machine reading comprehension (MRC) tasks. The platform integrates eight adversarial attack methods at the character, word, and sentence levels and applies them to five MRC datasets, constructing challenging adversarial test sets. It then evaluates MRC models on both the original and adversarial sets, yielding insights into their robustness through the resulting performance gaps. Moreover, Robustness-Eva-MRC provides comprehensive visualizations and detailed case studies that deepen the understanding of model robustness. A screencast video and additional material are available at https://github.com/distantJing/Robustness-Eva-MRC.
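To make the evaluation protocol concrete, the following is a minimal sketch of the workflow the abstract describes: perturb inputs with an attack, score a model on both the original and adversarial sets, and report the performance gap. It is not the platform's actual API; the function names (char_swap_attack, robustness_gap), the exact-match metric, and the toy reader are illustrative assumptions.

```python
import random

def char_swap_attack(text: str, rate: float = 0.2, seed: int = 0) -> str:
    """Character-level perturbation: swap two adjacent interior characters
    in a random subset of words (a typo-style edit)."""
    rng = random.Random(seed)
    words = text.split()
    for i, w in enumerate(words):
        if len(w) > 3 and rng.random() < rate:
            j = rng.randrange(1, len(w) - 2)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def exact_match(pred: str, gold: str) -> float:
    """Simple exact-match score between a predicted and gold answer span."""
    return float(pred.strip().lower() == gold.strip().lower())

def robustness_gap(model, dataset, attack) -> float:
    """Score the model on original questions and on adversarially perturbed
    copies; the drop in exact match is the robustness gap."""
    orig = sum(exact_match(model(ex["context"], ex["question"]), ex["answer"])
               for ex in dataset) / len(dataset)
    adv = sum(exact_match(model(ex["context"], attack(ex["question"])), ex["answer"])
              for ex in dataset) / len(dataset)
    return orig - adv  # larger gap = less robust

if __name__ == "__main__":
    # Toy usage: `model` stands in for any extractive MRC system.
    data = [{"context": "Paris is the capital of France.",
             "question": "What is the capital of France?",
             "answer": "Paris"}]
    model = lambda ctx, q: "Paris"  # placeholder reader
    print(f"robustness gap (EM): {robustness_gap(model, data, char_swap_attack):.2f}")
```

In the platform itself, the single character-swap attack in this sketch would be one of the eight character-, word-, and sentence-level attack methods, and the toy metric would be replaced by the standard MRC metrics computed over the five datasets.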
