Abstract
Deep neural networks, despite their remarkable success in various language understanding tasks, have been found vulnerable to adversarial attacks and subtle input perturbations, revealing a shortfall in robustness. To explore this issue, this paper presents Robustness-Eva-MRC, an interactive platform for assessing and analyzing the robustness of pre-trained and large-scale language models on extractive machine reading comprehension (MRC) tasks. The platform integrates eight adversarial attack methods at the character, word, and sentence levels and applies them to five MRC datasets, thereby constructing challenging adversarial test sets. It then evaluates MRC models on both the original and adversarial sets, and the resulting performance gaps yield insights into their robustness. Moreover, Robustness-Eva-MRC provides comprehensive visualizations and detailed case studies that deepen the understanding of model robustness. A screencast video and additional material are available at https://github.com/distantJing/Robustness-Eva-MRC.
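As a rough illustration of the evaluation protocol the abstract describes, the sketch below computes a robustness gap as the drop in exact-match accuracy between the original and the adversarially perturbed test set. This is a minimal sketch, not the platform's actual interface: the `model.predict` method, the dataset format, and all function names are hypothetical.

```python
# Illustrative sketch of the original-vs-adversarial evaluation protocol.
# All names (model.predict, dataset layout) are assumptions, not the
# Robustness-Eva-MRC API.

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the predicted span matches the gold answer exactly, else 0.0."""
    return float(prediction.strip().lower() == gold.strip().lower())

def evaluate(model, dataset) -> float:
    """Average exact-match score over (context, question, answer) triples."""
    scores = [
        exact_match(model.predict(context, question), answer)
        for context, question, answer in dataset
    ]
    return sum(scores) / len(scores)

def robustness_gap(model, original_set, adversarial_set) -> float:
    """Performance drop on the perturbed set; a larger gap means less robust."""
    return evaluate(model, original_set) - evaluate(model, adversarial_set)
```

In practice the platform reports such gaps per attack method and per dataset, so a model can be compared across character-, word-, and sentence-level perturbations.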