Abstract

The real-world impact of polarization and toxicity in the online sphere marked the end of 2020 and the beginning of 2021 in a negative way. SemEval-2021 Task 5, Toxic Spans Detection, is based on a novel annotation of a subset of the Jigsaw Unintended Bias dataset and is the first toxic language detection task dedicated to identifying the spans that convey toxicity. For this task, participants had to automatically detect the character spans in short comments that render a message toxic. Our approach applies Virtual Adversarial Training in a semi-supervised setting during the fine-tuning of several Transformer-based models (i.e., BERT and RoBERTa), in combination with Conditional Random Fields. This leads to performance improvements and more robust models, enabling us to achieve an F1-score of 65.73% in the official submission and an F1-score of 66.13% after further tuning during post-evaluation.

Highlights

  • Nowadays, online engagement in social activities is at its highest level

  • We describe our participation in the aforementioned Toxic Spans Detection task using several Transformer-based models (Vaswani et al., 2017), including Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), with a Conditional Random Field (CRF) (Lafferty et al., 2001) layer on top to identify spans that include toxic language (a minimal sketch of this architecture follows the list)

  • We introduce Virtual Adversarial Training (VAT) (Miyato et al., 2015) in our training pipeline to increase the robustness of our models

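The snippet below is a minimal sketch of this architecture, not our released code: a pretrained Transformer encoder whose token representations feed a linear emission layer and a CRF that scores tag transitions. It assumes the transformers and pytorch-crf packages; the encoder name and the binary toxic/non-toxic tag scheme are illustrative choices.

    # Sketch only: Transformer encoder + CRF for toxic-span token tagging.
    # Assumes `pip install transformers pytorch-crf`; encoder name is illustrative.
    import torch.nn as nn
    from torchcrf import CRF
    from transformers import AutoModel

    class ToxicSpanTagger(nn.Module):
        def __init__(self, encoder_name="bert-base-cased", num_tags=2):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            self.emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
            self.crf = CRF(num_tags, batch_first=True)  # learns tag-transition scores

        def forward(self, input_ids, attention_mask, tags=None):
            hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            scores = self.emissions(hidden)             # per-token tag scores
            mask = attention_mask.bool()
            if tags is not None:
                # training: negative log-likelihood of the gold tag sequence
                return -self.crf(scores, tags, mask=mask, reduction="mean")
            return self.crf.decode(scores, mask=mask)   # inference: Viterbi tag paths

Predicted token tags can then be mapped back to character offsets through the tokenizer's offset mapping to produce the span-level predictions the task expects.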

Summary

Introduction

Online engagement in social activities is at its highest level, and the lockdowns during the 2020 COVID-19 pandemic further increased the overall time spent online. Online toxicity is present in a large part of social and news media platforms. SemEval-2021 Task 5, Toxic Spans Detection (Pavlopoulos et al., 2021), tackles the problem of identifying the exact portion of a document that gives it toxicity. We describe our participation in this task using several Transformer-based models (Vaswani et al., 2017), including BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), with a Conditional Random Field (CRF) (Lafferty et al., 2001) layer on top to identify spans that include toxic language. The next section reviews methods related to toxic language detection, sequence labeling, and adversarial training (Kurakin et al., 2016); a minimal sketch of the VAT regularizer we rely on is given after this paragraph. Results are presented in the fourth section, followed by discussions, conclusions, and an outline of possible future work.
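
The sketch below illustrates, under our assumptions, the core VAT regularizer (Miyato et al., 2015): a random perturbation on the input embeddings is refined by one power-iteration step into the direction that most changes the model's output, and the KL divergence between the clean and perturbed predictions is added to the loss. The logits_from_embeds callable is a hypothetical helper that runs the encoder from input embeddings (e.g., via the inputs_embeds argument in transformers); xi and epsilon are illustrative hyperparameters, not the values we tuned, and padding masks are omitted for brevity.

    # Sketch only: VAT smoothness penalty for a (token) classifier.
    import torch
    import torch.nn.functional as F

    def vat_loss(logits_from_embeds, embeds, clean_logits, xi=1e-6, epsilon=1.0):
        # Start from a random unit-norm perturbation direction.
        d = torch.randn_like(embeds)
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(embeds)
        d = d.detach().requires_grad_(True)

        # One power-iteration step: the gradient of the KL w.r.t. d
        # approximates the most sensitive perturbation direction.
        kl = F.kl_div(F.log_softmax(logits_from_embeds(embeds + d), dim=-1),
                      F.softmax(clean_logits.detach(), dim=-1),
                      reduction="batchmean")
        grad, = torch.autograd.grad(kl, d)
        r_adv = epsilon * F.normalize(grad.flatten(1), dim=1).view_as(embeds)

        # Penalize prediction changes under the adversarial perturbation.
        adv_logits = logits_from_embeds(embeds + r_adv.detach())
        return F.kl_div(F.log_softmax(adv_logits, dim=-1),
                        F.softmax(clean_logits.detach(), dim=-1),
                        reduction="batchmean")

Because this regularizer needs no labels, it can also be computed on unlabeled comments, which is what makes the semi-supervised setting possible.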

Method
Corpus
Virtual Adversarial Training
Implementation Details
Results
Discussions and Error Analysis
Conclusions and Future Work