Abstract

Technical approaches for surfacing, reviewing, and removing terrorist and violent extremist content online have evolved in recent years. Larger companies have added metrics and insights to their transparency reports, disclosing that most removals of terrorist and violent extremist content are proactive, relying on hybrid models that combine automated tooling with human oversight. However, less is known about the algorithmic tools or hybrid models deployed by tech platforms to ensure greater accuracy in surfacing terrorist threats. This paper reviews existing tools deployed by platforms to counter terrorism and violent extremism online, including the ethical concerns and oversight needed for algorithmic deployment, before analysing initial results from a Global Internet Forum to Counter Terrorism (GIFCT) technical trial. The trial discussed in this paper presents the results of testing a methodology that uses behavioural and linguistic signals to more accurately and proactively surface terrorist and violent extremist content relating to potential real-world attacks. As governments, tech companies, and networks like GIFCT develop crisis and incident response protocols, the ability to quickly identify perpetrator content associated with attacks is crucial, whether that content is the live streaming of an attack or an attacker manifesto released in parallel with the real-world violence. Building on previous academic research, the trial shows that while the deployment of layered signals holds promise for proactive detection and the reduction of false positives, it also highlights the complexities of user speech, online behaviours, and cultural nuances.
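The abstract does not specify the trial's scoring model, so the sketch below is only a rough illustration of how layered behavioural and linguistic signals might feed a hybrid review pipeline of the kind described: two signal scores are combined, and borderline content is routed to human review rather than being auto-actioned. All names, weights, and thresholds are hypothetical assumptions, not the trial's actual methodology.

```python
# Hypothetical sketch only: the paper does not disclose its scoring model.
# All names, weights, and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ContentSignals:
    """Toy feature bundle for one piece of user content."""
    linguistic_score: float   # e.g. classifier probability that text matches violent-extremist language
    behavioural_score: float  # e.g. anomaly score from posting patterns (burst posting, new account, etc.)

def layered_risk_score(signals: ContentSignals,
                       w_linguistic: float = 0.6,
                       w_behavioural: float = 0.4) -> float:
    """Combine independent signal layers into a single risk score in [0, 1]."""
    return (w_linguistic * signals.linguistic_score
            + w_behavioural * signals.behavioural_score)

def triage(signals: ContentSignals,
           review_threshold: float = 0.5,
           escalate_threshold: float = 0.85) -> str:
    """Route content: most items pass, mid scores go to human review, high scores escalate."""
    score = layered_risk_score(signals)
    if score >= escalate_threshold:
        return "escalate"        # e.g. priority queue under a crisis/incident response protocol
    if score >= review_threshold:
        return "human_review"    # hybrid model: a moderator makes the final call
    return "no_action"

# A post with strong linguistic signals but ordinary behaviour lands in human review
# rather than being auto-removed, which is one way layering can reduce false positives.
print(triage(ContentSignals(linguistic_score=0.8, behavioural_score=0.1)))  # human_review
```

In this illustrative setup, requiring agreement across independent signal layers before automated action is what drives down false positives, at the cost of routing more ambiguous content to human reviewers.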
