Euro 2020 final highlights a significant moderation issue with online hate speech


Words by Trudy Feenane

The fallout from England’s defeat in the UEFA Euro 2020 final has included a torrent of racial abuse directed at the social media accounts of some of the team’s black footballers, and has reignited the debate over how these networking platforms regulate hate speech. 

A cursory glance through the Instagram profiles of Marcus Rashford, Jadon Sancho and Bukayo Saka on Sunday night, after the three missed their penalties, led many people to question how effective these tech companies’ policies really are. 

Spokespeople for Twitter and Facebook, which owns Instagram, said the companies acted swiftly to remove the abuse on the players’ profiles, with Twitter stating that the network proactively removed more than 1,000 tweets and permanently suspended a number of accounts. 

Facebook added: “We quickly removed comments and accounts directing abuse at England’s footballers last night and we’ll continue to take action against those that break our rules.”

However, many users took to Twitter to voice concerns over racist comments that were still visible on the players’ social media accounts hours after the game. In the case of Instagram, some racist comments were not removed after users reported them, or users were informed that the comments did not breach the company’s policies. 


This became particularly relevant with the use of emojis: floods of monkey and banana peel emojis were posted aggressively under the three players’ Instagram posts. 

Facebook’s public policies on the use of emojis have been vague to date, referring only to sexual solicitation. Its Hidden Words feature, launched in April, acknowledges that emojis can sometimes be offensive and gives users the option to filter them, along with other words and phrases, from DM requests.

Internal documents obtained by VICE in 2018 indicate how Facebook trained moderators in relation to emojis. A detailed list of emojis was provided to Facebook moderators explaining how different emojis may be used in an offensive manner. The monkey, ape and banana peel emojis are all listed under the hate speech category; however, moderators are advised to “use additional context to determine if the emoji is used in a violating manner”. 

Another internal document obtained by The Guardian, this time dating to December 2020, gives further insight into the level of detail given to moderators when it comes to making judgements on emojis. Emojis listed as ‘praise/support’ in the document – love hearts, thumbs up and heart eyes – are considered violations when they celebrate offensive content such as bullying. 

On most platforms this type of human moderation runs in conjunction with AI moderation, but the combined approach did not hold up in the hours after the match. 

This may be due to the reactive approach social media companies have taken to date, with moderators acting only after an offensive post or comment goes live rather than filtering it beforehand. And while emojis are harder for AI algorithms to contextualise and understand, the episode points to a broader moderation problem these tech companies face. 
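To make the proactive-versus-reactive distinction concrete, here is a minimal, purely illustrative sketch of pre-publication filtering. The blocklist, function name and logic are hypothetical assumptions for this example, not any platform’s actual system; real moderation must also weigh context, as the internal documents above make clear.

```python
# Illustrative sketch only: a proactive filter holds a comment for
# review BEFORE it goes live, instead of removing it after reports.
# The blocklist below is a hypothetical example, not a real policy.
BLOCKED_EMOJI = {
    "\U0001F412",  # monkey
    "\U0001F98D",  # gorilla
    "\U0001F34C",  # banana
}

def should_hold_for_review(comment: str) -> bool:
    """Return True if the comment contains a blocklisted emoji."""
    return any(ch in BLOCKED_EMOJI for ch in comment)

print(should_hold_for_review("Unlucky \U0001F412\U0001F34C"))  # True
print(should_hold_for_review("Well played tonight"))           # False
```

The point of the sketch is timing, not sophistication: a check like this runs before publication, whereas a reactive pipeline only acts once the abuse is already visible, and a context-aware system would go further by looking at who the comment targets.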

Some Facebook employees have expressed frustration at this reactive approach. BuzzFeed senior tech reporter Ryan Mac shared a thread of such internal reports, with one employee saying, “it seems this was totally preventable”. Another added: “We get this stream of utter bile every match, and it’s even worse when someone black misses…We really can’t be seen as complicit in this.”

According to Mac, Facebook opened an incident report to investigate the abuse directed at the players, allowing the company’s policy and security teams to gauge how to react. 

Facebook’s figures on policy enforcement for the last three months of 2020 show that 97 per cent of hate speech taken down by the platform was detected proactively, before a user could report it. The company reportedly removed 26.9 million pieces of hate speech from Facebook and 6.6 million from Instagram during the same period. 

Legislative framework 

This self-regulatory approach taken by tech companies to date has left them broadly unanswerable to any specific legislative framework that deals with harmful online content. Many countries are in the process of changing this.

In Ireland, the Online Safety and Media Regulation Bill will appoint an Online Safety Commissioner to impose strict online safety codes “to tackle the availability of harmful content”.

Social media companies will be governed by this code, and non-compliance with the Bill may result in financial sanctions or the blocking of access to their sites in Ireland. The code does not explicitly reference terms related to hate speech, such as racism or sexism, and instead refers more generally to “material which is likely to have the effect of intimidating, threatening, humiliating or persecuting a person”.

The European Commission and the UK have followed suit with similar legislation, with the UK government incorporating a ‘duty of care’ element into its approach. The idea is to put the responsibility back on the tech companies to protect their users from potential harm, instead of penalising the companies after the harm has occurred.

With the biggest football tournament, the World Cup, set to take place in 2022, such legislation will be essential in ensuring the intersection of football and hate speech does not find a space online. It is hoped the Online Safety and Media Regulation Bill will be enacted in Ireland by the end of the year.