Words by Trudy Feenane
The fallout from England’s defeat in the UEFA Euro 2020 final has included a torrent of racial abuse directed at the social media accounts of some of the team’s black footballers, and has reignited the debate over how these networking platforms regulate hate speech.
A cursory glance through the Instagram profiles of Marcus Rashford, Jadon Sancho and Bukayo Saka on Sunday night, after they missed their penalties, led many people to question how effective these tech companies’ policies really are.
Spokespeople for Twitter and Facebook, which owns Instagram, said the companies acted swiftly to remove the abuse on the players’ profiles, with Twitter stating that the network proactively removed more than 1,000 tweets and permanently suspended a number of accounts.
Facebook added: “We quickly removed comments and accounts directing abuse at England’s footballers last night and we’ll continue to take action against those that break our rules.”
However, many users took to Twitter to voice concerns over racist comments that were still visible on the players’ social media accounts hours after the game. On Instagram, some racist comments were not removed after users reported them, or the users who reported them were told that the comments did not breach the company’s policies.
The state of @instagram. I reported a comment on Saka’s insta calling him the N word followed by monkey emoji’s (along many others I found) and this is the response to them all! No wonder so many do it!? @kickitout @SkySportsNews @England pic.twitter.com/uOtJSrhBfP
— Christopher West (@ChrisWest1610) July 12, 2021
Emojis
This became particularly apparent with emojis: floods of monkey and banana peel emojis appeared in the comments under the three players’ Instagram posts.
Facebook’s public policies on emojis have been vague to date, with references made only to sexual solicitation. Its Hidden Words feature, launched in April, acknowledges that emojis can be offensive, giving users the option to filter them, along with other words and phrases, out of DM requests.
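The logic behind such a filter is straightforward. As a rough illustration, the short Python sketch below mimics a user-chosen blocklist being applied to incoming DM requests; the term list, function name and behaviour are invented for this example and are not Instagram’s actual implementation.

# A user-chosen set of words and emojis to hide (monkey and banana emojis here).
hidden_terms = {"\U0001F412", "\U0001F34C", "abusive-phrase"}

def filter_dm_requests(requests):
    """Keep only the DM requests that contain none of the hidden terms."""
    return [msg for msg in requests if not any(term in msg for term in hidden_terms)]

# A request containing a hidden emoji never reaches the inbox.
print(filter_dm_requests(["great game!", "\U0001F412\U0001F34C"]))  # ['great game!']

The point of the feature is that the filtering happens on the recipient’s side: the message still exists, but the targeted user never has to see it.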
Internal documents obtained by VICE in 2018 indicate how Facebook trained moderators to handle emojis. Moderators were given a detailed list explaining how different emojis may be used in an offensive manner. The monkey, ape and banana peel emojis are all listed under the hate speech category; however, moderators are advised to “use additional context to determine if the emoji is used in a violating manner”.
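To make that “additional context” rule concrete, here is a minimal Python sketch of how context-dependent emoji flagging might work. The emoji set mirrors the ones named in the leaked documents, but the context signals, thresholds and outcomes are entirely hypothetical, not Facebook’s actual system.

CONTEXT_DEPENDENT_EMOJIS = {
    "\U0001F412",  # monkey
    "\U0001F98D",  # gorilla (ape)
    "\U0001F34C",  # banana
}

def classify_comment(text, context):
    """Return 'allow', 'review' or 'remove' for a single comment.

    `context` stands in for metadata a real system might have:
    who is being replied to, recent report volume, and so on.
    """
    if not any(e in text for e in CONTEXT_DEPENDENT_EMOJIS):
        return "allow"
    # Per the leaked guidance, the emoji alone is not a violation;
    # additional context has to tip the decision.
    if context.get("target_recently_mass_reported") and context.get("target_in_protected_group"):
        return "remove"
    if context.get("commenter_previously_reported"):
        return "review"
    return "allow"

# A string of monkey emojis posted on a player's page during a wave of abuse:
print(classify_comment("\U0001F412\U0001F412\U0001F412",
                       {"target_recently_mass_reported": True,
                        "target_in_protected_group": True}))  # -> remove

The difficulty is visible even in this toy version: the same emoji is harmless in one comment and hateful in another, and everything hinges on signals the classifier may not have.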
Another internal document obtained by The Guardian, this one dating from December 2020, gives further insight into the level of detail moderators are given when making judgements on emojis. Emojis listed as ‘praise/support’ in the document (love hearts, thumbs up and heart eyes) are considered violations when they celebrate offensive content such as bullying.
While this type of human moderation runs alongside AI moderation on most platforms, the combined approach did not hold up in the hours after the match.
This may be due to the reactive approach social media companies have taken to date, with moderators only acting after an offensive post or comment goes live rather than filtering it beforehand. And while emojis are harder for AI algorithms to contextualise and understand, the episode points to a broader moderation problem these tech companies face.
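The difference between the two approaches can be sketched in a few lines of Python. In the reactive flow below the comment is published first and inspected afterwards; in the proactive flow a classifier screens it before it goes live. The function names and the trivial classifier are illustrative only, not any platform’s real pipeline.

from queue import Queue

review_queue = Queue()  # items awaiting human review

def publish(comment):
    print(f"now visible to everyone: {comment!r}")

def looks_abusive(comment):
    # Stand-in for a trained classifier; here, just an emoji check.
    return any(e in comment for e in ("\U0001F412", "\U0001F34C"))

def reactive_flow(comment):
    publish(comment)           # the comment goes live immediately...
    review_queue.put(comment)  # ...and is only inspected after the fact

def proactive_flow(comment):
    if looks_abusive(comment):
        review_queue.put(comment)  # held back; never shown unless cleared
    else:
        publish(comment)

proactive_flow("what a save!")          # published straight away
proactive_flow("\U0001F412\U0001F412")  # queued for review, not shown

The trade-off is plain: proactive screening spares the target entirely, but a false positive silences a legitimate comment until a human clears it.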
Some Facebook employees have voiced frustration at this reactive approach. BuzzFeed senior tech reporter Ryan Mac shared a thread of such internal reports, with one employee saying, “it seems this was totally preventable”. Another added: “We get this stream of utter bile every match, and it’s even worse when someone black misses…We really can’t be seen as complicit in this.”
According to the BuzzFeed reporter, Facebook opened an incident report to investigate the abuse directed at the players, allowing the company’s policy and security teams to gauge how to respond.
Facebook’s figures on policy enforcement for the last three months of 2020 show that 97 per cent of the hate speech taken down by the platform was detected proactively, before any user reported it. The platform reportedly removed 26.9 million pieces of hate speech from Facebook and 6.6 million from Instagram during that period.
Within Facebook there are multiple threads of employees who are frustrated that FB is not doing enough to stop racist abuse on the accounts of English players Bukayo Saka & Marcus Rashford. They have been flagging comments for 12+ hours. One employee: “We MUST act faster here.”
— Ryan Mac