For years, the major social platforms have been criticised for failing to counteract harmful content. Their COVID-19 response demonstrates that they can do more to police their platforms, but this emergency effort is not a model for future content moderation.
Last week, the major social platforms issued a joint statement affirming their commitment to "elevating authoritative content" while combating fraud and misinformation.
Facebook, the world’s largest social media platform and the parent company of Instagram and WhatsApp, outlined a series of COVID-19 actions. In addition to banning advertising that seeks to profit from the crisis, Facebook is removing “known harmful misinformation”, and promoting resources from the WHO and national health authorities. Google, YouTube, and Twitter have undertaken similar actions.
The platforms’ swift coordinated action may seem surprising given their previous reluctance to address harmful content. In recent months, Facebook refused to either fact-check or remove political ads with false claims. So, are the new measures likely to change the platforms’ long-term stance on content moderation?
COVID-19 is an unprecedented global emergency, and the actions the platforms are taking need to be understood in that context. Many of their practical and political arguments against policing content are likely to remain when this crisis is over.
The issues that fuel disinformation are often politically sensitive and inherently subjective: after all, as Stephen Coleman (2018) argues, “political truth is never neutral, objective or absolute – that’s why it’s political”. Facebook argues that it is not in a position to “referee political debates”, while Twitter banned all political advertising to avoid the difficulty of trying. Beyond advertising, there are obvious technical limits to policing enormous volumes of global content with any kind of accuracy.
In contrast, it is relatively uncontroversial to intervene in a global health crisis. In some cases, the interventions are technically straightforward. For example, YouTube has placed official information cards beneath all virus-related videos. However, disinformation videos that are not explicitly tagged with virus-related terms or flagged by moderators remain available without these cards.
More targeted interventions are difficult at any time, but especially now that social distancing restrictions have reduced staff and made the platforms more reliant on AI to moderate content. Last week, Facebook blamed a ‘bug’ in its systems for mistakenly marking legitimate COVID-19 news stories as spam.
Moreover, with an intense focus on COVID-19 content, the platforms have warned that there are likely to be problems with content moderation more generally. For example, YouTube anticipates an increase in video removals:
Users and creators may see increased video removals, including some videos that may not violate policies. We won’t issue strikes on this content except in cases where we have high confidence that it’s violative. If creators think that their content was removed in error, they can appeal the decision and our teams will take a look. However, note that our workforce precautions will also result in delayed appeal reviews.
Despite the intense effort, disinformation is still getting through. YouTube videos linking COVID-19 to 5G conspiracy theories and ‘miracle cures’ (in the form of bleach) remain available and continue to be shared on other platforms. Meanwhile, Facebook has erroneously approved advertising that promotes virus conspiracies.
Facebook approved a coronavirus ad that has been running since Friday pushing the false bioweapon conspiracy theory & urges "the United States to invade Wuhan China and seize the laboratory that created the virus."
— Alex Kaplan (@AlKapDC) March 23, 2020
The cornerstone of the platforms’ COVID-19 efforts is the promotion of official information sources; the old problems of monitoring content and detecting disinformation at scale have not gone away. So, while the platforms’ actions are to be welcomed, it would be a mistake to view this emergency response as the new norm for content moderation.