Twitter is piloting a new initiative, called Birdwatch, to crowdsource the identification of misleading tweets. Running in the US only, the crowdsourced evaluations will be visible on the Birdwatch site and will not influence the content people see on Twitter.
Twitter explained it this way:
Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.
(Twitter blog)
Crowdsourced evaluations have great potential to address false information at speed and scale. Projects like WeVerify already use crowdsourcing to help fact-checkers evaluate claims quickly by assessing how groups of users have responded to those claims. Studies have shown that this approach can be used to identify emerging misinformation. A recent pre-print study found that the content evaluations of a politically balanced crowd of ten laypeople matched those of professional fact-checkers.
However, the composition of the crowd matters: a skewed pool risks partisan evaluations or the hijacking of the process. The success of Birdwatch is therefore likely to depend on Twitter’s ability to attract and manage a balanced pool of participants.
Moderating content at scale
Last year, Twitter began labelling misinformation about Covid-19 and the US election, but the ability to detect problem content at speed and scale remains a key challenge. Currently, platforms rely on a combination of human moderators and algorithms.
Algorithms are used to detect hate speech, child pornography, copyright infringements, and terrorist propaganda. However, without human intelligence to review algorithmic judgments, there are significant risks of over-zealous and error-prone moderation. This was evident during the Covid-19 crisis when social-distancing measures restricted the availability of human moderators. As a result, the platforms acknowledged that groundless content removals were likely to increase. Early in the pandemic, for example, Twitter erroneously flagged tweets containing the words ‘oxygen’ and ‘frequency’ as conspiracy theories.
Meanwhile, human oversight for content moderation is often outsourced to poorly resourced contractors who are ill-equipped to evaluate the volume and diversity of content circulating on platforms. Crowdsourcing has clear benefits when set against these practices.
The pilot is also interesting because it increases civic participation in content moderation. Currently, users are largely limited to reporting content and blocking accounts.
Some platforms have attempted to increase oversight through the appointment of chief ethics officers and oversight boards, but these bodies are often conceived in legal terms. Mark Zuckerberg described Facebook’s 20-member Oversight Board as a ‘Supreme Court’ of independent adjudicators, while the NGO Article 19 advocates for the establishment of Social Media Councils that would have a similar function to press councils.
The Twitter initiative is interesting for pushing beyond these expert-led structures to give users a greater role in shaping the content norms of the platform. Wikipedia has shown that a community of users can self-govern in the interest of the common good. Of course, the logic of a commercial platform like Twitter isn’t easily compared to that of a non-profit like Wikipedia, and building a community of volunteers is no small task.