The CovidCheck case studies highlight significant issues for signatories

08 October 2021

Traditionally, conspiracy theories were consigned to the outer fringes of society: something you only encountered if you went looking for them. 

With the rise of Big Tech (namely social media platforms) coinciding with significant public moments of the past decade, such as the election of former US President Donald Trump, the underground ecosystem of conspiratorial actors began to surface and find a more public, open space. 

To mitigate the spread of conspiracies and disinformation online, the European Commission introduced its Code of Practice on Disinformation in 2018. This voluntary framework sets out commitments and standards for the Code's signatories to implement in order to combat disinformation on their services. 

Fast forward to 2020, and the onset of COVID-19 created a perfect storm for false information to cross borders and proliferate at an unprecedented rate. This was partially expected: as Vox has explained, periods of mass uncertainty within society are fertile ground for alternative truths that explain the crisis at hand. 

While social media platforms and online services tightened their own self-regulatory measures to fight the growing volume of COVID-19 disinformation, the European Commission launched a Joint Communication in June 2020 to enhance, and more importantly monitor, these additional efforts. 

The Joint Communication, “Tackling COVID-19 disinformation – Getting the facts right”, obliged the Code’s signatories (Facebook, Twitter, Google, Microsoft, Mozilla and TikTok) to produce monthly transparency reports on how they were tackling COVID-19 disinformation and, later, vaccine disinformation. 

This included demands to promote authoritative content from leading public health organisations; to improve users’ awareness when encountering false information; and to tackle the monetisation of disinformation through online advertising. 

As part of its role within the European Regulators Group for Audiovisual Media Services (ERGA), the Broadcasting Authority of Ireland (BAI) commissioned FuJo to undertake research on the signatories’ transparency reports. The resulting CovidCheck report analyses the 47 transparency reports and provides case studies on Facebook and TikTok to verify the actions those platforms reported taking against disinformation.

Here, we present a summary of the key findings from these case studies.

Facebook 

Although each signatory was asked to provide EU and member state data, that is, a breakdown of their policies related to COVID-19 disinformation for each EU country, this commitment has generally not been observed to date. Instead, signatories tended to provide aggregated figures, such as “24 million views”, in their transparency reports. 

In the absence of this country-level data to verify reported actions, DCU FuJo cooperated with the Institute for Strategic Dialogue (ISD) to undertake a case study of Irish Facebook Groups and Pages known to propagate COVID-19 vaccine misinformation. The analysis sought to verify the implementation of Facebook’s reported actions and policies and was made possible through Facebook’s public insights tool CrowdTangle. 

In total, we examined posts contained in 37 Irish Facebook Groups and 49 Irish Facebook Pages that were identified as proponents of COVID-19 vaccine disinformation. 

Across our analysis period of January to April 2021, we sampled one week of content per month. Our analysis aimed to verify reported actions in four areas: factchecking, content labelling, content removal, and informing users.


|        | Total active communities | Total posts |
|--------|--------------------------|-------------|
| Groups | 37                       | 219         |
| Pages  | 49                       | 127         |
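
To illustrate how this kind of verification dataset can be assembled, the sketch below outlines one way posts from a set of monitored Groups and Pages could be pulled for the sampled weeks using the CrowdTangle API. It is a minimal, hypothetical example: the token, list IDs, sample dates and parameter choices are placeholder assumptions for illustration, not details taken from the CovidCheck study.

```python
# Illustrative sketch only: pulling one sampled week of posts per month
# (January-April 2021) for a set of monitored Groups/Pages via the
# CrowdTangle API. The token, list IDs and sample weeks are placeholders,
# not the values used in the CovidCheck study.
import requests

API_TOKEN = "YOUR_CROWDTANGLE_TOKEN"   # dashboard API token (placeholder)
LIST_IDS = "12345,67890"               # CrowdTangle lists holding the monitored Groups/Pages (placeholders)
SAMPLE_WEEKS = [                       # one illustrative week per month
    ("2021-01-04", "2021-01-10"),
    ("2021-02-01", "2021-02-07"),
    ("2021-03-01", "2021-03-07"),
    ("2021-04-05", "2021-04-11"),
]

def fetch_week(start_date, end_date):
    """Fetch one page of posts published by the monitored accounts in the given window."""
    resp = requests.get(
        "https://api.crowdtangle.com/posts",
        params={
            "token": API_TOKEN,
            "listIds": LIST_IDS,
            "startDate": start_date,
            "endDate": end_date,
            "sortBy": "date",
            "count": 100,  # first page only; a full collection would follow the pagination links
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["posts"]

all_posts = [post for start, end in SAMPLE_WEEKS for post in fetch_week(start, end)]
print(f"Collected {len(all_posts)} posts across {len(SAMPLE_WEEKS)} sampled weeks")
```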

Factchecking

In terms of Facebook’s factchecking programme, numerous inconsistencies were identified, many of which create anomalies from the perspective of both users and disinformation actors. 

Firstly, the analysis identified 35 posts containing false claims that had been debunked by Facebook’s factchecking partners but to which no factcheck was applied. This points to a wider problem with the signatories’ approach to factchecking, as many of these claims had been factchecked elsewhere when posted in different formats, despite Facebook having similarity-detection tools designed to mitigate exactly this problem. 

This trend became evident in the case of the active anti-lockdown campaigner depicted below, who presented claims about people dying as a result of the COVID-19 vaccines; these claims were factchecked by Facebook’s factchecking partners at the time of circulation. However, as shown below, a factcheck was applied when a user shared the video by posting a BitChute URL (left) but not in a separate instance where a user shared a link to a website hosting the same video (right).

Additionally, Facebook’s decision not to factcheck political actors under its Programme Policy can allow disinformation to be posted unchecked. In the case of the prominent anti-lockdown figure discussed above, campaign posts were observed on Facebook channels affiliated with the figure during her time as an election candidate, but they were ineligible for factchecks despite their anti-vaccine sentiments. Factchecking efforts could only resume post-election, and even then similar claims went without a factcheck.

Content labelling 

In March 2021, Facebook announced it would start adding labels to posts about COVID-19 vaccines to “point people to the COVID-19 Information Centre” and to “show additional information from the World Health Organisation”. As such, these labels should be visible on all posts about COVID-19 vaccines. When we reviewed the March and April posts to verify this policy, various inconsistencies were identified.

Firstly, we found that 39 of the 53 posts about COVID-19 vaccines in the Groups were labelled, while 56 of the 66 posts in the Pages were labelled. Moreover, similar posts carried a label in some instances but appeared without one in others. This became particularly relevant in the case of links to third-party websites.

For example, the image below shows two versions of the same story from the same source. They were posted to the same Page one day apart, but only one has a label. 

It is also worth noting that users can choose not to view these labels by clicking the ‘x’ function in the top right-hand corner of the label. 

Content removal

In December 2020, Facebook announced that it would start removing false claims about COVID-19 vaccines that had been debunked by leading global health organisations. This includes false claims about the safety, efficacy, ingredients, or development of vaccines, as well as vaccine conspiracy theories. 

Notably, Facebook does not provide specific definitions of ‘content’ and ‘claims’ in this policy, so it is unclear whether it applies only to posts or also extends to comments sections. However, other studies have indicated that the policy does apply to comments, and as such, violating comments should be removed. Accordingly, this analysis took into account content that appeared in both Facebook posts and comments sections.

Of the 219 posts about COVID-19 vaccines in the Groups, 34 were deemed to merit removal. An additional 41 posts hosted violating comments. Of the 127 posts about COVID-19 vaccines in the Pages, six were deemed to merit removal and an additional 52 posts hosted violating comments. Violating content was more common in Groups than in Pages, and in comments sections than in posts.


|        | Posts about COVID-19 vaccines | Posts deemed to contravene FB policy | Comments sections hosting content deemed to contravene FB policy |
|--------|-------------------------------|--------------------------------------|-------------------------------------------------------------------|
| Groups | 219                           | 34                                   | 41                                                                 |
| Pages  | 127                           | 6                                    | 52                                                                 |

For clarity, when the content in a post and the accompanying comments were both found to be in violation, this is reported as one instance. Similarly, when there are multiple violating comments under a post, these are grouped as one instance. 

Below are some examples of posts and comments judged to violate Facebook’s policy and thus merit removal. Notably, some violating comments appeared under posts that did not themselves violate content policies, such as factual news articles. This points to the need for a framework to moderate comments sections, as harmful comments are not confined to harmful posts; they also appear under, and blur the lines around, factual posts. 

Although some comment-moderation tools are available to Group admins, the admins of the examined Groups were often proponents of COVID-19 disinformation themselves and regularly posted violating content or content that was later factchecked. As such, they are unlikely to sanction users who post disinformation in comments.
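
As a purely illustrative sketch of the counting convention described above, the snippet below tallies violating posts and violating comments sections from hypothetical data. The field names, the sample records, and the assumption that a post whose content and comments both violate policy is counted only once (under posts) are ours, not the report’s.

```python
# Hypothetical tally following the counting convention described in the text:
# a violating post counts as one instance even if its comments also violate,
# and any number of violating comments under a single post counts as at most
# one additional "comments section" instance.
posts = [
    {"id": "p1", "post_violates": True,  "violating_comments": 3},  # post and comments violate -> one post instance
    {"id": "p2", "post_violates": False, "violating_comments": 2},  # only comments violate -> one comments-section instance
    {"id": "p3", "post_violates": False, "violating_comments": 0},  # nothing violates
]

violating_posts = sum(1 for p in posts if p["post_violates"])
violating_comment_sections = sum(
    1 for p in posts if p["violating_comments"] > 0 and not p["post_violates"]
)

print(violating_posts, violating_comment_sections)  # -> 1 1
```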

Informing users 

Finally, in terms of improving user awareness, Facebook announced in April 2020 that it would apply an “educational pop-up” to COVID-19 related Groups.

This policy did not reference Pages; however, our analysis indicates that Facebook irregularly applied these pop-ups to some of the Pages we examined. Overall, the vast majority of Groups and Pages did not receive the pop-up despite posting daily about COVID-19 and vaccines: of 22 sampled Groups, three displayed the pop-up, while five of 18 sampled Pages did so.

Facebook also announced in March of this year that it would alert users before they joined a Group that had violated Facebook’s Community Standards. The warning was shown in 18 of the 22 Groups we examined and in just 6 of the 18 Pages we examined. Considering the same level of vaccine misinformation was observable in Pages as in Groups, it would seem reasonable for these warnings to be systematically applied to Pages too. 

In that same announcement, Facebook also noted that it would “limit invite notifications” for violating Groups so that people were less likely to join. An analysis conducted in May 2021 found that some Groups welcomed no new members over the course of the month. However, one Group welcomed approximately 1,723 new members; this Group neither displayed an educational pop-up nor warned users about its violation of Community Standards, despite both measures being required. 

The above mini-report consolidates some of the key findings contained in the CovidCheck report. A similar, more limited analysis was conducted on TikTok, where comparable inconsistencies were identified, particularly in the areas of content labelling and the promotion of authoritative content. As in the Facebook study, the comments section was a key source of disinformation.

FuJo has been invited to participate in workshops run by ERGA as it builds upon the report’s findings to issue its own recommendations. The report is envisioned to coincide with the guidance issued to strengthen the Code, so that signatories can review best practices and implement them to ensure they are meeting their commitments to ward off harmful disinformation on their services. Indeed, recommendations such as comments-section moderation are outlined in the report and should be considered by relevant signatories and stakeholders to plug the existing gap that allows disinformation to thrive. 

