Evaluating social media compliance with European Commission-mandated transparency reports

23 August 2021

Words by Charis Papaevangelou

The last decade has seen the rise of technology giants, the so-called “Big Tech”, and with them the responsibilities they carry and the critical voices demanding scrutiny and better accountability. One of the ways that Big Tech companies, and social media platforms in particular, have been trying to fend off regulation is through transparency reports: cumulative reports that share limited information and data, primarily on the content moderation actions platforms have taken to keep problematic content off their services.

This model of corporate responsibility has also been embraced by public authorities. Since 2016, the European Commission has introduced voluntary measures like the Code of Conduct on countering illegal hate speech online (2016) and the Code of Practice on Disinformation (2018). However, not only is their effectiveness questionable (as I’ll explain later), but the model of transparency reporting also further consolidates the privatisation of regulation and increases platforms’ power.

Throughout the COVID-19 pandemic, social media platforms have been trying to keep their services clean of mis- and dis-information related to the virus and the vaccines. Their efforts have been met with severe criticism from governments around the world; President Joe Biden stated on camera that “they are killing people”, in what appeared to be a sensationalist and exaggerated demand for Facebook and other platforms to do more. Content moderation has always been a complex task, and the pandemic, which rapidly shifted the burden from human moderators to automated means, has multiplied its complexity.

So, in June 2020, the Commission issued the Joint Communication “Tackling COVID-19 disinformation – Getting the facts right”. This obliged the Code’s signatories (Facebook, Google, Mozilla, Microsoft, TikTok, and Twitter) to report monthly on their policies and the actions taken to address COVID-19 disinformation; it also demanded that they promote authoritative news sources (e.g. the World Health Organisation), let users know when they’ve encountered false information, remove manipulative content like deepfakes, and increase scrutiny of advertising to avoid spreading and monetising disinformation. From August 2020 to April 2021, a total of 47 transparency reports were produced.

The European Regulators Group for Audiovisual Media Services (ERGA) was tasked with evaluating said reports and the platforms’ overall compliance with the Code. As part of its role within ERGA, the Broadcasting Authority of Ireland (BAI) commissioned the Institute for Future Media, Democracy and Society at Dublin City University (DCU FuJo) to undertake research on the signatories’ transparency reports. The forthcoming report analyses the 47 reports in depth and offers key findings for public authorities, aiming to assist policymakers in their work on new regulatory frameworks for online content, like the Digital Services Act (DSA).

In this article, I’d like to share with you some of the findings we drew from analysing the reports from a lexicological standpoint – essentially, which words and phrases they use, how often, and how similar the reports are to one another. This is important because it demonstrates how the lack of a standardised reporting format leads each signatory to report in an idiosyncratic manner and, thus, to incoherence. The forthcoming DCU FuJo report finds that considerable effort is required to understand what actions platforms have taken and how these actions relate to their policies, to the Code, and to the broader EU landscape. In particular, the free-text nature of the reports affords signatories the opportunity to produce repetitive and irrelevant information.
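To give a concrete sense of what such a lexical comparison involves, here is a minimal Python sketch that scores how similar a set of report texts are to one another. It is an illustrative stand-in using scikit-learn, not the IRaMuTeQ workflow we actually used (described below); the `reports/` directory, file names, and the 0.8 threshold are hypothetical.

```python
# Minimal sketch: comparing the lexical similarity of transparency reports.
# Assumes the report texts have already been extracted into plain-text files
# (hypothetical paths); illustrative only, not the IRaMuTeQ analysis itself.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical directory of extracted report texts, e.g. "facebook_2020-09.txt"
report_files = sorted(Path("reports/").glob("*.txt"))
texts = [f.read_text(encoding="utf-8") for f in report_files]

# Build TF-IDF vectors: distinctive words weigh more, while wording
# that appears across every report contributes less.
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(texts)

# Pairwise cosine similarity: values close to 1.0 indicate reports
# that reuse largely the same vocabulary.
similarity = cosine_similarity(tfidf)

for i, first in enumerate(report_files):
    for j, second in enumerate(report_files):
        if j > i and similarity[i, j] > 0.8:  # arbitrary threshold for the sketch
            print(f"{first.name} and {second.name}: similarity {similarity[i, j]:.2f}")
```

In a comparison like this, very high scores between a signatory’s successive monthly reports would, for instance, point to the kind of copy-and-paste repetition discussed above.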

We used IRaMuTeQ for this analysis, an open-source text-analysis tool developed by LERASS, a laboratory of applied social sciences at the University of Toulouse III. We ran this analysis mainly to see how effective the current format of the transparency reports is and what kind of information platforms include in them. Here’s what we found out:

The above is but a glimpse of the forthcoming report. The Commission’s “Guidance on strengthening the Code” is well timed to improve platforms’ efforts in combating mis- and dis-information and to make those commitments binding. The transparency reports should be an important resource for policymakers and the public alike. In their current format they’re not living up to the task, and that’s a problem that should concern both signatories and the authorities. In most cases the reports are repetitive and full of promotional material, with information and data that are more often than not irrelevant to the EU and its member-states. Certainly, as platforms themselves have argued, there is no ‘magic button’ that will automatically deliver data that are both useful and in the right structure. Nevertheless, a certain level of standardisation is necessary for effective monitoring. The Commission and regulators should take notice and ensure that this Code is worthwhile and is not treated as a communication exercise by signatories.
