Words by Charis Papaevangelou
The last decade has seen the rise of technology giants, so-called “Big Tech”; with them rose both their responsibilities and the critical voices demanding scrutiny and greater accountability. One of the ways Big Tech companies, and social media platforms in particular, have been trying to fend off regulation is through transparency reports: cumulative reports that share limited information and data, primarily on the content moderation actions platforms have taken to keep problematic content off their services.
This model of corporate responsibility has also been embraced by public authorities. Since 2016, the European Commission has introduced voluntary measures like the Code of Conduct on countering illegal hate speech online (2016) and the Code of Practice on Disinformation (2018). However, not only is their effectiveness questionable (as I’ll explain later), but the model of transparency reporting also further consolidates the privatisation of regulation and increases platforms’ power.
Throughout the COVID-19 pandemic, social media platforms have been trying to keep their services clean of mis- and dis-information related to the virus and the vaccines. Their efforts have been met with severe criticism by governments around the world; President Joe Biden stated on camera that “they are killing people”, in what appeared to be a sensationalist and exaggerated demand for Facebook and other platforms to do more. Content moderation has always been a complex task, and the pandemic, which also rapidly shifted the burden from human moderators to automated means, has multiplied that complexity.
So, in June 2020, the Commission issued the Joint Communication “Tackling COVID-19 disinformation – Getting the facts right”. This obliged the Code’s signatories (Facebook, Google, Mozilla, Microsoft, TikTok, and Twitter) to report monthly on their policies and the actions taken to address COVID-19 disinformation; it also demanded that they promote authoritative news sources (e.g. the World Health Organisation), let users know when they have encountered false information, remove manipulative content like deepfakes, and increase scrutiny of advertising to avoid the spread and monetisation of disinformation. From August 2020 to April 2021, a total of 47 transparency reports were produced.
The European Regulators Group for Audiovisual Media Services (ERGA) was tasked with evaluating said reports and the platforms’ overall compliance with the Code. As part of its role within ERGA, the Broadcasting Authority of Ireland (BAI) commissioned the Institute for Future Media, Democracy and Society at Dublin City University (DCU FuJo) to undertake research on the signatories' transparency reports. The forthcoming report analyses the 47 reports in depth and shares key findings with public authorities, aiming to assist policymakers working on new regulatory frameworks for online content, such as the Digital Services Act (DSA).
In this article, I’d like to share with you some of the findings we drew from analysing the reports from a lexicological standpoint: essentially, what kind of words and phrases they use, how often, and how similar they are to one another. This is important to emphasise because it demonstrates how the lack of a standardised reporting format leads each signatory to report in an idiosyncratic manner and, thus, to incoherence. The forthcoming DCU FuJo report finds that considerable effort is required to understand what actions platforms have taken and how these actions relate to their policies, to the Code, and to the broader EU landscape. In particular, the free-text nature of the reports affords signatories the opportunity to produce repetitive and irrelevant information.
For this analysis we used IRaMuTeQ, an open-source software package developed by LERASS, an applied social sciences lab at the University of Toulouse III. We ran this analysis mainly to see how effective the current format of the transparency reports is and what kind of information platforms include in them. Here’s what we found out (a rough sketch of the kind of similarity check involved follows the findings below):
- Facebook and Google prioritised the removal of coordinated manipulative action to spread disinformation. However, the vast majority of these so-called “inauthentic behaviour networks” did not concern the EU, so it is safe to assume they were included in the transparency reports as a way of touting the companies’ work in spite of its irrelevance. That is also a shortcoming of the current Code: it does not clearly require platforms to report at the member-state level, which leaves them plenty of leeway to do as they please. Facebook’s reports were also quite repetitive, while Google’s were nearly identical; this makes them harder to read and, as they become redundant, gives the impression of sloppy work.
- Microsoft repeatedly advertised its sponsorship of NewsGuard, a fact-checking tool, which made NewsGuard’s browser extension free of charge for users of Microsoft’s browser, Edge, and integrated NewsGuard’s services into Microsoft’s search engine, Bing. Again, this reads more as a self-promotional stunt than as meaningful information on tackling COVID-19 disinformation in the EU and its member-states. Microsoft’s reports were also nearly identical to one another, as was the case with Facebook and Google.
- Mozilla is a special case: it only filed two reports and, in our collective unconscious, it is more of a civil society organisation than a greedy Silicon Valley corporation. In any case, Mozilla chose to prioritise actions related to its Firefox web browser and its bookmarking service, Pocket. Specifically, it described how it tried to promote authoritative news sources as reading recommendations.
- Twitter chose to showcase the work it has done with the research community and, in particular, the increased access to data it has provided researchers and academics with. While, as researchers, we welcome this and hope other platforms will follow suit, it does not amount to much within the context of the Code.
- Surprisingly, TikTok, the latest platform to sign up to the Code, was the only one to report actual metrics and data concerning specific member-states. TikTok’s transparency reports were also rather short compared to the others and contained little repetition, which made them easy to read and useful information easy to retrieve.
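To make the method a little more concrete, here is a minimal Python sketch of the kind of similarity check described above. It is not the actual analysis (we used IRaMuTeQ, and the file paths below are hypothetical placeholders); it simply scores how lexically close one signatory’s monthly reports are to one another, using TF-IDF vectors and cosine similarity, where scores near 1.0 indicate near-identical wording of the sort we observed in Google’s and Microsoft’s reports.

```python
# Illustrative sketch only: the study itself used IRaMuTeQ. This approximates
# one of its questions -- how lexically similar a signatory's monthly
# transparency reports are to one another.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical directory of plain-text exports of one signatory's reports.
reports = sorted(Path("reports/google").glob("*.txt"))
texts = [p.read_text(encoding="utf-8") for p in reports]

# Turn each report into a TF-IDF weighted bag of words.
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(texts)

# Pairwise cosine similarity: values close to 1.0 flag near-identical reports.
similarity = cosine_similarity(matrix)
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        print(f"{reports[i].name} vs {reports[j].name}: {similarity[i, j]:.2f}")
```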
The above is but a glimpse of the forthcoming report. The Commission’s “Guidance on strengthening the Code” is well timed to improve platforms’ efforts in combating mis- and dis-information and to make them binding. The transparency reports should be an important resource for policymakers and the public alike; in their current format, they are not living up to the task, and that is a problem that should concern both signatories and the authorities. In most cases the reports are repetitive and full of promotional material, with information and data that, more often than not, are irrelevant to the EU and its member-states. Certainly, as platforms themselves have argued, there is no ‘magic button’ that will automatically deliver data that are both useful and properly structured. Nevertheless, a certain level of standardisation is necessary for effective monitoring. The Commission and regulators should take notice and ensure that the Code is worthwhile and not treated by signatories as a mere communication exercise.