Will new regulatory powers address disinformation?

20 June 2024

Disinformation has always existed, but it rose to the status of a global problem in 2016, when some blamed it for the Brexit referendum and the election of Donald Trump. The prevalence of disinformation during the pandemic cemented the idea that false information is a serious threat to social cohesion. In 2024, the problem seems worse than ever. Advances in Artificial Intelligence (AI) mean it is even easier to create and amplify manipulative content, while national and international tensions provide fertile ground for disinformation to flourish. Earlier this year, experts consulted by the World Economic Forum ranked disinformation as the greatest immediate risk to global stability.

Expert concerns are mirrored by the public. In Ireland, 71% of respondents to the Reuters Digital News Report (DNR) reported feeling concerned about ‘what is real and fake on the internet’. When this question was first posed in 2018, the figure was 57%. A high level of concern is replicated across all age groups, although Irish 18-24s are less concerned (63%) than the over-55s (74%).

It is difficult to know what exactly drives this concern. In part, it may be the volume of disinformation, or perceived disinformation, people are exposed to online. When asked about the kinds of false or misleading information they had encountered in the week preceding the survey, respondents cited the Israel-Palestine conflict (38%), immigration (37%) and, surprisingly, Covid-19 (33%). Yet exposure alone is insufficient to drive concern; it must be accompanied by a sense that the disinformation is harmful in some way.

In public discourse, disinformation is frequently cited as a factor in troubling developments, including declining vaccine uptake and aggression towards immigrants. At the same time, there are regular warnings about the need to protect the integrity of elections from information manipulation and ‘deepfakes’ created by generative AI. Little wonder, then, that people are concerned. Yet the real challenge is to maintain a clear understanding of the risks without overstating them. In a recent talk at the Institute of International and European Affairs, Lutz Güllner of the European External Action Service argued that we should be wary of overhyping the threat of AI and deepfakes when there is ample evidence that most disinformation is far simpler.

There is also a risk of over-hyping counter-disinformation measures. Headlines tell us that the EU is “cracking down” on big tech and has “tough disinformation laws”. This is true to an extent, but the media version of the story often creates a false impression of how enforcement actually works. The misunderstanding seems to be the result of very effective framing on the part of the EU and wishful thinking on the part of those who don’t have the time or capacity to read dense EU policy documents.

The EU has two primary mechanisms to address disinformation. The EU Code of Practice on Disinformation is a voluntary initiative that encourages platforms to take measures to counter the spread of disinformation. Because it is voluntary, there are serious questions about its effectiveness. In contrast, the Digital Services Act (DSA) is a regulation that sets legally binding rules; its obligations for the largest platforms have applied since August 2023. Put simply, the DSA states that major tech platforms can be fined up to six percent of their global annual turnover if they fail to remove illegal content or if they fail to protect users from online harm.

The important thing to note is that most disinformation is not illegal; it is considered ‘legal but harmful’. If a piece of content is illegal, it must be removed. Harm, however, is conceived more broadly as a consequence of platform design and practices. For example, the European Commission opened an investigation into whether Meta has failed to address “deceptive advertisements, disinformation campaigns and coordinated inauthentic behaviour”. This investigation is not about individual pieces of content. It is about whether Meta has put sufficient resources into assessing and mitigating disinformation risks.

The difference here is between immediate and long-term impacts. In March 2024, the European Commission published a list of recommended measures for platforms “to mitigate systemic risks” during elections. If the platforms are unable to explain how they are assessing or mitigating risks, they will be liable for fines. This will be a lengthy process. Ultimately, the DSA should reduce the volume of disinformation on major platforms, but it will likely take time.

Publications

Reuters Digital News Report Ireland 2024 (Report, 2024)

Related Projects

Reuters Institute Digital News Report

The Reuters Institute Digital News Report is the world’s largest international comparative survey of the major trends in digital news consumption. It is widely used by industry, analysts, and researchers across the world. Since 2015, Coimisiún na Meán (formerly Broadcasting Authority of Ireland) has funded FuJo to undertake the analysis for the Irish report. Annual reports are available for: 2023, 2022, 2021, 2020, 2019, 2018, 2017, 2016, and 2015. ...

Participants

Dr Eileen Culloty

Dr Eileen Culloty is Deputy Director at the DCU Institute for Media, Democracy and Society (FuJo) and an Assistant Professor in the DCU School of Communications. She coordinates the Ireland EDMO Hub of the European Digital Media Observatory, which aims to advance research on disinformation, support fact-checking and media literacy, and assess the implementation of the EU Code of Practice on Disinformation. Eileen’s book, co-authored with Jane Suiter, Disinformation and Manipulation in Digital ...
