FuJo and the Provenance project have made a submission to the European Commission’s public consultation on the Digital Services Act (DSA).
The DSA is the EU’s ambitious plan to regulate the online environment across a range of areas including disinformation, political advertising, and illegal goods and content. The Commission is expected to present the legislative package before the end of the year. It is widely anticipated that this package will set out new rules for platform liability and content moderation and introduce sanctions for non-compliance.
Under the 2000 e-Commerce Directive, platforms are exempt from liability for the content they host. Such rules were designed to encourage the growth of online businesses and services, but the online environment has changed dramatically since then. It is now dominated by a small group of companies (Amazon, Google, and Facebook), while antitrust and consumer protection authorities have struggled to keep pace.
As outlined below, the FuJo and Provenance submission focuses primarily on the consultation sections that relate to disinformation.
Platform actions to minimise risk
Online platforms have taken steps to reduce the visibility of misleading advertising, scams, and disinformation. During Covid-19, the major platforms increased their efforts by directing users to official information sources and banning adverts for some medical products. Overall, however, there is little transparency regarding the decision-making behind the platforms’ actions or how they evaluate the effectiveness of those actions. Some actions disregard the evidence and best practice established by independent researchers. For example, those who interact with Covid-19 disinformation on Facebook receive generic messages and links to the World Health Organisation (WHO): contrary to best practice, there is no indication of what content triggered the message and no refutation of the false claims.
Under the EU Code of Practice on Disinformation, the major platforms pledged to increase transparency around advertising and to develop effective content policies for misleading information. The EU’s own evaluation of this self-regulatory Code identified major shortcomings, including the inability to sanction platforms for non-compliance. Taken as a whole, the platforms’ actions are piecemeal, inconsistent, and lack independent scrutiny. Importantly, platform actions tend to be concentrated in Europe and North America rather than worldwide.
The appropriateness of platform actions
Actions to minimise risks must be grounded in evidence and subject to evaluation. In addition, these actions require close scrutiny and oversight because they have knock-on consequences for all sectors that have become dependent on the platforms’ distribution and advertising channels (politics, journalism, business, and so on). More fundamentally, efforts to detect problem content and increase the transparency of advertising do little to address the underlying issues associated with the ad-tech industry, which underpins the business model of online platforms. The ad-tech industry generates revenue for bad actors, but there is little regulation over the use of personal data for targeted and programmatic advertising or real-time bidding (RTB).
Addressing online disinformation
It is relatively easy to create and distribute disinformation and other harmful content. Disinformation is created by a range of actors for political, ideological, and financial reasons, and it can be distributed at scale by manipulating algorithmic processes. Addressing financial incentives (i.e. the ad-tech industry), increasing capabilities for detecting disinformation and ‘inauthentic behaviour’, and helping users evaluate content are all important for counteracting this problem.
Of course, any actions to counter harmful content must also uphold the values of freedom of expression and diversity of opinion, as both are crucial to democratic societies. Protecting these values does not mean there can be no restrictions on online content; it means that any restrictions must be appropriately tailored and subject to robust oversight mechanisms. Currently, the platforms rely heavily on algorithmic methods to moderate content. While algorithmic methods have the advantage of rapidly addressing large volumes of content, there are major concerns about their reliability and the lack of oversight. For example, during Covid-19, the platforms acknowledged that erroneous content removals increased because the absence of human moderators made them more dependent on algorithmic decisions.
In less exceptional times, human oversight is typically outsourced to poorly resourced contractors who are ill-equipped to address the diversity of world cultures and languages. As such, current platform practices are insufficient to guarantee democratic integrity, pluralism, and non-discrimination. External oversight is required to ensure disinformation countermeasures are appropriate, fair, and effective.
At the same time, the platforms have largely declined to share relevant data with independent researchers. Consequently, independent researchers and policymakers are unable to determine the true scale and impact of online disinformation or to assess the effectiveness of interventions by the platforms and others. This is a significant obstacle because there are major conceptual, practical, and regulatory difficulties surrounding efforts to counteract disinformation. Definitions of disinformation vary, the volume of disinformation content is enormous, and there are ethical and democratic implications to any regulation of speech. In addition, research on how to counter disinformation is relatively new, so greater access to data is a prerequisite for understanding the problem, and greater accountability for platform actions is a prerequisite for protecting democratic norms.
Cooperation between platforms and authorities during crises
The Covid-19 pandemic underscored the need for reliable information. However, such crises should not be a pretext for eroding freedom of expression. As noted above, a clear understanding of the disinformation problem is impeded by lack of adequate data from the platforms. Consequently, there was insufficient evidence to support decision-making about the response to Covid-19 disinformation. The pandemic presented an opportunity to study how disinformation spreads and the effectiveness of interventions and content moderation systems. Compelling platforms to share data would go a long way to ensuring preparedness for future crises.
Transparency and content recommendations
Recommendation algorithms are powerful because they influence what people see online. Yet there is little accountability or transparency for end-users and content publishers. Although most platforms provide some information to explain why content is shown to particular users, this information is very limited. Moreover, increasing transparency does not address the more important issue of accountability. Most people do not understand how algorithms work, so providing more transparency about a highly complex system is insufficient. In contrast, accountability mechanisms could open up algorithms to independent audits and scrutiny. This matters for end-users generally, as there are concerns that recommendation algorithms play a role in radicalisation. It also matters for content publishers, who are heavily dependent on platforms and subject to sudden algorithm changes that affect their ability to reach audiences and earn revenue from the content they create.
Erroneous content removals
Platforms do not provide appropriate data about content takedowns, account bans, appeals procedures, or the outcome of appeals. In addition, the development of large-scale centralised systems for content removals is a concern. For example, the major platforms have cooperated on the Global Internet Forum to Counter Terrorism (GIFCT) to match uploads of terrorist content against a shared database, but there is little accountability regarding how content is entered into the database, how much content is flagged incorrectly, or how many appeals are issued and upheld. Many human-rights advocates have raised concerns about the erroneous classification of terrorist content and the potential loss of valuable human-rights records. To counteract these issues, there are credible proposals for increasing accountability around content moderation. For example, the Santa Clara Principles (2018) set out basic standards of transparency, while Article 19 proposes the development of multi-stakeholder ‘social media councils’ that could advise on standards and offer users a means of appeal.
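To make the mechanism at issue concrete, the sketch below is a deliberately simplified illustration of how an upload might be checked against a shared database of content fingerprints. It is not a description of GIFCT’s actual system: the function names are hypothetical, and real deployments typically rely on perceptual hashing that tolerates re-encoding rather than the exact digests used here.

```python
import hashlib

# Illustrative sketch only: a simplified stand-in for matching uploads against
# a shared database of fingerprints, in the spirit of the centralised removal
# systems discussed above. All names are hypothetical; real hash-sharing
# arrangements typically use perceptual hashes rather than SHA-256 digests.

shared_hash_database: set[str] = set()  # fingerprints contributed by member platforms


def fingerprint(content: bytes) -> str:
    """Return an exact-match fingerprint (SHA-256 hex digest) of the content."""
    return hashlib.sha256(content).hexdigest()


def contribute(content: bytes) -> None:
    """A member platform adds flagged content to the shared database.
    How and why content is added is precisely the accountability gap noted above."""
    shared_hash_database.add(fingerprint(content))


def check_upload(content: bytes) -> bool:
    """Return True if a new upload matches previously flagged content."""
    return fingerprint(content) in shared_hash_database


if __name__ == "__main__":
    contribute(b"previously flagged material")
    print(check_upload(b"previously flagged material"))  # True: would be blocked or sent for review
    print(check_upload(b"unrelated upload"))             # False: would be allowed
```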
The consultation is open until September 8th.