The recent sharp focus on the damage caused to users by harmful content has prompted a wave of debate and new regulatory proposals, including Ireland’s Online Safety and Media Regulation Bill (OSMR). The aim of such regulatory mechanisms is to tackle content which causes harm to users but is not in itself illegal. In effect, policymakers and regulators must define what amounts to harmful but legal, or “lawful but awful”, content. This creates a rather large grey area regarding what is harmful outside the realm of illegal content. Because the concept is so novel, few comparable regulatory mechanisms exist in practice. It is interesting, then, to compare the Irish approach with that taken in Australia and the UK, and with the EU’s upcoming Digital Services Act (DSA). Although these regulations raise important issues regarding freedom of expression and the potential to incentivise overblocking, those are not considered here.
Defining Harm in Ireland’s OSMR Bill
The Irish government first published its “dangerously vague” proposal for tackling online harms in 2019. Since then, there have been rounds of submissions and consultations, including a joint submission by the DCU FuJo Institute and the DCU Anti-Bullying Centre. In its current form, the OSMR Bill attempts to define specific kinds of harmful content while implementing a framework to identify other kinds of harmful content in the future. It defines “harmful online content” summarily as: cyber-bullying; content which promotes eating disorders; and content which encourages (or details methods of) self-harm or suicide.
Attempting to define all forms of harmful content within one act would be a mammoth task. The bill sidesteps that challenge by singling out harms which lend themselves to a very particular kind of moral panic. Few will find fault with the aim of reducing cyber-bullying, eating disorders, or self-harm. However, while curbing “content promoting eating disorders” may be a noble aim, it may not be realistic in practice. Such a broad definition is open to subjectivity and interpretation. Consider whether any of the following could be included: a webpage titled “how do I make myself lose weight fast?”; content which causes feelings of body dysmorphia; or content that triggers such feelings unintentionally, such as fitness pages on Instagram.
Instagram is a notable concern given revelations about the company’s internal research on harm to teenage girls. These studies found that Instagram worsens body-image issues for “one in three teen girls”; that “teens blame Instagram for increases in the rate of anxiety and depression”; and that 13% of British users “traced the desire to kill themselves to Instagram”. A liberal interpretation of the OSMR Bill would lend support to an argument for banning access to much Instagram content.
In response to these issues, various stakeholders have offered their own definitions of ‘online harms’, in many cases to a higher standard than those found in proposed bills. One succinct attempt comes from Digital Action’s online harms taxonomy, which details 41 rights that can be infringed by harmful content online. This taxonomy does not attempt to distinguish between legal and illegal content and instead focuses on the rights being infringed, the impact, and examples of the harms. The taxonomy is not without flaws, but it does highlight the failure of the Irish OSMR Bill to set out a clear, well-defined and effective scope for online harms.
Australia’s Online Safety Approach
Australia’s Online Safety Act and eSafety Commissioner have received much attention. The Irish focus on defining specific types of harmful content is similar to the approach taken in Australia in recent years. Much of this work was initiated in response to two issues: terrorism and domestic violence. Legislation passed in response to the 2019 Christchurch attack targeted “abhorrent violent material”, while other measures addressed “technology-facilitated domestic violence”. For the most part, however, the Australian approach has focused on ensuring that “standards of behaviour online… reflect the standards that apply offline”. Rather than looking at harmful but legal content, as in Ireland, Australia’s ‘online harms’ regulation has shied away from directly addressing the grey area on which Ireland’s bill attempts to shine a light.
The UK’s Online Harms Bill
Initially, in the UK’s ‘Online Harms White Paper’, the focus was almost entirely on access to illegal content or crimes committed through internet use. The only attention to harmful but legal content came in cases of “underage exposure to legal content”, which covers children accessing pornography or “inappropriate material”.
A different approach was taken in the ‘UK Online Safety Bill’, where a broad definition targets content which carries a “material risk” of “a significant adverse impact” on users. This definition is refined in terms of the risk that the content’s dissemination would have a “significant adverse physical or psychological impact on an adult or child of ordinary sensibilities”, and it takes into account the “number of users that may… have encountered the content” and “how easily, quickly and widely the content may be disseminated by the service”. This allows for a much broader scope than its Irish counterpart, but perhaps it is so broad that it fails to clarify exactly what sort of content ought to be targeted, thus failing to meaningfully narrow the grey area that exists in defining such content. Unsurprisingly, there are wide-ranging criticisms of the UK Bill, which was introduced on March 17th.
Addressing the Source of Harms
Regulatory mechanisms to curb online harms attempt to put out fires caused by social media without ever meaningfully addressing the source of such harms. As many others have argued, there is a policy failure to adequately understand the issues at hand. This failure has a number of overlapping causes, including a lack of understanding of internet structures and online markets; regulatory capture by tech lobby groups; and an ideological belief that innovation and the public interest are one and the same.
If we are to meaningfully address harm to users, we must identify the root causes rather than ask platforms to cooperate in firefighting missions.
There are pronounced inequalities in the online environment, which call attention to the power structures of the internet, the dominance of a limited group of platforms, and the impact that their decisions have on users and the public interest. To address this, a common understanding must be reached on what the public interest is in the context of digital technologies and how it should or could be served. Notably, the OSMR Bill is also about media regulation, yet it offers no vision for what media (traditional and digital) should look like in the 21st century. Unless policymakers take on these big conversations, it is difficult to see how the dominance of Big Tech will be challenged in anything but superficial ways.
Addressing the source of online harm requires an acute understanding of how platforms create and extract wealth, and a broader vision from policymakers of how they can positively shape and build a digital economy and society which prioritises user safety. Framing user safety and rights as a price to pay for innovation and technological advancement risks regulators accepting such innovation without understanding its consequences. Whether it be the Irish, British or Australian foray into Online Harms protection, the approach remains the same: public good is considered in the aftermath of harm already experienced, as a correction to failure.
Regulators and policymakers must be bold and challenge the current internet landscape by making public good an objective in and of itself: by creating proactive protections for user rights, and by curbing existing and growing violations. Without an understanding and analysis of the wider implications of Big Tech, any policy produced is at best reactive and limited, and at worst misguided, potentially exacerbating the harm it aims to reduce. The platforms that large online corporations offer users embody design choices which are neither incidental nor neutral, and neither is the impact they have on users. Harmful content on platforms would not exist without the platforms, and would not exist to the same degree without the design choices that allow it to spread. Moreover, these platforms exploit users by nudging, coercing and manipulating biases and interests in order to maximise revenue. Regulators respond to harmful but legal content as though it were an accidental fire, but it is more akin to arson: platforms ignore the realities of such content even when it is proven that they know the detrimental impact it can have.
The Digital Services Act offers some potential respite in its claims that it will “enhance the accountability and transparency of algorithms and deal with content moderation”, as well as introducing obligations to assess and mitigate risks, including stated aims to give users choice over profiling and input into platforms’ algorithmic processing. Despite this, we can see the remnants of a traditional interpretation of public good: user rights obligations are set aside for “micro and small enterprises” in the interests of fostering innovation, despite the relative ease with which online platforms can grow exponentially in a short space of time.