Scenario 1. A crucial election or referendum is approaching. You feel very strongly about the issues involved and decide to participate actively, campaigning for an independent candidate. Your funding and other resources are very limited. Since everyone is online, you decide to go online too. To reach all these people on a minimal budget, you automate the task. You write a script that creates Twitter accounts programmed to follow people (triggering follow-backs) and then to tweet and retweet your messages using hashtags and mentions. In this way your online presence is amplified and your message is more likely to reach more people, all within your budget. These ‘bots’ have helped to level the political playing field.
Scenario 2. You are caught in an industrial dispute that has become acrimonious. The other party has vastly more resources than you, the media seem to be supporting them, and your position receives minimal publicity. Someone on your team proposes using Twitter to put your message across. To reach a broader public, you write a script, much like the one sketched below, that creates Twitter accounts which follow other accounts and then tweet and retweet your messages. In this manner, your position is made clear, communicative balance is restored and people are better informed.
Scenario 3. Election time. A foreign power with ulterior motives and interests sets up a series of Twitter accounts which then begin tweeting and retweeting messages, thereby interfering in the national election by selectively amplifying and promoting a particular position.
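The technical barrier in scenarios 1 and 2 is strikingly low. Below is a minimal sketch in Python using the tweepy library; note that bulk account creation is not exposed by Twitter’s official API, so the sketch shows only the follow-and-amplify step for an account that already exists. All credentials, handles and hashtags are hypothetical placeholders.

```python
import tweepy

# Hypothetical credentials for one pre-existing campaign account.
# Bulk account creation is not available through the official API.
client = tweepy.Client(
    consumer_key="CONSUMER_KEY",
    consumer_secret="CONSUMER_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

# Follow target users in the hope of triggering follow-backs.
for username in ["example_voter_1", "example_voter_2"]:
    user = client.get_user(username=username)
    client.follow_user(user.data.id)

# Tweet the campaign message with hashtags and mentions.
client.create_tweet(text="Time for change! #Election @independent_candidate")

# Retweet the candidate's most recent posts to amplify them.
candidate = client.get_user(username="independent_candidate")
recent = client.get_users_tweets(candidate.data.id, max_results=5)
for tweet in recent.data or []:
    client.retweet(tweet.id)
```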
All three scenarios will involve illegal activity if the proposed Online Advertising and Social Media (Transparency) Bill 2017 is enacted. The difference? Only those in scenarios 1 and 2 will be liable to pay fines and/or face prison time; the foreign power will be beyond reach. So the question is: is this bill safeguarding the political public sphere? If so, from whom? The reality is that we do not yet know much about how bots operate, and there is very little evidence to support the general assumption that bots and fake accounts have direct persuasive effects.
Concern about the health of the digital public sphere, its toxicity, openness and transparency, is legitimate. We have seen levels of hate, racism and misogyny proliferate in online environments. We have seen unverified information circulating as fact. We have seen reports of foreign powers seeking to interfere in elections and referenda in the US and the UK. And the use of technology for malicious political purposes is not likely to stop any time soon, as malicious actors become ever more savvy. There are sockpuppet bots, part human, part machine, that steer conversations and plant ideas; approver and amplifier bots, which like, reply to, mention and retweet posts, seeking to enhance credibility and to spread certain terms and ideas further; and of course troll-bots, which harass and attack specific accounts belonging to journalists and activists. There is also evidence that almost half of all Twitter traffic is bot-driven, though not all of it is necessarily malicious. In Ireland, journalists such as Philip Boucher-Hayes report that since around last August their Twitter followings have grown at an improbable rate. While there is no clear understanding of how or why this happened, the impending referendum on repealing the 8th amendment may have a role to play. These are matters of concern, and we must begin thinking about how to resolve them. Regulation could be one way of addressing them; the development of critical skills could be another.
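Improbable follower growth of the kind Boucher-Hayes describes is, in principle, easy to spot from an account’s own history. Below is a crude, purely illustrative heuristic, not a validated bot-detection method: it flags any day whose follower gain is a statistical outlier relative to the account’s usual growth.

```python
from statistics import mean, stdev

def flag_improbable_growth(daily_follower_counts, z_threshold=2.0):
    """Flag days whose follower gain is an outlier for this account.
    A crude, illustrative heuristic; not a validated detector."""
    deltas = [b - a for a, b in zip(daily_follower_counts,
                                    daily_follower_counts[1:])]
    if len(deltas) < 2:
        return []
    mu, sigma = mean(deltas), stdev(deltas)
    return [(day + 1, delta) for day, delta in enumerate(deltas)
            if sigma > 0 and (delta - mu) / sigma > z_threshold]

# Steady growth of about 10 followers a day, then a sudden jump of 500.
history = [1000, 1010, 1021, 1030, 1042, 1542, 1551]
print(flag_improbable_growth(history))  # -> [(5, 500)]
```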
If regulation is the way to go, then it is imperative to consider the reach and unintended consequences of such regulation. As the scenarios above suggest, while the proposed bill seeks to prevent interference in elections, its unintended consequences may include penalizing legitimate uses of automated accounts or bots, and it will be very difficult to actually control those operating bots from outside Ireland.
Similarly, the Irish law on defamation seeks to protect the reputation of persons, and rightly so. However, one unintended consequence is a division in the public sphere, whereby people can say whatever they want on social media while news publishers face extensive and punitive damages. Here at FuJo, we are developing HateTrack, an online tool that scores comments as likely or unlikely to contain racially toxic speech; we hope in the future to extend it to misogynistic content. Perspective, supported by Google, is another AI tool that engages with toxic and uncivil content. Yet none of these can be used in the Irish context, because if they let even one (defamatory) comment slip, a news publisher may face financial ruin. The legal context in this instance is hindering the de-toxification of the digital public sphere.
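For illustration, here is a minimal sketch of how a publisher might query Google’s Perspective API for a toxicity score. The API key and the moderation threshold are placeholders, and this is not HateTrack’s implementation.

```python
import requests

PERSPECTIVE_URL = ("https://commentanalyzer.googleapis.com/"
                   "v1alpha1/comments:analyze")

def toxicity_score(comment_text, api_key):
    """Return Perspective's TOXICITY probability (0 to 1) for a comment."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# A publisher wary of liability might hold high-scoring comments
# for human review rather than publishing or rejecting automatically.
if toxicity_score("You people are vermin", "YOUR_API_KEY") > 0.7:
    print("Held for moderation")
```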
Yet regulation can be very useful in ensuring the health of the mediated public sphere, through legislating for pluralism. Research by FuJo’s Roddy Flynn shows that current legislation in Ireland does not prevent the concentration of media in the hands of a very few owners, does not create incentives for non-profit media to enter the field, and does not offer employment protection to journalists. All these factors undermine media pluralism and diversity, which we know are essential for a healthy public sphere.
Critical media literacy could be another way of addressing the emerging dark side of the digital public sphere. The BAI has launched a Media Literacy Policy that seeks to encourage research and the development of relevant skills among the public. Research on media literacy here at FuJo shows that while journalism students rely on heuristics, for example the sleekness of a website, to gauge credibility, secondary-school children tend to examine content in more detail. Further research is necessary, but the results seem to indicate that while we apply our critical skills in our teens, we begin to take shortcuts as we grow older. The question of developing a range of skills, and maintaining them, is crucial.
On the other hand, as we all face time pressures, we tend to be ‘cognitive misers’: we seek to save time and effort when applying our judgment (Fiske and Taylor, 2016). In other words, we would all take shortcuts if they were available to us.
To this end, there are efforts by social media corporations, and by research funding bodies such as the EC’s H2020 programme, to develop technological solutions that may help readers. For example, blockchain technology can be used to keep a record of the provenance of a particular piece of information, allowing its almost immediate verification. Social media platforms themselves can develop innovative ways of countering misinformation, for example by using signals to indicate an automated account, just as blue ticks indicate the authenticity of certain high-profile accounts.
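The provenance idea can be made concrete with a toy example. The sketch below, with hypothetical names throughout, keeps an append-only chain of content hashes so that anyone holding a copy can re-verify that no record has been altered; a real system would distribute the chain across many independent nodes rather than a single machine.

```python
import hashlib
import json
import time

def _hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """A toy append-only chain recording who published what, and when."""

    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "record": "genesis", "ts": 0}]

    def add_record(self, publisher, content):
        # Each block stores a hash of the content, not the content itself,
        # plus a hash of the previous block, chaining the records together.
        self.blocks.append({
            "index": len(self.blocks),
            "prev": _hash(self.blocks[-1]),
            "record": {"publisher": publisher,
                       "content_hash": hashlib.sha256(content.encode()).hexdigest()},
            "ts": time.time(),
        })

    def verify(self):
        # Any tampering with an earlier block breaks every later link.
        return all(self.blocks[i]["prev"] == _hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

chain = ProvenanceChain()
chain.add_record("examplenews.ie", "Full text of the article...")
print(chain.verify())  # True; altering any record makes this False
```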
Like regulation, neither technological innovation nor critical skills deployed on their own are enough to deal with the monumental challenges posed by the new information environment, especially as malicious actors become ever more imaginative in their (mis)use of social media platforms. But together, and especially if made to complement one another, these approaches represent something far more formidable. State and EU-based regulation, self-regulation by social media platforms, technological innovation supported by scientific research, and the development and inclusion of critical media literacy in educational curricula and beyond can go some way towards tackling the toxicity in the digital public sphere.
However, these can only go so far. After all, toxicity is a symptom of an existing pathology in our societies, a pathology that is directly linked to rising levels of inequality within and across countries and continents. Unless we begin thinking of how to address this fundamental problem, other measures, important though they may be, will have much the same effect as giving painkillers to cancer patients: offering palliative but not restorative care.