The European Election campaigns may seem like a long way off, but media, governments and citizens need to act now to ensure a healthy debate. The European Commission has developed an Action Plan to address disinformation EU-wide, but the groundwork is needed locally.
Over the past few years, elections and referendums in the USA, UK, Ireland and elsewhere have been marred by disinformation and a lack of transparency around political campaigning.
So what are the headline risks on digital media during democratic elections?
- Digital gatekeepers like Facebook and Google controlling who and what appears online.
- Lack of transparency or regulation around political advertising.
- Coordinated misinformation campaigns on campaign issues.
- Coordinated trolling campaigns against political activists.
- Concerted efforts to target mainstream media outlets, exploiting the systems of reporting to inject misinformation or reframe issues with the aim of platforming far-right discourses.
- The increasing popularity of closed platforms and the difficulty in identifying and addressing targeted campaigns.
- The oceanic volume of information, which swells during election campaigns and spreads rapidly among users.
- New platforms and fractured digital audiences.
In short, there is the potential for a great deal of misinformation across a wide range of digital platforms, with a variety of malicious actors manipulating the environment, making it challenging to ensure users get quality information.
Despite the lessons from the UK, the USA and Ireland’s Referendum on the 8th, few official measures have been taken to address the potential problems that citizens face ahead of casting their votes in the European or Local Elections.
Research has shown that there are trigger topics that provoke an onslaught of disinformation and media manipulation designed to centralise far-right ideology in political discourse. The scenario is well established regarding refugees: in debates about Europe’s responsibilities towards refugees and attitudes to immigration, among some sectors this manifests as racism, toxic nationalism and hate speech. In Ireland, the elections are taking place alongside a referendum on extending the right to vote to the Irish diaspora. The discussion about who is Irish and who gets to shape Ireland’s future could be particularly potent.
Digital media has been fundamental in facilitating enhanced democratic discussion on a range of political and social issues. But the critical issue we now have to address is what happens when these platforms are exploited to manipulate the digital environment: to change public perceptions of political parties or people, shape topics that are central to campaign debates, and potentially affect the outcome of elections.
So far, many of the efforts to address these problems have been reactive. Fake news is often shared widely before it is debunked, and an advert has often already reached its target market before it is recognised as inappropriate. But lessons from TRef, from fact-checking during the referendum on the 8th, and from other initiatives in smaller media markets show that proactive, real-time initiatives can be effective.
Broadly speaking, industry and academia have identified three main proactive ways to address these issues: regulation, transparency and critical literacy. In Ireland, there have been recommendations to enhance the powers of the Standards in Public Office Commission (SIPO) to address digital political advertising, calls to enforce transparency from social media companies, and the development of media literacy initiatives. But these broad remedies have been suggested ever since digital media shifted from democracy builder to destabiliser. It is time for a more focused discussion.
The need to develop consensus on how political campaigns are run online is now critical. It is only by wading into the details that we can strike the right balance of safeguarding the core tenets of freedom of speech and encouraging political engagement while defending the rights of individuals and groups not to suffer harassment, threats, or abuse.
The quality of information in the public sphere and the quality of elections are central to democracies. Social media companies are now central platforms for informing citizens ahead of elections. Currently, the power to shape this environment lies with private social media giants that make important decisions regarding the composition of one of the core pillars of modern democracies.
Concerns regarding the distortion of topics in the social agenda are not confined to big-ticket issues like botnets. Any centralised control of multiple social accounts by campaigners can distort the perceived popularity of a topic. This raises brass-tacks questions. Should a social media campaign manager be allowed to run 10 social media pages? Or 100? Similarly, in an era of globalisation, how should we address advertising or campaigns run from beyond the nation-state that the election will directly affect? We know that we need regulation, but what we need now is to start discussing the standards of practice around digital political campaigns.
Transparency around campaigns is necessary on two fronts. The first is from the campaign groups themselves which can be addressed to some extent by electoral regulation systems. Through these mechanisms, it is possible to implement fair rules that require all campaigns to provide details of their digital campaigns.
The second front is somewhat trickier: transparency from the social media companies themselves. To meaningfully develop solutions to threats to democracy, it is necessary to identify the range of problems and their origins, as well as to measure their scale and pervasiveness.
While concerns regarding disinformation and digital manipulation have made headlines for some time, we still don’t have adequate insight into the scale or distribution of the problems. We rely on internal reviews and communications from the social giants. There is very little independently verified information regarding what exactly social and other digital media platforms can control, and what resources, both human and technological, they have to address these problems.
Ideally, social media companies would enter into a dialogue with new electoral bodies to offer a comprehensive description of their capabilities in this regard, which should be inspected and audited for veracity – improving the quality of the environment and potentially enhancing platforms’ reputations as sources of news.
Lines of Defence
The front line of defence for citizens should act as a proactive, public-facing safety net. What is needed is informational infrastructure that addresses these issues proactively rather than reactively, and that can monitor, flag and engage with the plethora of social media companies to help platforms address issues before they become acute.
There are some such initiatives, like fact-checkers working with Facebook, but their public reach is unclear and likely uneven. Full Fact in the UK is independent, public-interest and public-facing, and also distributes through mainstream media. But not all operate the same way. Other initiatives monitor and flag trolling and disinformation campaigns but are sometimes private, with information shared business-to-business between media organisations. Their findings can be picked up and published by mainstream media, but only within the context of meeting editorial needs.
The European Commission has developed an Action Plan to tackle disinformation that recommends national contact points to participate in a Rapid Alert System, facilitating common situational awareness and coordinated responses. To tackle disinformation, the Commission argues for the establishment of multidisciplinary teams in each member state, with specific knowledge of local information environments, to engage in the Rapid Alert System. Additionally, it aims to fund media literacy programmes and supports for good-quality journalism to improve the quality of the information environment.
The last line of defence should be election postmortems, picking up any remaining issues that slipped through real-time monitoring. Volunteer efforts to address the public need can only go so far for so long; action and investment are needed to maintain the quality of the digital environment at election times. And the time-frame for establishing something ahead of the European and Local Elections is short.
But we must avoid the fatal flaw of treating this as a technological problem with purely technological solutions. The manipulation of media for political gain, propaganda and toxic nationalism existed long before the digital era, and these issues are social, cultural and political too.
Nonetheless, with the European and Local Elections looming, along with a debate about who can participate in Irish democracy, rapid action and investment are needed.
FuJo member Niamh Kirk featured in a segment on RTÉ’s This Week discussing disinformation, manipulation of digital media and regulation ahead of the European Elections this May. The segment, produced and presented by Justin McCarthy, featured Craig Dwyer (Fora), Liz Carlon (TRef), Dr Kevin Cunningham (DIT) and Niamh Kirk (FuJo), and considered elections and digital media across the landscape in Ireland and Europe. You can listen to the episode here.