Since the US Presidential re-election of Barack Obama in 2012, social media platforms have become yet another forum where elections are fought. The strategies needed to use platforms like Facebook, Google, Twitter and others are fairly straightforward. Candidates prepare messages as text, image, video or a combination, choose who they wish to see those messages by specifying demographic criteria like gender, age group and geographic location, and then pay for their adverts to run.
This is just like conventional advertising for any product or service except for two major differences: firstly, the adverts are targeted at a very fine-grained level, almost down to the individual, and secondly the adverts themselves can be refined and personalised by varying the message, the images, the colours used, even the accents used in spoken dialogue, all done automatically. Fine-grained targeting with personalised advertisements is a marketer’s dream, and politicians have now caught on to the potential this offers during election campaigns. The result is that those fighting elections and referenda now treat advertising on social media as effective and worthwhile precisely because it can be so highly personalised and targeted.
There is nothing actually wrong with the micro-targeting of personalised adverts except when it breaks rules. We now know that Cambridge Analytica helped target people on Facebook with personalised advertisements based on predicting personalities from online behaviour in both the 2016 US Presidential election and the 2016 EU referendum in the UK.
What was wrong in this instance was that the model used to predict personalities was built on data gathered illegally from the profiles of millions of users. That particular loophole has been closed and in theory it should not happen again. Recent progress in an AI technique known as generative adversarial networks (GANs) shows that we can now generate fake video, or speech, of a quality that is almost indistinguishable from the real thing. Fake videos – known as deepfakes – can impersonate a person’s gestures, movements, voice and intonation, and can have the subject saying anything the producer wants. The technology to do this is now publicly available for anyone with modest programming skills to use. The use of deepfake technology in political election campaigns has not yet been observed, but it is surely only a matter of time, or perhaps it is already happening and we simply have not discovered it.
Yet the mere ability to generate a fake video does not in itself make it a bad thing. One could imagine multiple deepfake videos being generated to deliver multiple variations of a message, tweaked and tailored in personalised ways, just as multiple variations of conventional social media messages are generated. Equally, one could imagine deepfake videos of political opponents being generated and used in negative social media campaigning. This is what makes monitoring social media spending in political elections so important, covering how many adverts there are, who is paying for them, what they contain and who they are targeted at, at both individual and aggregated levels.
Some social media companies have started to publicly declare advertising spend in political campaigns. Since March 2019 Facebook has maintained a publicly accessible and searchable report on all active advertisements, who is placing them and how much is being spent on them. This report describes that service and similar offerings from Google and Twitter.
While this is welcome, it does not go far enough because of the huge volume of adverts, both in number and in number of variations. For example, we know that as of June 2019 the “Trump Make America Great Again” Committee, one of US President Trump’s re-election agencies, was spending over $1M per week on Facebook alone across 129,740 different adverts, and that was before his re-election campaign was officially launched. In the UK, the Conservative Party ran 554 versions of the same advert on Facebook welcoming Boris Johnson as the new Prime Minister in the week after his election. The sheer number of advert variations on Facebook alone is overwhelming, and the present configuration of access to those adverts, updated weekly and in the ways described earlier in this report, is inadequate for anyone trying to get to grips with the whole advertising landscape and monitor it in a meaningful way. It is thus left to investigative journalists and concerned citizens to monitor individual adverts by digging in and trawling through them.
Trying to monitor, for example, the 129,740 unique adverts Donald Trump’s re-election campaign had used by the end of June 2019 is thus impossible at the moment. The way to use the Facebook active adverts report is to query or download it, find individual advert material which might be offensive, and then report it. However, by the time we find such adverts they are up to a week out of date, and we are still searching through adverts one at a time. The scale of the advertising must be matched with an access resource that is updated more frequently, possibly in real time, and that allows access at aggregated as well as individual levels.
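To make concrete what “querying the report” involves, the sketch below assembles a request against Facebook’s public Ad Library API (the ads_archive endpoint). The endpoint version, field names and parameters here are assumptions based on the publicly documented API at the time of writing; this is a sketch of the access pattern, not a definitive client.

```python
from urllib.parse import urlencode

# Assumed endpoint and version for Facebook's Ad Library API; check the
# current public documentation before relying on these names.
AD_ARCHIVE = "https://graph.facebook.com/v4.0/ads_archive"

def build_ads_query(search_terms, country, access_token, limit=100):
    """Return the URL for one page of active-advert results."""
    params = {
        "search_terms": search_terms,
        "ad_reached_countries": country,   # e.g. "US" or "GB"
        "ad_active_status": "ACTIVE",
        "fields": "page_name,ad_creative_body,spend,ad_delivery_start_time",
        "limit": limit,
        "access_token": access_token,
    }
    return AD_ARCHIVE + "?" + urlencode(params)

url = build_ads_query("Make America Great Again", "US", "YOUR_TOKEN")
```

Even with such access, each request returns a page of individual adverts: the researcher must still paginate through all 129,740 items one page at a time, which is exactly the scale problem described above.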
The case for real-time updating is made simply by pointing at advertising in conventional media. When a political advert appears on radio or TV, on billboards or on the sides of buses, we see and hear it in real time, so why not likewise with social media advertising?
The case for access to aggregated advert data is more challenging to make but just as important. For more realistic monitoring of adverts in political campaigns we need to use data mining and pattern detection, so that the monitoring is not just about each individual advert shown to each individual viewer, which might or might not be offensive, but also addresses patterns of adverts across patterns of users. This way we have a better chance of detecting deepfake videos when they are used in negative social media campaigns or, even worse, when they are used to impersonate political opponents.
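As a minimal sketch of what such pattern detection could look like, the code below groups advert texts into families of near-duplicates using word-shingle Jaccard similarity, so that 554 variations of one message can be monitored as a single family rather than 554 separate items. The similarity threshold and the greedy clustering are illustrative assumptions, not a production monitoring system.

```python
def shingles(text, k=3):
    """Set of k-word shingles from an advert's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_variants(adverts, threshold=0.3):
    """Greedy clustering: each advert joins the first family it resembles."""
    groups = []  # list of (representative shingle set, [advert texts])
    for text in adverts:
        s = shingles(text)
        for rep, members in groups:
            if jaccard(s, rep) >= threshold:
                members.append(text)
                break
        else:
            groups.append((s, [text]))
    return [members for _, members in groups]

ads = [
    "Welcome Boris Johnson as our new Prime Minister",
    "Welcome Boris Johnson as the new Prime Minister",
    "Donate today to support the campaign",
]
groups = group_variants(ads)
```

Here the two near-identical welcome messages collapse into one family while the unrelated advert stands alone; at scale, monitoring families rather than individual adverts is what makes aggregated oversight tractable.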
The challenges here include issues to do with competitors and competition. Appropriate aggregation of advert data, preserving both anonymity and competitive advantage, can be worked out with the social media platform providers, and a “sweet spot” between effective monitoring and keeping company data private can be found by agreement.
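One hedged sketch of such a compromise, under assumed rules: release targeting counts only in coarse demographic buckets, and suppress any bucket below a minimum audience size so that neither individuals nor a campaign’s fine-grained strategy are exposed. The bucket structure and the threshold here are illustrative assumptions, not an agreed standard.

```python
from collections import Counter

def aggregate_targeting(impressions, min_bucket=1000):
    """impressions: iterable of (age_band, region) targeting buckets.

    Returns per-bucket counts, folding any bucket smaller than
    min_bucket into a single residual entry so small audiences
    cannot be singled out.
    """
    counts = Counter(impressions)
    released, suppressed = {}, 0
    for bucket, n in counts.items():
        if n >= min_bucket:
            released[bucket] = n
        else:
            suppressed += n  # small buckets are merged, not published
    released["(suppressed)"] = suppressed
    return released

data = [("18-24", "London")] * 1500 + [("65+", "Highland")] * 12
report = aggregate_targeting(data)
```

The large London bucket is published intact while the twelve-person Highland bucket disappears into the residual total, illustrating how aggregate monitoring and privacy can coexist by agreement on the threshold.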
At present we have a form of cold war between social media advertisers in political campaigning and those trying to monitor what is being advertised, but the advertisers have all the advantages, all the tools and all the resources, while the tools available to the monitors are useful only for monitoring at a minor scale.