A world leader. Standing behind a familiar podium on your screen. A face you are intimately familiar with, a face frequently plastered across the media, speaking about domestic and global policy. Their voice rings clear from your speaker – or through your headphones. The video was just posted and already it is flitting across social media platforms, cast into the digital sea to be dispersed and spread by the restless currents, posted and reposted for millions. Their face, their voice – but not their words.
Advancing artificial intelligence has made this scenario a reality. Deep learning enables the generation of audio, images and videos of people that are entirely false. The technology can transfer a person’s face onto another body, clone a voice to say anything, or create fictitious videos from scratch – effectively fabricating scenarios that never happened. And it has become so sophisticated that deepfakes are almost impossible to detect with the naked eye, and extremely difficult to distinguish from reality even with detection technology.
Deepfakes are now being weaponised across countries and areas of society, from revenge pornography (swapping a person’s face onto a porn actor’s body) to war zone footage (fabricating events in conflicts to sway opinion). And – perhaps most worryingly – they have been deployed in politics and elections.
In the run-up to Slovakia’s 2023 parliamentary elections – two days before the vote – an audio recording was posted to Facebook that appeared to depict Michal Simecka (the leader of the Progressive Slovakia party) and journalist Monika Todova (of the Dennik N newspaper) discussing how to buy votes and rig the upcoming election. The recording was released during the ‘moratorium period’, which enforces silence on politicians before votes are cast, making it difficult to disprove. By the time the audio was revealed to be AI manipulation, the close race was over and SMER had won.
A central problem in tackling deepfakes – apart from their likeness to reality – is the vessels by which they spread. Social media is a powerful tool that reaches into most areas of people’s lives. A large proportion of young people cite social media as their primary source of news and information, while traditional media outlets often pick up stories that gather momentum through social media frenzies. Within minutes of being placed on social media, a deepfake has been posted and reposted, accumulating views; before anyone can react, hundreds have seen and been influenced by the content. Social media companies, meanwhile, often lack the incentive to remove such content quickly, and deepfakes can easily exploit loopholes in their community guidelines.
The greatest threat this growing technology poses is to our elections and political systems. Clips of journalists, politicians and public figures can be faked to show them making hateful, divisive, polarising, criminal or otherwise problematic remarks, which can sway the electorate, voting habits and public opinion. This has the potential to push people from office, influence policy and, at the extreme, topple governments or even incite war and state-on-state conflict. With general elections in the UK and India, and the presidential election in the US, all expected this coming year, deepfakes are now a credible threat to electoral integrity. They could be used to shape policy, diplomacy and political direction for years to come in some of the world’s most influential governments. Understanding and working against this threat is all the more essential with these elections on the horizon; otherwise we could see these countries’ futures manipulated by fabricated content.
Yet behind all this, a greater threat looms. Deepfakes mark a turning point at which people can no longer trust what they see. Any image, video or audio clip could be faked or manipulated, leaving the public unable to put their faith in anything. Without the cornerstone of trust and believability, politics falls apart: people become disconnected from the political establishment and its processes. An inability to trust one’s own ears and eyes would have far-reaching consequences; life and society as we understand them would be materially changed and could largely cease to function. This may sound far-fetched, but pause briefly to imagine a time when you cannot trust anything you see or hear on your phone, TV or any other screen, and the terrifying nature of the prospect becomes clear. We are now at the point where this is a genuine possibility.
Politics sits at the heart of this. As the threat grows, politics will be the first area to truly feel its effects. Elections and referendums will become impossible to manage, with outcomes at the whim of competing deepfake strategies employed by different actors. People will follow the dictates of fake leaders and form views shaped by fictitious scenarios. This is why governments must act to build a legislative and regulatory framework to combat disinformation and deepfakes. At present, very few laws address deepfakes directly. Related laws on copyright infringement, defamation and data protection offer some constraint, but one with no teeth; laws on deepfakes and AI must therefore be strengthened, alongside greater cooperation with social media companies to fight this false content. Lawmakers find themselves in difficult and unfamiliar territory, and the only evident solution is cooperation on an international scale: governments working with the private sector, and experts working with engineers and lawmakers.
Deepfakes and disinformation are among the greatest threats our society faces. Action must be immediate – and it must be now.