Most people have encountered AI in their lives before, be it asking your virtual assistant Alexa to turn on your Spotify playlist, or ChatGPT helping you with homework. These innocent activities aren’t the only thing AI is capable of, however. In late December 2023, a deepfake video of pro-Western President Sandu of Moldova circulated online, in which she appeared to voice support for a Russian-friendly party. And this is only one recent incident in which deepfakes were used to spread political disinformation; it seems AI may pose a threat to the upcoming 2024 elections all over the world.
Before we dive further into the interference of deepfakes in politics, it’s important to know what exactly they are. Deepfakes are forms of media that have been altered or fabricated entirely using AI. While most people envision deepfakes as images and videos, the term also includes audio. In the past, the tools used for developing convincing deepfakes were quite difficult to access, but now, with AI generators such as the aforementioned ChatGPT, as well as Midjourney, Synthesia, and ElevenLabs, being open to the public, anyone has the ability to create AI-generated media. Although most AI generators have some sort of policy prohibiting the creation of (political) disinformation, disinformation can still be created through “jailbreaking”: crafting input prompts in an attempt to get a constrained AI to say or do things it’s not supposed to do. On top of that, thanks to deep learning, the accuracy and lifelikeness of these deepfakes continue to increase. Such realistic deepfakes make disinformation more likely to spread, which in turn could have dire consequences in upcoming elections.
The spread of political disinformation using deepfakes can serve multiple purposes. One is to steer voters in a specific direction, perhaps toward a specific party, which seemed to be the case during Slovakia’s elections last October, when an AI-generated audio clip, in which party leader Michal Šimečka talked about vote rigging, was posted to Facebook. In the earlier mentioned case of President Sandu, the purpose was to weaken democratic institutions by creating mistrust in society. Yet another purpose can be to discourage people from voting altogether, which is what an AI-generated robocall imitating Joe Biden’s voice attempted to do in January.
FBI director Christopher Wray has warned about the potential threat of deepfakes during elections, stating that AI makes it easier for foreign actors to influence the vote for their own purposes. With the threat still looming, measures are being taken to prevent the spread of deepfakes as much as possible: in February, major tech companies signed an accord to combat the spread of AI-generated disinformation around elections.
There is, of course, still a chance that some deepfakes slip through the cracks and make their way to the public. In that case, despite the continuing improvement of AI generation, there are some signs to look out for. Unnatural eye movements, facial expressions, body movements, or colouring can all be signs that you’re not watching a real video but a deepfake. And if you feel that audio of someone’s speech lacks natural fluidity, you could very well be listening to something AI-generated.
It’s clear that AI technology can be used with malicious intent, but that does not always have to be the case. Like I said, you could just be asking Alexa to turn on your Spotify playlist. So while a healthy dose of wariness toward it – especially in the case of deepfakes – can be important, don’t forget to enjoy some of the technological advances we’ve made.
Literature:
Center for Countering Digital Hate. (2024). Fake Image Factories: How AI image generators threaten election integrity and democracy. Retrieved from https://counterhate.com/research/fake-image-factories/
CISA. (2023, 12 September). Contextualizing Deepfake Threats to Organizations. U.S. Department of Defense. https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
Kaur, G. & Cornèr, K. L. (2023, 20 November). What are deepfakes, and how to spot fake audio and video? Cointelegraph.
Meaker, M. (2023, 3 October). Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy. WIRED. https://www.wired.com/story/slovakias-election-deepfakes-show-ai-is-a-danger-to-democracy/
Necsutu, M. (2023, 29 December). Moldova dismisses deepfake video targeting President Sandu. Balkan Insight. https://balkaninsight.com/2023/12/29/moldova-dismisses-deepfake-video-targeting-president-sandu/
O’Brien, M. & Swenson, A. (2024, 16 February). Tech companies sign accord to combat AI-generated election trickery. AP News. https://apnews.com/article/ai-generated-election-deepfakes-munich-accord-meta-google-microsoft-tiktok-x-c40924ffc68c94fac74fa994c520fc06
Steck, E. & Kaczynski, A. (2024, 22 January). Fake Joe Biden robocall urges New Hampshire voters not to vote in Tuesday’s Democratic primary. CNN Politics. https://edition.cnn.com/2024/01/22/politics/fake-joe-biden-robocall/index.html
Stouffer, C. (2023, 1 November). What are deepfakes? How they work and how to spot them. Norton. https://us.norton.com/blog/emerging-threats/what-are-deepfakes
Swenson, A. & Chan, K. (2024, 14 March). Election disinformation takes a big leap with AI being used to deceive worldwide. AP News. https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd