In the super election year of 2024, almost half of the world's population will be called upon to vote. But the development of artificial intelligence has created new challenges for the integrity of these elections. Particularly worrying is the manipulative potential of deep fakes, meaning AI-generated or manipulated image, audio or video content that resembles existing persons, objects or places and would appear to a person to be authentic and truthful (definition from the EU AI Act). The manipulative potential of deep fakes has already been observed in a wide range of cases: for example, in the presidential election in Slovakia, where an audio deep fake of one of the candidates discussing election fixing surfaced, or in the run-up to the US elections, in which robocalls impersonating Joe Biden discouraged people from voting. In Turkey, the destructive potential of deep fakes became evident when a candidate in the presidential election decided to withdraw after deep fake pornographic material of her emerged. Cases like these illustrate that the undermining of democratic elections through deep fakes is no longer a question of if, but of when, how and to what extent. Measuring the scale of the phenomenon and quantifying its impact remains difficult, and will require building a methodological framework that takes into account the specific form of deep fake material used.
While cases such as the Turkish election demonstrate the direct political consequences a deep fake may have, a more systemic danger lies in their cumulative psychological and social effects. The mere fact that deep fake technology exists is enough to erode trust in any given piece of information, blurring the line between real and fake. Thus, the fear and mistrust that deep fakes instil in a population can have powerful but difficult-to-measure effects on political processes and communication. In short: if everything could be a deep fake, then there is always room for doubt. A recent example is the distortion of online debate after some commentators with considerable reach claimed that an image published by the Israeli government, showing dead children in the aftermath of the Hamas attack, was an AI fabrication.
In Germany, cases of political deep fakes are increasing as well, the most prominent being a deep fake video published by left-wing activists that showed Chancellor Scholz calling for a ban on the AfD. Generally, politicians and groups connected to the far-right party seem to be the most eager to employ deep fakes for their purposes, for example by sharing deep fake audio clips designed to discredit Germany's most popular TV news programme.
In order to effectively combat the harmful, cumulative and systemic effects of deep fakes, a number of measures should be taken.