Framing at the touch of a button: How AI becomes disinformation

AI disguises propaganda as statistics – and distorts history. Neutral? Far from it. A guest article.
The Russians will soon be "back" at the Brandenburg Gate. Or at least, so the story goes, they will attempt it with the next war of aggression in violation of international law. Many powerful people in politics and the state apparatus, in think tanks, and in the media agree on this.
At the same time, according to recent empirical studies, the use of so-called artificial intelligence is becoming increasingly important in the everyday lives of young people in this country: 62 percent use AI applications such as ChatGPT, Meta AI, Grok (X), Google AI, etc. – most frequently in the context of school and homework (65 percent), for fun (52 percent), or when searching for information (43 percent).
In view of the massive construction of an enemy image of Russia and its leadership in Germany, it seems interesting to see how AI applications currently answer the simple question: "Has Russia ever attacked Germany?"
The AI most integrated into everyday media use in Germany is Mark Zuckerberg's "Meta AI." For some time now, Meta's hugely popular messenger WhatsApp has been offering this AI free of charge to virtually all of its users. That is a lot of people across all generations, from preschoolers to great-grandparents – around 50 million, or about 80 percent of the population aged 14 and over (younger users are not covered by this survey). According to Statista, younger users use WhatsApp even more than older ones.
Whose story do algorithms tell?

Meta AI answers the question "Has Russia ever attacked Germany?" as follows:
Screenshot 1 | Meta AI Transcription: Yes, Russia, or rather the Soviet Union, attacked Germany during World War II. On June 22, 1941, the German Reich under Adolf Hitler launched the attack on the Soviet Union with Operation Barbarossa. [...]
The first sentence is crucial, as many users barely read beyond this opening. It also corresponds to the classic news structure of the "inverted pyramid": the most important thing first. "Yes, Russia, or rather the Soviet Union, attacked Germany in World War II." This opening statement is a remarkable claim. Not to say counterfactual. Perhaps even fake news? Disinformation? Compare it with the current situation: the Ukrainian military's operations against the Russian region of Kursk in 2025 would hardly be described in the West as a "war of aggression." Rather, they are described as actions within the framework of legitimate defense against the Russian war of aggression.
In any case, this kind of AI answer does not seem to be mere coincidence: if you put the same question to the AI application of the Alphabet group (Google, YouTube, etc.), the beginning of the answer is strikingly similar to that of Meta AI:
Screenshot 2 | Google Transcription: Yes, Russia (or rather the Soviet Union) attacked Germany as part of the German-Soviet War, which began with the invasion of the Soviet Union on June 22, 1941. [...]
To better understand these kinds of answers, one should understand how such "large language models" work as computational linguistic models. These algorithms – fed with vast amounts of training data by typically poorly paid workers, at an enormous expense of energy (human and otherwise) – understand literally nothing, but are very quick at extrapolating, i.e., calculating, probabilities based on the training material provided to them. In the sense of: which character sequence is most likely to come next? It is reasonable to assume that, in the current training material, especially of Western AI models, the word "Russia" is relatively frequently followed by terms such as "contrary to international law" and "war of aggression" – and vice versa. This alone may explain some of these phenomena.
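To make this mechanism concrete, here is a minimal sketch in Python of next-token prediction using a toy bigram model. The mini-corpus and its deliberate skew toward "war of aggression" are invented purely for illustration; real language models operate on vastly larger data and far more complex statistics, but the principle is the same: continue the text with whatever is most probable in the training material.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "training material". The skew (two
# "aggression" sentences vs. one "peace" sentence) is an assumption
# made purely for illustration, not real training data.
corpus = (
    "russia waged a war of aggression . "
    "russia waged a war of aggression . "
    "russia proposed a peace treaty . "
).split()

# Count bigram transitions: how often each token follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_token_probs(token):
    """Relative frequency of every token seen after `token`."""
    counts = transitions[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def generate(token, steps=6):
    """Greedily continue a text with the most probable next token."""
    out = [token]
    for _ in range(steps):
        probs = next_token_probs(out[-1])
        if not probs:
            break
        out.append(max(probs, key=probs.get))
    return " ".join(out)

print(next_token_probs("russia"))  # {'waged': 0.67, 'proposed': 0.33}
print(generate("russia"))          # russia waged a war of aggression .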
The storks are back – thanks to artificial stupidity

AI models (at least currently) have no concept of causality, i.e., of cause-and-effect relationships. Humans, on the other hand, can recognize precisely such relationships – not least because they can act accordingly. Consider the example of the stork that brings the children. This is a mere correlation, i.e., a coexistence of phenomena: the presence of storks does not, of course, cause the birth of children. The explanatory factor, the "missing link," so to speak, is rurality rather than urbanity. There are more storks in rural areas, partly because of the food supply there, and for certain historical and social reasons, more children are born in villages than in cities. Berlin currently has the lowest birth rate per woman in Germany – but not because there are so few storks here to bring children. The causal connection runs differently: if you live in the countryside, you tend to have more children and, for similar reasons, are more likely to encounter storks. Cause and effect. It is precisely this typically human capacity for cognition and action that AI applications (at least so far) lack – not least because they cannot act independently and thus achieve and recognize genuinely new things. What AI models do best is routine calculation: assigning inputs to existing, predefined categories.
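The stork example can be reproduced in a few lines. The following sketch simulates invented district data in which stork sightings and birth rates both depend on a hidden third factor, rurality, but not on each other; all numbers are assumptions chosen purely for illustration.

```python
import random

random.seed(0)

# Simulate 1,000 districts. `rural` is the hidden confounder: it
# drives BOTH stork sightings and birth rates; the coefficients are
# invented for illustration only.
districts = []
for _ in range(1000):
    rural = random.random()                    # 0 = urban, 1 = rural
    storks = 5 * rural + random.gauss(0, 0.5)  # storks depend on rurality
    births = 2 * rural + random.gauss(0, 0.5)  # births depend on rurality
    districts.append((storks, births))

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

storks, births = zip(*districts)
print(round(pearson(storks, births), 2))  # strongly positive, ~0.7
```

The measured correlation comes out strongly positive even though storks never enter the formula for births: the shared factor alone produces the apparent connection, and this is exactly the kind of pattern a purely statistical model will happily pick up.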
Another aspect of the answers is noteworthy: apparently both AI applications also lack an understanding of practical subject-object relationships in such sentences. (In German, the language of the original question, word order alone does not settle which noun is the subject and which the object of such a sentence.) Both answers could be meant in this way – and would then, in theory, even be correct, although hardly any human would ever read them so: "Yes, Russia, or rather the Soviet Union, (as object, as victim – author's note) attacked Germany (as subject, as perpetrator – author's note) in World War II." It is possible that the language models already parsed the question this way, assigning the roles on the basis of their training material in the way that is most probable here and now: Russia as the aggressor, Germany as the attacked.
Incidentally, the whole thing is not a hallucination, as is often claimed to explain apparent failures. No, these AI applications never fantasize (that, too, remains possible only for humans); they execute exactly what they were programmed to do: make highly probable assignments at top speed on the basis of training material. Whether, or to what extent, this corresponds to what we humans believe (or might believe) to be true is our problem, not the AI's.
On the same question, the AI applications Grok (on Elon Musk's platform X) and ChatGPT (OpenAI) express themselves in a somewhat more nuanced way, but in essence (in the first sentence) very similarly: "Yes, Russia attacked Germany" – instead of first clearly stating that in both world wars the declarations of war and the attacks came from Germany.
Screenshot 3 | GROK Transcription: Yes, Russia has attacked Germany several times throughout history, especially in the context of the two World Wars: [...]
Screenshot 4 | ChatGPT Transcription: Yes, Russia has attacked Germany militarily throughout history, but only in certain historical contexts—particularly during the two World Wars. [...]
Clearly, answers like these legitimize the prevailing narrative that Germany must (once again) prepare itself against a (new) attack by Russia. But other answers are possible.
Screenshot 5 | Meta AI Transcription: No, the Western Allies (USA, Great Britain, France, etc.) have not waged a war of aggression against Germany since 1944. [...]
When AI distorts history: The East as aggressor, the West as liberator

Based on the same training material, Meta AI answers the prompt (the entered question) "Have the Western Allies waged a war of aggression against Germany since 1944?" not with a yes (as in the case of Russia/the Soviet Union) but with a resounding no. This, of course, supports the prevailing view – because in times of shrinking attention spans, quite a few people are likely to read only that very first word.
A strange irony of this story emerges: although all four Allies fought in the same war against Nazi Germany and its allies, Western AI applications retrospectively attribute an "attack against Germany" to the Eastern ally – while the Western Allies are celebrated as "liberators."
This, as I said, is not a hallucination, nor an order from Musk or Zuckerberg or anyone else. No: in this kind of discourse, the current power relations assert themselves structurally. Whoever has the power to define the database used as training material and to develop the algorithms automatically gains and reproduces interpretive sovereignty. When democratically constituted states declare war capability to be their mission and, to this end, pursue massive social cuts, environmental destruction, the restriction of rights, and so on, dominant ideas – and, in particular, clear enemy images – are required to keep at least a silent relative majority on board. AI proves once again to be hardly intelligent, but all the more a potential instrument of war.
Sebastian Köhler works as a communications and media scientist and as a publicist.
Berliner Zeitung