A headline ricocheted across news aggregators this week: "AI Breakthrough Allows Machines to Cheat, Invent, and Hallucinate." The framing suggests a research team cracked something fundamental about how large language models work. Readers could be forgiven for expecting a paper, a methodology section, maybe even an experiment or two.
Instead, the source is a Telegram post by Alexander Dugin, a Russian political philosopher best known for his ultranationalist ideology and his influence on Kremlin geopolitical strategy. His claim: that the leap from earlier AI to modern LLMs represents a "transition from logic to rhetoric," and that this shift is what allows these systems to "cheat, invent, and hallucinate" rather than being locked into strict logical reasoning.
NationalToday picked up the Telegram post and ran it under a headline that reads as though peer-reviewed research had landed. It had not. What we have here is a philosopher riffing on his Telegram channel, repackaged by an AI-generated news aggregator as a scientific breakthrough. A game of telephone turned a social media post into a "research" story.
What Dugin actually said
Dugin argues that older AI systems were constrained by formal logic, which caused them to "fall into a stupor" when they lacked information. LLMs, by contrast, operate on probabilities and likelihoods, which lets them "tell stories" and generate content that is "highly likely" or "similar" to reality but not necessarily true. He frames this as a shift from logic to rhetoric.
He is not entirely wrong about the mechanics. LLMs do generate text through probabilistic next-token prediction rather than symbolic logical inference. That much is an accurate, if simplified, description of how transformer-based language models work. But calling this a "breakthrough" understanding is like calling it a breakthrough to notice that cars have engines. The probabilistic character of autoregressive language models has been understood since long before the transformer era; it was already explicit in statistical n-gram models, and the 2017 paper "Attention Is All You Need" changed the architecture, not that fundamental character. Dugin is restating common knowledge in philosophical language, and NationalToday is treating it as news.
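The "probabilities, not logic" point is easy to make concrete. Below is a minimal sketch of autoregressive next-token sampling using a toy bigram model; the corpus, function names, and the bigram simplification are illustrative assumptions, nothing like a production LLM, but the core loop is the same in spirit: each word is drawn from a probability distribution conditioned on what came before, so the output is statistically plausible rather than verified true.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus. Real LLMs train on billions of tokens and
# condition on long contexts, but the autoregressive principle is the
# same: sample the next token from a learned probability distribution.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Estimate P(next | current) from bigram frequencies.
counts = defaultdict(lambda: defaultdict(int))
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def sample_next(token, rng):
    """Draw the next token in proportion to observed frequency."""
    nexts = counts[token]
    tokens, weights = zip(*nexts.items())
    return rng.choices(tokens, weights=weights)[0]

def generate(start, length, seed=0):
    """Autoregressive generation: each step conditions on the last token."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        if not counts[out[-1]]:  # dead end: token never seen mid-corpus
            break
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

print(generate("the", 6))
```

Every bigram in the output occurs somewhere in the training text, so the result always sounds "highly likely" in Dugin's sense, yet the sentence as a whole may describe nothing that ever happened. That gap between local plausibility and global truth is, mechanically, what a hallucination is.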
The rhetoric-vs-logic framing also carries a specific ideological flavor. Dugin has written extensively about AI through a civilizational lens, arguing that Putin's AI commission should address the philosophical nature of intelligence itself. His interest in LLM hallucination is not clinical. It is part of a broader argument about information warfare, narrative control, and what he sees as the weaponizable potential of systems that generate plausible-sounding content unconstrained by truth.