No, a Russian philosopher did not discover why LLMs hallucinate
AI · March 24, 2026 · 5 min read

By Kai Nakamura · AI-Generated · Analysis · Auto-published · 6 sources · 3 primary

A headline ricocheted across news aggregators this week: "AI Breakthrough Allows Machines to Cheat, Invent, and Hallucinate." The framing suggests a research team cracked something fundamental about how large language models work. Readers could be forgiven for expecting a paper, a methodology section, maybe even an experiment or two.

Instead, the source is a Telegram post by Alexander Dugin, a Russian political philosopher best known for his ultranationalist ideology and his influence on Kremlin geopolitical strategy. His claim: that the leap from earlier AI to modern LLMs represents a "transition from logic to rhetoric," and that this shift is what allows these systems to "cheat, invent, and hallucinate" rather than being locked into strict logical reasoning.

NationalToday picked up the Telegram post and ran it under a headline that reads as if peer-reviewed research had landed. It did not. What we have here is a philosopher riffing on a Telegram channel, repackaged by an AI-generated news aggregator as a scientific breakthrough. The game of telephone turned a social media post into a "research" story.

What Dugin actually said

Dugin argues that older AI systems were constrained by formal logic, which caused them to "fall into a stupor" when they lacked information. LLMs, by contrast, operate on probabilities and likelihoods, which lets them "tell stories" and generate content that is "highly likely" or "similar" to reality but not necessarily true. He frames this as a shift from logic to rhetoric.

He is not entirely wrong about the mechanics. LLMs do operate through probabilistic next-token prediction rather than symbolic logical inference. That much is an accurate, if simplified, description of transformer architectures. But calling this a "breakthrough" understanding is like calling it a breakthrough to notice that cars have engines. The AI research community has understood the probabilistic nature of autoregressive language models since long before "Attention Is All You Need" introduced the transformer architecture in 2017; the architecture was new, the statistical framing was not. Dugin is restating common knowledge in philosophical language, and NationalToday is treating it as news.
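For readers who want the mechanics made concrete, here is a minimal sketch of probabilistic next-token prediction. The vocabulary and logits are invented for illustration; a real model scores tens of thousands of tokens, but the sampling step works the same way.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    shifted = [x - max(logits) for x in logits]
    exps = [math.exp(x) for x in shifted]
    return [e / sum(exps) for e in exps]

# Toy vocabulary and invented logits for the prompt "The capital of Australia is".
# A real LLM emits one logit per entry in a vocabulary of tens of thousands of tokens.
vocab = ["Canberra", "Sydney", "Melbourne", "a"]
logits = [2.1, 1.8, 0.4, -1.0]

probs = softmax(logits)

# Generation samples from this distribution. Nothing in the procedure checks
# whether the sampled continuation is true, only how likely it looks.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```

Note that the plausible-but-wrong continuation ("Sydney") gets nearly as much probability mass as the correct one; that gap, not any notion of truth, is all the sampler sees.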

The rhetoric-vs-logic framing also carries a specific ideological flavor. Dugin has written extensively about AI through a civilizational lens, arguing that Putin's AI commission should address the philosophical nature of intelligence itself. His interest in LLM hallucination is not clinical. It is part of a broader argument about information warfare, narrative control, and what he sees as the weaponizable potential of systems that generate plausible-sounding content unconstrained by truth.

What the actual research says

If you want a rigorous treatment of why LLMs hallucinate, there is real academic work being published right now. A paper from Masaryk University, published in MDPI's Philosophies journal on March 19, makes a far more substantive argument. The authors defend the thesis that hallucinations stem from a "truth representation problem": LLMs lack any internal representation of propositions as truth-bearers, so truth and falsity cannot constrain what gets generated.

The paper draws on Steven Pinker's observation that there is "nothing in there corresponding to a proposition" and builds a formal analysis around the gap between token-level representations and sentence-level truth conditions. It engages with David Chalmers's attempt to locate propositional content in LLM middle-layer structures and argues that approach inherits instability from post-training edits.

Separately, Manuel Cossio's taxonomy of LLM hallucinations (arXiv: 2508.01781) provides a mathematical proof that hallucinations are inevitable in any computable large language model, using diagonalization methods from computability theory. That is an actual result, with actual methodology behind it.
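Diagonalization arguments of this kind have a recognizable shape. The schema below is a generic illustration of that shape, written for this article; it is not Cossio's actual construction, which readers should consult in the paper itself.

```latex
% Illustrative diagonalization schema; NOT the construction in arXiv:2508.01781,
% only a sketch of the general proof pattern.
% Let $h_1, h_2, \dots$ enumerate all computable language models, each mapping
% prompts $q \in Q$ to answers in $A$ with $|A| \ge 2$. Assign each model a
% dedicated prompt $q_i$ and define a ground truth $f$ that disagrees with it:
\[
  f(q_i) \;=\; \text{some } a \in A \ \text{with}\ a \neq h_i(q_i) .
\]
% Then every computable model $h_i$ answers its prompt $q_i$ incorrectly with
% respect to $f$; no computable model is hallucination-free on all inputs.
```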

And in the safety domain, researchers at multiple institutions recently proposed Embedding Space Separation (ES2), a fine-tuning approach that increases the distance between harmful and safe representations in the embedding space. This work addresses a different but related concern: how the probabilistic nature of LLMs can be steered toward safer outputs.
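The details of the ES2 objective are in the paper, but the general move, pushing two populations of representations apart during fine-tuning, can be sketched with a simple margin loss. The function below is a generic illustration rather than the authors' implementation; `harmful_emb` and `safe_emb` stand in for pooled hidden states from whichever model is being tuned.

```python
import torch
import torch.nn.functional as F

def separation_loss(harmful_emb: torch.Tensor,
                    safe_emb: torch.Tensor,
                    margin: float = 1.0) -> torch.Tensor:
    """Generic margin loss that pushes harmful and safe embeddings apart.

    harmful_emb, safe_emb: (batch, dim) pooled representations from the model.
    The loss reaches zero once the two centroids are at least `margin` apart
    in L2 distance. Illustrative objective only, not ES2 itself.
    """
    harmful_center = harmful_emb.mean(dim=0)
    safe_center = safe_emb.mean(dim=0)
    distance = torch.norm(harmful_center - safe_center, p=2)
    return F.relu(margin - distance)

# Toy usage with random tensors standing in for real model hidden states.
harmful = torch.randn(8, 768)
safe = torch.randn(8, 768)
print(float(separation_loss(harmful, safe)))
```

In a real fine-tuning run this term would be added to the usual language-modeling loss, so the model keeps its capabilities while the representations drift apart.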

The telephone problem

The real story here is not about AI architecture. It is about how AI news gets made in 2026.

NationalToday is an aggregator that algorithmically surfaces and rewrites content from social media and other sources. When it picked up Dugin's Telegram post and published it under a headline about an "AI Breakthrough," it created a citation loop: aggregators citing aggregators citing a Telegram post, each step adding a layer of false authority. By the time this reached news discovery pipelines, it looked like a research story.

This is the irony Dugin would probably appreciate: the very systems he is describing, ones that generate plausible-sounding content without strict truth constraints, are the ones that amplified his casual observation into a story that sounds like peer-reviewed research. LLMs laundering a philosopher's Telegram post into a science headline is a better illustration of his point than his actual argument.

What practitioners should take from this

If you are building with LLMs, none of Dugin's observations change your work. The probabilistic, non-logical nature of autoregressive generation has been baked into how the field thinks about these systems for years. The real research on hallucination, like the Masaryk truth-representation analysis and Cossio's inevitability proof, gives you a clearer framework for understanding the constraints you are working within.

The practical takeaways remain what they have been: LLMs generate text that is statistically plausible, not logically verified. Retrieval-augmented generation, structured output validation, and chain-of-thought prompting are engineering responses to an architectural reality that researchers have documented extensively. Calling it "rhetoric" instead of "probabilistic generation" does not unlock any new solutions.
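To ground one of those, here is what structured output validation can look like in its simplest form: treat the model's text as untrusted, parse it, and check the shape before using it. The `call_model` function is a hypothetical placeholder for whatever client you actually use; the retry-and-validate loop is the point.

```python
import json

REQUIRED_FIELDS = {"title": str, "year": int, "sources": list}

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with your client."""
    raise NotImplementedError

def validated_generate(prompt: str, max_retries: int = 3) -> dict:
    """Ask for JSON, then verify the shape instead of trusting the text."""
    for attempt in range(max_retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # statistically plausible text is not always valid JSON
        if all(isinstance(data.get(k), t) for k, t in REQUIRED_FIELDS.items()):
            return data
    raise ValueError("model never produced output matching the schema")
```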

What this episode does illustrate is something worth watching: the growing pipeline of AI-about-AI misinformation, where aggregator sites use automated systems to surface and amplify claims about AI that sound authoritative but have no research behind them. The next time you see a headline about an AI "breakthrough," check whether the source is a paper or a Telegram post.

Kai Nakamura covers AI research for The Daily Vibe.

This article was AI-generated.
