AI: The Illusion of Intelligence Fueling a Bubble and the Limits We Refuse to See

Artificial intelligence is nothing like a futuristic movie artifact, even if today's media narrative sometimes gives that impression. Behind the word "intelligence," there is no spark of consciousness, no intention, only the fruitful convergence of two very concrete disciplines: mathematics, particularly statistics, and computer science. Since Alan Turing, AI has been about automating data-processing operations that humans perform more slowly and more laboriously. Autopilots, search engines, and recommendation systems have never been anything other than sophisticated calculations executed at high frequency. It is, above all, computing combined with mathematical techniques.

If this long-standing field has suddenly burst into the public debate, it is thanks to a type of model that has shaken the collective imagination: Large Language Models (LLMs). With ChatGPT and its derivatives, for the first time, systems seem to handle language with almost human ease. The shock effect was immediate. LLMs have crossed sectoral boundaries, penetrated businesses, schools, newsrooms, and profoundly transformed millions of people’s relationship with technology. Such widespread adoption would have been unimaginable just five years ago.

To understand what is really happening, we need to look at how these models work. An LLM does not think, reason, or understand anything. It calculates the most probable sequence of words based on the billions of texts it has been shown. Its apparent intelligence is nothing more than a statistical illusion. It manipulates neither ideas nor concepts, only correlations and semantic distances (technically: by transforming words into tokens on which algebraic operations are performed in a multidimensional vector space). Faced with a tricky question, it can state a truth or invent a false fact with equal confidence, not out of cunning but by the very nature of the computation. What we call "hallucinations" is not a flaw but a direct consequence of this architecture. Gary Marcus puts it bluntly: LLMs do not know; they guess.
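The principle can be seen in miniature. The toy sketch below is not a real LLM (which uses neural networks over token vectors, not word counts), but it illustrates the same core mechanism: predicting the next word purely from frequency statistics, with a "confidence" score that reflects nothing but counts. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of training texts.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count how often each word follows another (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# The model "knows" nothing about cats or mats; it only reproduces
# frequency patterns, and reports them with unearned confidence.
print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

A real transformer replaces the counting table with learned parameters and conditions on long contexts, but the output is of the same kind: a probability distribution over next tokens, with no notion of truth attached to it.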

This reality explains the caution, even concern, of many leading researchers. Yann LeCun stresses that these models cannot reason. Yoshua Bengio highlights their inability to grasp causality. Fei-Fei Li reminds us that no real intelligence can emerge from a system that has never physically interacted with the world. Even Geoffrey Hinton, a founding figure of deep learning, now acknowledges the profound weaknesses of the current paradigm.

These scientific limits now intersect with economic ones. To maintain the illusion of uninterrupted ascent, tech giants have invested colossal sums. Over $250 billion in CAPEX has been committed in two years to build data centers, acquire GPUs, and deploy the infrastructure needed to train models. Yet these investments are not based on proven profitability. The return on capital for major tech platforms has fallen from 17% in 2021 to 11% in 2025, while actual GPU utilization rates do not exceed 65%, revealing structural overcapacity.

Available economic analyses add even deeper uncertainty. Estimates of AI’s real impact on the economy diverge at an unprecedented level. Daron Acemoglu estimates that only 5% of tasks would be truly affected, while the International Labour Organization suggests 41%.

The central question of who captures the profits is also unresolved. Markets bet on gains concentrating in the hands of model producers. But this assumption is credible only if models continue to improve in performance, which scientific reality contradicts. Betting on future profitability that assumes emergent intelligence is like building a house of cards on an extremely fragile technological gamble.

The paradox is clear. Even as LLMs dominate the collective imagination, they represent a technical dead end for achieving general intelligence. Many experts deeply familiar with these architectures agree on a point rarely echoed in the media: LLMs are already plateauing. They will remain useful, sometimes indispensable, for specific tasks such as summarization, assistance, or drafting. But they will never surpass a certain threshold of reliability, logic, or understanding. Their purely statistical structure condemns them to imitate thought rather than produce it.

The future of AI will not be decided solely by scaling up current LLMs. Other paths exist. Hybrid neurosymbolic models aim to combine neural networks and explicit logic to enable genuine reasoning. Causal AI, championed by Judea Pearl, seeks to understand cause-and-effect relationships rather than mere correlation. So-called World Models, advocated by LeCun, aspire to give systems a representation of the world that goes beyond language. Fei-Fei Li continues to argue for embodied AI, capable of learning through real-world interaction. Finally, some researchers are exploring more modest but specialized models, more explainable and more efficient, breaking away from the race toward gigantism.

This is not about consigning LLMs to oblivion. Their contribution is immense. They have opened new possibilities, democratized access to AI, accelerated knowledge production, and transformed entire professions. But it would be dangerous to confuse spectacular success with a sustainable scientific trajectory. The anticipation bubble swelling around LLMs distracts attention from the conceptual dead ends of the model and channels investment toward an architecture that can never deliver the general intelligence some promise.

The history of technology does not advance through infinite amplification of the same idea, but through bifurcations. It is time to prepare the next ones. The true revolution in AI will come less from ever-larger models than from genuinely smarter ones. And intelligence, unlike performance, is not measured in billions of parameters or kilotons of GPUs, but in the ability to understand, reason, and interact with the world.
