Artificial Intelligence: a User’s Guide
Robust Applications and Structural Limitations
Futuristic robot reading a book. AI-generated image. Source: Raw Pixel.
This essay is the fourth in our series of six. Artificial intelligence is often the object of fantasy. We felt it necessary here to assess what it can and cannot do, and what it will probably never be able to accomplish. One of the most exaggerated representations of AI is what is sometimes called “artificial general intelligence.”
This series of articles was written with Constantin Vaillant Tenzer, a researcher in mathematics applied to cognitive neuroscience and machine learning at the École normale supérieure Ulm (Paris Sciences et Lettres University). For more than four years, he has worked on improving training methods for artificial intelligence algorithms, in collaboration with French and American companies.
Artificial general intelligence: a vague concept, a major political issue
Artificial general intelligence (AGI), often presented as the ultimate technological frontier, remains a remarkably vague concept. According to the historical definition proposed by OpenAI in 2023, AGI refers to “a highly autonomous system that surpasses humans in most economically valuable tasks.” This formulation, which focuses more on economic value than on cognitive abilities per se, was supplemented by an informal contractual threshold: OpenAI would consider AGI achieved once its systems had generated more than $100 billion in profits for its early investors. Others, such as Mark Gubrud, who claims to have coined the term in 1997, and Jensen Huang, argue that “we have achieved AGI,” since, in their view, current LLMs can already do more, and do it better, than most humans.
However, as early as August 2025, OpenAI’s CEO, Sam Altman, described the term AGI as “not super useful,” publicly acknowledging its imprecision. This ambiguity is not insignificant: it reflects the absence of universal metrics, of scientific consensus, and even of a shared operational definition. Companies propose divergent scales: OpenAI distinguishes five levels of increasing autonomy, while Google DeepMind favors a taxonomy by application domain. The result is a rhetorical horizon that serves both fundraising and collective anxieties, without anyone knowing exactly what they are talking about. Interestingly, Yann LeCun and his colleagues prefer to speak of “Super Human Intelligence,” for which they attempt to provide a rigorous definition in a research paper.
This vagueness becomes particularly problematic in a tense geostrategic context. In the United States, the Trump administration has placed AI and robotics at the heart of its industrial and military strategy, with massive investments in autonomous systems and humanoid robots for defense applications.
At the same time, China is accelerating massively in this field, backed by a major structural advantage: access to vast amounts of data on its population of 1.4 billion, combined with privacy laws that are far less restrictive than those of Western democracies. Beijing is exploiting this data to train predictive surveillance systems: identification of potential protests, emotional analysis of prisoners, and widespread social scoring. The country now has around 600 million AI-equipped cameras, or roughly one camera for every two citizens.

The rise of Chinese power is also a reminder of the risks of industrial espionage: documents revealed in 2019 showed that Chinese agents had stolen source code and pricing strategies from ASML, the Dutch lithography machine giant, via infiltrated employees linked to the Chinese Ministry of Science and Technology. More recently, a US Congressional report from October 2025 revealed that ASML sold 70% of its advanced DUV lithography systems to China in 2024, compared with only 26% in 2022, a nearly threefold increase that fuels Western concerns about embargo circumvention. Nevertheless, ASML sells China only machines capable of etching features at 12 nm, while its best models can reach 3 nm (i.e., roughly the width of 30 atoms).
In this context, European democracies risk finding themselves caught between American technological dependence, the threat of industrial decline, and illusions of strategic autonomy. The race for AGI is therefore not just a technical competition: it is a battle for control of critical infrastructure, data, and, ultimately, digital sovereignty in the 21st century.
Far beyond the promises or fears associated with AGI, the artificial intelligence models of 2026 already have remarkable capabilities that are profoundly transforming intellectual work and service production. These systems excel in four main areas, where they regularly outperform individual humans while significantly amplifying



