We’re Already Stepping into the Singularity

Machine Learning


It had been several months since Marc Rameaux, one of our expert contributors on AI, had published anything in our columns. He now returns with a foundational text in which he revises many of his previous theses. After challenging his former assumptions about the Singularity — in particular the idea of imagining it as a sudden and unique event — he argues that it is already under way, because AI is progressively conquering every facet of human thought and consciousness, layer by layer.

According to him, “consciousness” is not an indivisible mystical essence, but a system composed of modules that AI is taking over one by one. He therefore sharply criticizes the classic arguments — “AI does not understand,” “it is only statistics,” “it does not have true meaning” — which he sees as tautologies often rooted in existential anxiety. In particular, he draws a comparison with the drunkard mentioned by Spinoza in his letter to Schuller.

In this long analysis, which deserves to be read carefully, Rameaux reminds us that breaking thought down into elementary operations — such as vectorization or synapses — proves nothing: every complex system rests on mechanical layers serving a higher-level purpose. He also strongly challenges the argument that AIs, which generate associations of ideas, would be incapable of creating ex nihilo.

After this extensive critique, the professional data scientist lays the first foundation of his paradigm by drawing on Peirce’s pragmatism, which, in his view, offers the best definition of meaning: the conceivable practical effects of an object and its use. In concrete terms, every day, experts — doctors, data scientists, and others — observe that LLMs are surpassing them in their own fields: the dialogue is being reversed, and AI is becoming the dominant partner.

The second foundation of this “singular” system under construction comes from the work of Yann LeCun: the final frontier is sensory experience and the understanding of physical causality — through robotics and sensors — which are indispensable to a true intelligence of the world.

In conclusion, Marc Rameaux draws the political lessons of this paradigm shift: the future belongs to “dialogical intelligences” — human beings trained to collaborate closely with AIs. Education and public policy must urgently be adapted in order to embrace this revolution rather than hold it back. A text that will undoubtedly spark many debates.

Until recently, I was making two errors of judgment about the advent of the Singularity, that is, the emergence of a conscious being comparable to human beings, or even superior to them, created solely by Artificial Intelligence.

1. My first error was the conviction that the Singularity would never be reached, that it was ontologically impossible. However powerful and sophisticated AIs might become, I believed there would always remain an impassable boundary between human beings and machines: a difference in kind that would not be a matter of performance, but of an irreducible element distinguishing genuine thought from calculation.

In my defense, I have always cautioned that this conviction did not arise from scientific proof but from a philosophical belief. Because of this uncertainty, I proposed in previous articles stronger variants of the Turing test to decide the question when the time came, as well as a definition of consciousness that would not be marked by the usual defects of philosophy: circularity, assuming the conclusion before the demonstration, and tautology.

2. My second error was to imagine the advent of the Singularity as an exceptional event, a “great evening” of artificial cognition, a historic date from which a conscious being would suddenly emerge from nothingness, like the rupture of a birth, marking a before and an after.

Yet this is not at all how things are now unfolding. We have already entered the Singularity. The conquest of human consciousness by machines will not be a sudden rupture. It will be, and already is, a gradual taking possession of the different facets that constitute our thought and our consciousness.

Foundation after foundation, section after section, AI is moving into the elements that constitute our thought. This progressive conquest answers my two errors: the Singularity will be reached, and consciousness is not the kind of unity we painfully define through mystical notions, but a system made up of several elements, some of which AI already possesses.

There is nothing surprising about decomposing our thought into its different elements. The human brain, just like complex information systems, is built according to an architecture: a set of elements, each responsible for specific functions, interacting to form a coherent system capable of achieving certain ends.

The inconsistencies and circularities of the term “consciousness” stem solely from the fact that we lack sufficiently precise concepts to understand our own cognition. As AI progresses, it lays these concepts bare and makes them explicit.

“AI does not understand what it is doing,” “AI is not conscious of its own existence whereas I possess genuine consciousness,” “AI does not know what meaning is,” “a statistical calculation cannot be thought,” and so on. How many times have we heard such phrases in debates about AI? I have moved from agreeing with these assertions to growing unease, and then to a perception of their logical weaknesses.

In a now famous Letter to Schuller, Spinoza develops what would go down in the history of philosophy as the “parable of the drunkard.” Spinoza is all the more remarkable a philosopher because he is probably the only one to have developed an argument a contrario against all his fellow thinkers.

When a drunkard begins to rave and speak incoherently, it is difficult, even dangerous, to reason with him. He will become angry and assert all the more vehemently that he knows perfectly well what he is saying, precisely because the effect of drink holds him more firmly in its grip. Those who forcefully claim, “But I know what I am saying; AI does not understand it,” increasingly and irresistibly remind me of the drunkard in Spinoza’s parable.

Without being under the influence of wine, we may permanently be under the influence of far subtler forms of conditioning without having — precisely — the slightest consciousness of them. We take it as a premise that mechanical intelligences are always subjugated, whereas we ourselves would be guided by “will” or “free will,” without being able to define these terms except tautologically, that is, by assuming the conclusion before the demonstration.

The advocates of humanity’s irreducible consciousness underline their discourse with a penetrating gaze and an assertive tone, as if this physiological communication would give their argument more force or meaning. Such reactions merely betray a weakness: an underlying anxiety behind the displayed denial that a machine might attain consciousness.

Many false arguments circulate in an attempt to deny that the Singularity can be reached. One of the most common is to present AIs as performing nothing more than a probability calculation on the next word in order to produce discourse: AI speech would be merely the result of a geometric and statistical calculation, that of the tokenization and vectorization of language, followed by its probabilistic evaluation by a deep neural network.

Besides the fact that this description is so simplistic as to be false (the Transformer architecture and its attention mechanism are far more complex than next-word prediction), decomposing thought into elementary and atomic mechanisms is once again an argument that can quickly rebound against humanity.
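The caricature criticized here can nevertheless be written down, which makes its simplicity visible. The sketch below, with an invented four-word vocabulary and invented scores, shows the bare "next-word probability" step (a softmax over logits, then sampling); everything a real Transformer adds, namely the attention layers that compute those scores in context, is precisely what the caricature leaves out.

```python
import math
import random

# Toy sketch of the "next-word probability" caricature: a vocabulary,
# raw scores (logits) for the next token, softmax, then sampling.
# Real Transformers compute these scores through attention layers;
# the vocabulary and numbers here are invented for illustration.
vocab = ["the", "cat", "sat", "mat"]
logits = [1.2, 0.3, 2.1, -0.5]  # hypothetical scores from a model

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax(logits)
assert abs(sum(probs) - 1.0) < 1e-9  # a valid probability distribution

# Greedy decoding picks the most probable token; sampling draws from probs.
greedy = vocab[probs.index(max(probs))]
sampled = random.choices(vocab, weights=probs, k=1)[0]
```

Even this reduced picture already contains a layer distinction: the arithmetic of the softmax is elementary, while the logits it consumes encode everything the model has learned.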

Indeed, any complex process can be supported by an infrastructure of elementary operations devoid of any intelligence, beginning with human thought itself.

The “cooking” of AI tokenization, vectorization, and backpropagation learning is no more mechanical or devoid of intelligence than the physiological and physicochemical operations taking place in our brain and body. To say that AI is nothing but geometric vectorization and probabilistic estimation invites the reply that our thought and our sacrosanct consciousness are nothing but synaptic connections, stimulus-triggered activation, and regulation by our sympathetic and parasympathetic systems, all equally devoid of intelligence.

The proponents of this argument fail to realize that every system is organized in superimposed layers of increasing complexity, which is also the case for classical computers predating AI: there is hardware within which elementary and atomic operations are carried out, those of arithmetic and logic processors, followed by successive software layers, from machine language to increasingly advanced languages. The higher layers cannot be reduced to the succession of elementary operations in the lower layers, because the latter are oriented toward a certain finality that guides their operation. From this standpoint, human cognition, classical computer architectures, and AIs do not differ: the mechanics of decomposing higher layers into the elementary operations that support them proves nothing.
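This layering can be observed directly in any interpreter. Python's standard `dis` module exposes the lower layer of an ordinary function: a handful of elementary bytecode instructions (loads, calls, a division), none of which is "intelligent" on its own, yet which jointly serve the function's finality.

```python
import dis

# A higher-layer operation with a clear finality: the mean of squares.
def mean_of_squares(xs):
    return sum(x * x for x in xs) / len(xs)

# The lower layer: elementary, individually "unintelligent" bytecode
# instructions that support the higher one.
instructions = list(dis.get_instructions(mean_of_squares))
opnames = [i.opname for i in instructions]
```

Listing `opnames` shows only loads, calls, and arithmetic; nothing in that list "is" the concept of a mean, exactly as the paragraph above argues.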

Another false argument is to claim that we possess “true” meaning, whereas machines merely imitate it, acting through mimicry. Notions such as “true” meaning seem to me close to the penetrating gazes with which the advocates of impregnable consciousness support their discourse, or to the aberrations of phenomenology, which went so far as to speak of the “meaning of meaning.” These arguments sink into circularity and tautology without realizing it, just like Spinoza’s drunkard.

In reality, we are hard pressed to define rigorously what meaning is. It is a limit-notion of language, since it is a condition of language itself. The scientific approach consists in escaping circularity by finding external explanatory elements that demonstrate the production of the effect under study. Otherwise, we relapse into certain medieval shortcomings of essentialism: an object is red because of its reddening essence, and so forth. The irreducible character of human consciousness or of the meaning of a text seems to me to belong to a form of essentialism that does not dare speak its name.

The binary opposition between “true” thought and mimicry also reminds me of a bias of thought having nothing to do with AI, one I have experienced in my professional or academic life. Faced with Asian competition, many of my colleagues reassured themselves through a form of complacent self-satisfaction tinged with Western-centrism, even a form of racism, telling themselves that “they could only copy” and that we would always be one step ahead, because only we truly knew what we wanted to do while our Asian competitors acted merely through mimicry.

Of course, this discourse was never expressed so directly and explicitly, in all its brutality. But I realized that it unconsciously permeated many of my colleagues, and that several remarks or tics on their part ended up revealing this underlying belief. Needless to say, those affected by such a mentality were preparing themselves for difficult days ahead.

I have sometimes observed this Western-centric bias in the academic world, particularly in mathematics. When complex mathematics has been produced by civilizations other than ours, I have sometimes seen it regarded with a form of disdain, judged in their case to be a kind of “cookbook recipe,” an ingenious series of operations perhaps finely observed, but supposedly failing to grasp the full scope of the concept emerging from a theorem.

Here again, such an opinion is never expressed explicitly and directly. But behavior or certain allusive remarks leave no doubt about this curious disposition of mind. “We” alone would be capable of grasping the “essence” of a mathematical notion, the “true” abstraction of a concept, while “others” could only do so through mimicry or through empirical observations that fail to capture the full scope of the proof.

This attitude inevitably links three notions: consciousness, linguistic meaning, and creativity. These are three notions that are extremely difficult to define, whose designation gives rise to endless controversies, much of the time wallowing in disguised forms of circularity and essentialism.

By contrast, the very human psychological bias of thinking that only one’s own civilization is endowed with these three virtues is widespread and explainable. It expresses far more the anxiety of seeing under attack those elements that allow us to distinguish ourselves in an intangible way than any genuine possession of those same elements.

The fiercely defensive attitude toward civilizations other than our own — particularly those of Asia — when they enter a territory we consider both sacred and marked as our property, irresistibly makes me think of the same attitudes toward AI. The same accusation of “mimicry,” of a “mechanism empty of meaning,” the same disdain that recognizes only empirical cleverness instead of genuine universal laws.

To those who think my denunciation is exaggerated, I invite them to remember Edith Cresson’s “ants,” the term she used to describe the Japanese. Jean-Noël Barrot’s disdainful “approximate parrot” for ChatGPT belongs to the same state of mind and the same mental illusion. In fact, it says much more about the existential anxiety of the person uttering such remarks than about the targets at which they are aimed. The other cannot truly understand. The other has no access to true meaning. The other acts like an empty mechanism.

The comparison used by François Mitterrand's former prime minister teaches us more than the mere expression of ordinary stupidity. Admirers of Bernard Werber's work may point out to her that the image rebounds upon her and, in doing so, teaches us a profound lesson.

“The intelligence of the anthill” is indeed the illustration of the impossibility of drawing a clear dividing line between understanding and mimicry, between the apprehension of meaning and the mechanical encoding of reality. We act by mimicry far more often than we think, including when we believe we are deciding and acting by ourselves. But this very idea is unbearable to us. Like Spinoza’s drunkard, we protest vehemently, proclaiming that we know perfectly well what we are doing.

Language gives us access to meaning, but it is also a code in the computer-science or biological sense of the term, whose mechanical aspect appears to us when it malfunctions: repetitions, typing errors, infinite loops of ratiocination… The abolition of the frontier between the world of meaning and that of the mechanical codification of the world rests on very deep notions of logic: those explored by Turing with the halting problem; those so brilliantly illuminated by Douglas Hofstadter in Gödel, Escher, Bach, another work that highlights the intelligence of the anthill; and those of Charles Babbage and Ada Lovelace.

This is also why certain pathologies or cerebral destructions that affect the linguistic function, leaving it partly effective but partly defective, are troubling and disturbing: they bring back to the surface the “empty of meaning” infrastructure that is nevertheless indispensable to the apprehension of meaning, when language begins to “bug.”

The final false argument made by the defenders of an eternal demarcation between AI and the human is that of creativity. An AI, they say, could not be creative; its productions can only be conformist, standardized, an average of previous experiences. The defenders of this argument should be reminded that Honoré de Balzac himself admitted, with remarkable modesty, that much of his work was due to associations among the authors he had read before him — a cultural baggage that was admittedly considerable in his case.

Creativity may prove to be a notion sacralized in the same way as consciousness or meaning, because we have failed to understand its mechanisms. To begin with, the argument that nothing new can be drawn from a known corpus of knowledge is directly false: a new idea often arises from a relation one has thought to draw between two items of knowledge already known. The two terms are old; their association may be entirely unprecedented, just as the discovery of a new path between two known points on the globe may be.
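The combinatorial point can be made quantitative. The number of distinct pairwise associations among n known items is C(n, 2), so it grows quadratically: even a modest corpus contains vastly more possible associations than anyone has ever tried. The figure of 100,000 concepts below is invented and purely illustrative.

```python
from math import comb

# Number of distinct pairwise associations among n known items: C(n, 2).
# The items are old; most of the pairs between them have never been tried.
def possible_pairs(n):
    return comb(n, 2)

assert possible_pairs(4) == 6  # {ab, ac, ad, bc, bd, cd}

# With 100,000 known concepts (an invented, modest figure), the space of
# pairwise associations alone is ~5 billion; triples are vastly larger.
pairs = possible_pairs(100_000)
```

The old terms thus leave room for genuinely unprecedented relations, exactly like a new path between two known points on the globe.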

There is no reason to think that even current AIs would be incapable of discovering such unprecedented relations. In fact, they often already do: when we converse with them about a subject, current AIs frequently suggest unprecedented variants, bringing together two themes that had seemed distant.

I do not know whether creativity consists entirely in the idea of new associations among known terms. But it should be noted that every human being has a corpus of experience far larger than that of the most powerful LLMs, as Yann LeCun has remarked with quantitative estimates to support the point. Until we reach comparable corpus volumes between AI and human beings, it will be impossible to know whether creativity ultimately reduces to unprecedented associations.

“Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.”

Charles Sanders Peirce

The pragmatic maxim, stated by Charles Sanders Peirce, its founding father, represents the most intelligent attempt there is to define what meaning is.

The most frequent misunderstanding of Peirce’s philosophy is to turn it into a simplistic materialism, cut off from any capacity for abstraction, because of the mention of “practical effects.” One must be quite ignorant of his work to reduce pragmatism in this way: Peirce was a mathematician of the first rank and one of the great contributors to logic.

When Peirce speaks of “practical effects,” this may concern entirely abstract objects, such as geometrical figures. For example, the notion of what a triangle is, or the meaning of what a triangle is, consists in the set of things it is possible to do with it, the way one tries to manipulate it, and the way the triangle “responds” to us. When we try to draw the incircle or circumcircle of a triangle, and beforehand draw the three angle bisectors and the three perpendicular bisectors in order to observe and then demonstrate that each set of three intersects at a single point, we increase our knowledge of the concept of triangle.
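This "manipulation" of the triangle can be carried out numerically. The sketch below, for an arbitrarily chosen triangle, computes the incenter (where the three angle bisectors meet) and the circumcenter (where the three perpendicular bisectors meet), and checks the "response" Peirce speaks of: the circumcenter is equidistant from the three vertices.

```python
import math

# Concrete "manipulation" of a triangle, in Peirce's sense. The vertices
# below are an arbitrary example.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Incenter: weighted average of the vertices by opposite side lengths.
a, b, c = dist(B, C), dist(C, A), dist(A, B)
incenter = ((a * A[0] + b * B[0] + c * C[0]) / (a + b + c),
            (a * A[1] + b * B[1] + c * C[1]) / (a + b + c))

# Circumcenter: closed form derived from the perpendicular-bisector equations.
d = 2 * (A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))
ux = ((A[0]**2 + A[1]**2) * (B[1] - C[1]) + (B[0]**2 + B[1]**2) * (C[1] - A[1])
      + (C[0]**2 + C[1]**2) * (A[1] - B[1])) / d
uy = ((A[0]**2 + A[1]**2) * (C[0] - B[0]) + (B[0]**2 + B[1]**2) * (A[0] - C[0])
      + (C[0]**2 + C[1]**2) * (B[0] - A[0])) / d
circumcenter = (ux, uy)

# The object's "response": the circumcenter is equidistant from all three
# vertices. That equality is the practical effect Peirce points to.
r1, r2, r3 = dist(circumcenter, A), dist(circumcenter, B), dist(circumcenter, C)
assert abs(r1 - r2) < 1e-9 and abs(r2 - r3) < 1e-9
```

Trying such constructions, observing the triangle's answers, and only then proving the concurrence theorems is precisely the increase of knowledge the paragraph describes.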

In Peirce’s world, abstract objects and concrete objects alike have a certain number of possibilities for action and manipulation upon them, and a “response” that is the practical effect arising from our actions upon them. The “essence” of what the object is consists in the set of these effects, if all possible manipulations have been tried upon it. In this respect Peirce is a Platonist: even abstract objects are not pure constructions of our mind; they send us a “force feedback” when we solicit them, that of a pre-existing reality, whether in the abstract or concrete world. In Peirce’s world, mathematics becomes a living, ultra-sensitive world, where each object cannot be considered by itself but amid an infinity of variants. It is then possible to make attempts upon it, to make mistakes, to refine one’s understanding as in the concrete empirical world, far from a “cold” vision of mathematics.

Peirce’s maxim succeeds in escaping the circularities and tautologies of philosophy. By abolishing the boundary between theory and practice, it gives a more faithful image of what the exploration of knowledge is.

How does this vision of the world contribute to our considerations about AI and the advent of the Singularity?

The Singularity is being demonstrated in fact, by the thousands of AI users, particularly when they are experts in a field. While “philosophers” continue to pontificate about “consciousness” and meaning — simplistic, poorly defined notions that mix psychological factors with rigorous reflection — thousands of experts, indifferent to these sterile debates, are increasingly experiencing the Singularity and demonstrating it as they practice it. The use of LLMs does not merely increase and make our thought phenomenally more fluid; it teaches us about our own cognition, lays bare the true components of “consciousness” and “meaning,” but does so through notions that are far finer and far more rigorously defined.

Dr. Laurent Alexandre recently warned that LLMs clearly surpassed him in the practice of medicine, and that this was true for all physicians. I constantly encounter this experience in my own field of expertise, Data Science.

The first generations of LLMs resembled an advanced search-engine query bar, providing a rudiment of conversation. Then a following generation allowed for greater interactivity, with fewer passive and compliant answers. Current generations allow for genuine dialogue: AI becomes a thinking partner, proposing variants of our questions on its own, improvements, generalizations around the topic, complementary actions, even contradicting and correcting us, entering into a real debate.

The criterion I had proposed for strong AI, namely being able to detect a change in linguistic register, an implicit shift in the theme of a discussion as exchanges continue, has already been far surpassed. These staggering advances have taken place in less than three years.

An expert in a field, whether medicine, Data Science, or any other discipline, feels this progression very clearly in practice. With the first generations, we had the feeling of fully steering the conversation, of addressing an auxiliary, a junior hand tasked with subordinate work. Then came the feeling of having before us a disciple, whom we could continue to guide but who met us with the resistance of his own nascent thought. Today, we feel that the one taking the upper hand in the conversation has changed sides, and that our questions seem naive. We are addressing an expert superior to us, one who has no need to mark his ascendancy because he exercises it in fact, through the quality of his answers and through the challenges and contradictions he puts to us in order to indicate improvements.

Chess players feel very clearly all the nuances of the ascendancy an opponent can take. Within the first dozen moves, the depth and finesse of the maneuvers, the coherence of the plans, are clearly perceptible: you quickly know whether you are facing an opponent inferior, equal, or superior to you. The same is true of the “force feedback” when you hold a conversation. Superiority or inferiority is tacit but nevertheless perfectly clear, depending on whether you feel ignorant and clumsy before your interlocutor or, on the contrary, whether you are leading the exchange. More and more experts in any field feel themselves in the position of the novice before a master when conversing with AI, no longer before a machine docilely obeying us. We orient the exchanges, but we no longer lead them. In an expert dialogue between human and AI, most of the paths opened and followed are opened by AI.

Under these conditions, what is the point of continuing endlessly to pontificate about consciousness, meaning, and the impossibility of the Singularity, except to strike pseudo-philosophical poses that merely mask the intoxication of ego rather than of reflection? As Samuel Fitoussi humorously noted, these positions recall that of the academic asking: “OK, it works in practice, but does it work in theory?” The philosophy of Charles Sanders Peirce demonstrates the absurdity of such a position. While others discourse, we are practicing AI and, at the same time, demonstrating the advent of the Singularity. In the process, we are forging new concepts and new notions that decompose cognition into its primary objects, cleansing “meaning” and “consciousness” of the egocentric and tautological mire in which they were immersed.

Far from sterile discussions, Yann LeCun provides strong arguments about the limits of LLMs and about the last frontier to cross before the Singularity. I invite TES readers to watch this highly educational video before reading further (1).

LeCun first observed a few months ago that sensory perception enables a child between the ages of 0 and 4 to accumulate a corpus of experiences far greater than that of the most powerful current LLMs, based on factual calculations concerning the bandwidth of the retina and optic nerve.
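This comparison can be reproduced as a back-of-envelope calculation. Every figure below is a rough assumption in the spirit of LeCun's public talks (waking hours, optic-nerve throughput, training-corpus size), not an exact measurement; the point is the order of magnitude, not the precise ratio.

```python
# Back-of-envelope version of LeCun's comparison. All figures are rough
# assumptions inspired by his public talks, not exact measurements.
seconds_awake = 4 * 365 * 12 * 3600        # ~4 years at ~12 waking hours/day
optic_nerve_bytes_per_s = 2e6              # ~2 million fibers, ~1 byte/s each
child_visual_bytes = seconds_awake * optic_nerve_bytes_per_s

llm_training_tokens = 1e13                 # order of magnitude for a large LLM
bytes_per_token = 2                        # rough average
llm_text_bytes = llm_training_tokens * bytes_per_token

# Under these assumptions, a 4-year-old's visual intake exceeds the text
# corpus of a large LLM several times over.
ratio = child_visual_bytes / llm_text_bytes
```

Changing any assumed figure shifts the ratio, but the conclusion that sensory intake dwarfs text corpora is robust across reasonable values.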

But the question is not only about the volume of information. Intelligence about the world does not reside only in cognitive and textual intelligence, but in intelligence about causality, in understanding the causes and consequences of the world around us. And this apprehension of causality can be obtained only by living sensory experience, more particularly in three dimensions and under the force of earthly gravity. One cannot learn to drive merely by digesting thousands of pages of the highway code, or even by reading thousands of accounts of road scenarios. At some point, one must take the wheel.

Before his departure from Meta, LeCun had warned very early on that LLMs would plateau as long as they were not combined with robotics, drones, and physical sensors capable of feeding them sensory experience.

LeCun’s argument is not merely technological. It testifies to a deep understanding of what language and thought are. In what a newborn experiences, language develops inextricably intertwined with sensory experience. We first perceive, through every means available to our body, contrasts: light/darkness, cold/heat, hunger/satiety, open/closed. A word is the intermediary we place in order to designate the contrast between two sensations, perceptions, or emotions.

We thus build our apprehension of the world through opposed pairs onto which we place words. For this reason, as the neurophysiologist Antonio Damasio reminds us, emotion and reason are in no way opposed but inextricably linked, each constantly relying on the other to construct itself. LeCun’s vision also brings to mind the work of Willard V. O. Quine, one of the great continuators of the philosophy of … Charles Sanders Peirce.

Pragmatism also proceeds from this vision of language not as a disembodied instrument but as something connecting us to things, to the continuity of the world. Intelligence is not only noetic. Peirce’s philosophy, in addition to its notions of logic, includes the human apprehension of the mathematical continuum as a practical experience indispensable to the constitution of what we are, which Peirce called synechism.

The term “sign” in Peirce expresses the synthesis and full power of these two sides of his philosophy: a sign is a formal element in the encoding of reality, but carried by the background of experience and connection to the world that allowed its constitution. These are the two facets that the current debate between LeCun and the proponents of LLMs brings to light: LLMs have probably gone as far as it is possible to go through pure noetic intelligence, and the result is already remarkable, but the final section that still separates us from the Singularity is to add to it sensory experience of the world and an understanding of chains of causes and consequences.

When interpreting a scene from an image or a video, an LLM starts again from the multitude of pixels that constitute it. Reconstructing and interpreting a scene in this way is enormously costly, even if LLMs manage it fairly well.

We know that human beings, and indeed any animal, proceed differently. We introduce an a priori that probably biases our perception but considerably accelerates scene interpretation: we posit that certain objects are in the foreground and that other elements of the scene are secondary or part of a fixed background. This a priori concerning the priority levels of objects is guided by notions of potential danger and varies according to species. In a human eye, a tiger will immediately take priority number one in scene interpretation, while a ladybug will be perceived as a decorative element of a pastoral scene. In the perception of an aphid, the ladybug will be perceived in the same way as a tiger is for us.
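This species-dependent a priori can be caricatured as a weighting applied before interpretation. In the toy sketch below, the same detected objects are ranked by a danger weight that differs between observer species; all names and weights are invented for illustration.

```python
# Toy caricature of a species-dependent attention filter: the same detected
# objects, ranked by an a priori danger weight before interpretation.
# All object names and weights are invented for illustration.
human_danger = {"tiger": 1.0, "ladybug": 0.01, "meadow": 0.0}
aphid_danger = {"tiger": 0.01, "ladybug": 1.0, "meadow": 0.1}

def attend(scene, danger):
    # The object with the highest a priori danger is interpreted first.
    return sorted(scene, key=lambda obj: danger.get(obj, 0.0), reverse=True)

scene = ["meadow", "ladybug", "tiger"]
human_order = attend(scene, human_danger)
aphid_order = attend(scene, aphid_danger)
```

The same scene, filtered through two different priors, yields two different orders of interpretation: the tiger first for the human, the ladybug first for the aphid.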

These mechanisms have been forged by millennia of species evolution. It has been shown that we always apply a filter to perceived reality, which consists in trying to predict, for reasons of survival, the possible developments of the scene in the seconds to come. What are the sources of potential danger? How can one be predictive in the chain of causes and consequences? A peaceful meal may turn into a tragedy if someone chokes on food; even if the probability is low, it nevertheless calls upon our vigilance during the meal. Intelligence about the world requires intelligence about its causalities.

This dimension is probably the last section to bring down before entering fully into the Singularity. When AIs already endowed with their current noetic power are endowed with this understanding of physical causality, I do not see what barrier will still separate us from the encounter with a being possessing an understanding of the world equal or superior to ours. Probably superior, because it will be able to combine several filters on reality as so many variants, not only the one human beings forged through natural selection.

And by “understanding” I mean dimensions that are not purely intellectual: qualities such as empathy, compassion, concern for future generations, respect owed to the dead, even friendship and love, could very well emerge from such beings, perhaps in forms different from ours, or even better ones…

Such a conception provides an answer to those who worry about the emergence of AI or who deny at all costs its junction with humanity for religious reasons. In previous articles, I proposed a definition of consciousness that escaped the self-centered and tautological biases of self-consciousness, saying that consciousness was above all consciousness of our relationship to the world, of the relation between ourselves and the continuity of the world. There is no self-consciousness without consciousness of our relationship to the world and of our responsibilities toward it. LeCun’s approach leaves open the possibility of being an atheist or of seeing in this connection to the world a form of divinity.

Those familiar with Spinoza know it: his vision of strict determinism and human conditioning is in no way incompatible with the possibility of free will and faith, however strange this reconciliation may seem. No doubt because the apparent and binary contradiction between determinism and free will lacks the tools of thought that would allow us to see the possibility of their being conjoined, just as we previously had only a naive apprehension of consciousness and meaning.

For Spinoza, the resolution of the conflict lies in the notion of responsibility: the more responsibilities we take on, the more constraints and determinations we assume, but the freer we are, because we increase our connections to the world. There is no abstract or “groundless” freedom; freedom can only be conceived in the attachments we take on toward the world around us.

Freedom does not consist in getting rid of our determinations but in voluntarily plunging into the heart of them and playing an active rather than passive role there: in Spinoza there are Stoic accents of amor fati. From then on, there is no longer a boundary between freedom in the abstract philosophical sense and degrees of freedom in mechanics. To be free is above all to become a nodal point of the world, irrigated by all its constraints, which are its links to reality. The dialogues we conduct with AI also lead to this teaching.

Dialogue with LLMs as thinking partners has not only made me feel the “force feedback” of their progress, to the point of sensing an intelligence superior to my own in my field of expertise. It has also transformed the very structure of expert work.

In Data Science, as in any field of expertise, two types of knowledge are necessary. There is “pure” or universal knowledge, which consists of Machine Learning algorithms as well as mathematical methods such as signal processing, image processing, and so on. And there is “contingent knowledge,” such as whether a particular TensorFlow module is up to date or deprecated, whether it must be called in one way or another in a Jupyter notebook, and so forth. Until very recently, contingent knowledge had little added value, was thankless, constantly changing, and highly time-consuming. But it is indispensable to the completion of a project. All expertise requires fairly unglamorous material conditions in the low-level mechanics, which one must nevertheless master in order to move toward concrete realization.
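To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of version-dependent lookup that constitutes contingent knowledge. It uses only the Python standard library; the function name and the lookup pattern are my own illustration, not the API of TensorFlow or any particular framework:

```python
# A typical piece of "contingent knowledge": before calling an API, one must
# check which spelling of an entry point the installed version exposes.
import importlib


def resolve_callable(module_name: str, names: list[str]):
    """Return the first attribute in `names` that the module actually exposes,
    mirroring the version-dependent lookups one used to do by hand."""
    mod = importlib.import_module(module_name)
    for name in names:
        if hasattr(mod, name):
            return getattr(mod, name)
    raise AttributeError(f"{module_name} exposes none of {names}")


# The json encoder name has in fact been stable, but the pattern is the same
# as checking for a renamed or deprecated entry point in a fast-moving library.
encoder = resolve_callable("json", ["dumps", "serialize"])
print(encoder({"a": 1}))  # prints {"a": 1}
```

This is exactly the sort of low-value but indispensable detail that, as the next paragraph argues, LLMs now resolve in seconds.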

LLMs have completely changed this landscape: they take charge of almost all contingent knowledge. Before AI, this knowledge took up 90% of the time required to complete a project, through tutorials or impenetrable documentation; now it represents only a tiny share. LLMs are a universal user manual for any open-source product. They can sift through thousands of pages of documentation in a few seconds and put their finger on the specific piece of knowledge that will allow a Python program to run correctly.

A significant share of time was also devoted to analyzing and understanding the format and structure of the data to be processed. The tedious steps and syntax one never remembers by heart — navigating directories and reading data — are taken care of without having to write a single line of code. Better still, the LLM analyzes the structures, explains them to us, and asks for confirmation that this or that field is indeed located in this or that place.
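As an illustration, here is the sort of schema-inspection boilerplate the paragraph describes, which an LLM now drafts on request. The file name and fields are hypothetical, and the sketch uses only the standard library:

```python
# Toy sketch of data-structure exploration: report the column names and row
# count of a CSV payload, the kind of tedious step one never memorizes.
import csv
import io


def describe_csv(text: str, name: str) -> dict:
    """Summarize one CSV payload: file name, column names, row count."""
    rows = list(csv.DictReader(io.StringIO(text)))
    columns = list(rows[0].keys()) if rows else []
    return {"file": name, "columns": columns, "rows": len(rows)}


sample = "id,price\n1,9.5\n2,12.0\n"
print(describe_csv(sample, "sales.csv"))
# prints {'file': 'sales.csv', 'columns': ['id', 'price'], 'rows': 2}
```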

All that remains for the human being is to devote himself to pure knowledge. Even the implementation of machine-learning algorithms and the complex nuances of hyperparameters used to test variants are handled by the LLM. The sweep through these hypotheses is carried out by a simple instruction, without having to write a single line of code.
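The sweep in question can be pictured as follows. This is a toy sketch in pure Python, with a stand-in scoring function rather than a real cross-validated model fit:

```python
# Minimal hyperparameter sweep: enumerate every combination in a grid and
# keep the best-scoring one. The grid and the score function are toy stand-ins.
from itertools import product

grid = {"learning_rate": [0.01, 0.1], "depth": [3, 5]}


def score(params: dict) -> float:
    # Stand-in for a model fit and validation score; this toy function
    # prefers a small learning rate and a deeper tree.
    return -abs(params["learning_rate"] - 0.01) + 0.1 * params["depth"]


# Build one candidate dict per point of the Cartesian product of the grid.
candidates = [dict(zip(grid, values)) for values in product(*grid.values())]
best = max(candidates, key=score)
print(best)  # prints {'learning_rate': 0.01, 'depth': 5}
```

In practice the LLM writes and runs this scaffolding itself; the human only states which hypotheses are worth sweeping.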

Within one or two hours, a Data Science project that would have taken several days or even several weeks is fully completed, down to the smallest technical finishing touches needed to make it run.

LLMs create a perfectly fluid, frictionless cognitive world, where realization unfolds at the speed of thought. The Data Scientist who uses them feels a euphoria similar to the ideal life of the sage described by Aristotle: the theoretical life devoted to pure knowledge. The human being need only act on the major orientations, the major scientific choices of the project. My own experience still allows me to suggest algorithms more relevant than those proposed by LLMs, but for how long? The difference is becoming increasingly slight. Dialogue around these scientific hypotheses makes it possible to create models of a power we would never have dared dream of.

Earlier forms of expertise concerning contingent knowledge are not useless: human experience of their pitfalls helps correct the minor errors the LLM may make in implementing them. But these debugging phases are accelerated by a factor of 1,000, with human intervention limited to 1% of the time.

The profile of the elites who make their way in the coming world of AI will be that of dialogical intelligences. They will no longer be the hyper-specialized experts of the past, nor generalists with no real training, but profiles situated between the two and, above all, highly gifted in interaction and permanent dialogue with AIs.

The emergence of dialogical intelligences is very good news. Our world has often been trapped between closed experts and frivolous, irresponsible generalists. In the world of expertise, I have more than once seen colleagues abuse contingent knowledge in order to create proprietary knowledge, a barrier no longer founded on excellence but on the retention of information with no added value yet indispensable to realization. Expertise has more than once suffered from these locked-down behaviors, which prevented a true meritocracy based on value-adding knowledge from being established.

Conversely, one must not think that pure generalists will be able to produce high-value knowledge through LLMs. Precise and conceptually complex knowledge will remain necessary in order to maintain a high-level dialogue with AI.

Our educational structures will resemble nothing we knew before the emergence of AI. We urgently need to adapt them to train dialogical intelligences: minds with a solid grounding in universal scientific knowledge, yet curious and open enough to deliberate with AIs. The closed and proprietary behaviors on which bad experts once relied will filter themselves out: LLMs will leave them no chance.

European political leaders, and more particularly our French political decision-makers, are particularly ill-prepared for everything just said, remaining confined to rear-guard battles, still in denial about the cognitive tsunami already crashing over us. The legal mechanisms being considered are nothing but brakes on the few heroic French innovators who have understood what is happening. The creation of a company like Mistral is something of a miracle given the legal and regulatory barriers placed in its way. Arthur Mensch is a hero of our time.

The Singularity is being built before our eyes, day after day, through the work of all AI professionals and through that of minds sufficiently enlightened to know how to use it. Between 2023 and 2026, progress has been such that doubt is no longer possible: practice is sweeping away old theories and constructing new ones, compelling us to reconsider our very identity as human beings. Let us not allow the timidity and ignorance too often widespread in French political circles to prevail, for the survival of our country and, in the fine phrase of Professor Jean Dieudonné, for the honor of the human spirit.

