“If you don’t understand the second question: ‘Why aren’t things the way they are?’ then you’re not doing anything.” This pointed challenge from Professor Noam Chomsky cuts to the heart of the latest philosophical debate rocking the AI and neuroscience communities: the problem of scientific simplification. A recent special episode of Machine Learning Street Talk (MLST), “Simplifying Reality in Science,” brought together luminaries including Chomsky, Karl Friston, Mazviita Chirimuuta, François Chollet, and John Jumper to examine how models, from physics’ infamous “spherical cow” joke to deep learning’s vast neural networks, shape and, in some cases, distort our understanding of reality itself. The central tension is whether a model’s usefulness implies its truth, a question with real stakes for founders and analysts betting on the inevitability of artificial general intelligence (AGI).
The episode opens with a tribute to Professor Karl Friston, the acclaimed neuroscientist behind the Free Energy Principle (FEP). The principle attempts to explain perception, action, and learning in terms of a single mathematical quantity, effectively offering a grand unified theory of the brain. Friston himself admits that FEP is “almost tautologically simple” and invokes the famous physics joke about modeling a spherical cow in a vacuum to make the calculations tractable. This raises the “spherical cow problem”: when does a necessary simplification become a dangerous illusion that we mistake for the real thing?
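For readers who want a concrete sense of that “single mathematical quantity,” here is the standard textbook form of variational free energy (a gloss added for orientation, not a formula quoted from the episode):

\[
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o),
\]

where \(o\) denotes sensory observations, \(s\) the hidden states of the world, \(p\) the organism’s generative model, and \(q\) its approximate posterior belief. Minimizing \(F\) simultaneously pulls beliefs toward the true posterior (the Kullback–Leibler term) and makes observations less surprising (the negative log-evidence term), which is how a single quantity can be stretched to cover perception, action, and learning.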
This is the central concern of Professor Mazviita Chirimuuta, author of The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. Chirimuuta, who teaches at the University of Edinburgh, argued that the search for simple, universal fundamental principles that has characterized science since Galileo and Newton often blinds us to the messy realities we are trying to model. She framed the debate as a boxing match between two philosophical attitudes: Simplicius, who believes the universe is fundamentally simple and ordered, and Ignorantio, who believes that successful models are merely useful fictions because our cognitive abilities are too limited to comprehend the true complexity of reality. Chirimuuta sided with Ignorantio, suggesting that successful science confirms only human skill at constructing useful simplifications, not that nature itself is simple. She also pointed out that if scientists set aside pure curiosity and pursue applied science, engineering systems to achieve desired results, the problem of oversimplification matters less, as long as the tools work.
A modern manifestation of this philosophical gamble is the belief that the mind is software running on biological hardware. This metaphor, following earlier mechanistic analogies such as hydraulic pumps and telegraph networks, has hardened into a widely accepted ontological claim, especially in Silicon Valley. François Chollet, a deep learning researcher and influential voice in the field, proposed the “Kaleidoscope Hypothesis”: behind the surface of seemingly chaotic reality lie simple repeating patterns which, like the colored glass shards of a kaleidoscope, generate apparently infinite complexity. For Chollet, intelligence is the process of mining experience to extract these “inherent atoms of meaning,” or abstractions, which can then be recombined and transformed. Joscha Bach took this even further, provocatively arguing that the mind literally is software rather than being merely like it, because software names patterns of causality that are independent of any particular physical substrate.
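To make that idea slightly more tangible, here is a deliberately toy Python sketch (my own illustration, not Chollet’s formalism; the helpers mine_motifs and describe are invented for this example) in which the “atoms of meaning” are simply recurring motifs mined from past sequences and reused to describe a new one:

from collections import Counter

def mine_motifs(experiences, k=3, min_count=3):
    # Collect length-k substrings that recur across experiences:
    # a toy stand-in for Chollet's reusable "atoms of meaning".
    counts = Counter()
    for seq in experiences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return {motif for motif, c in counts.items() if c >= min_count}

def describe(seq, motifs, k=3):
    # Greedily re-express a new sequence in terms of known motifs,
    # falling back to raw characters where no abstraction applies.
    out, i = [], 0
    while i < len(seq):
        chunk = seq[i:i + k]
        if chunk in motifs:
            out.append(f"<{chunk}>")
            i += k
        else:
            out.append(seq[i])
            i += 1
    return out

experiences = ["abcabcxyz", "xyzabcabc", "qqabcxyzqq"]
motifs = mine_motifs(experiences)          # {'abc', 'xyz'}
print(describe("zzabcxyzabc", motifs))     # ['z', 'z', '<abc>', '<xyz>', '<abc>']

The point of the toy is only this: once the motifs have been extracted, a novel input can be expressed almost entirely as a recombination of old abstractions, with a small irreducible residue left over.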
Chirimuuta and other philosophers pushed back hard against this promotion of metaphor into metaphysics. They argued that the “sameness” Bach sees between different chips running the same program is something we impose: it exists in our description, not necessarily in the physical reality of differing voltages and electrons. Computational models may be useful, but mistaking their elegance for the structure of reality is the fallacy of misplaced concreteness. Chirimuuta added that the widespread belief in the inevitability of AGI, particularly within the technical community, may be a “cultural-historical fantasy” rooted in the mechanistic assumptions about the mind that we have inherited.
This criticism leads to an important distinction articulated by AlphaFold lead developer and Nobel laureate Dr. John Jumper: the distinction between prediction, control, and understanding. Jumper argued that while current AI systems excel at prediction (forecasting future states) and control (manipulating outcomes), they do not by themselves deliver human-level understanding. In his view, understanding is inherently human-facing: it demands a minimal set of facts that can be conveyed to other people in a compact, fixed form, a theory that “fits on an index card.” If we accept black-box tools that work without understanding their underlying mechanisms, we risk being blindsided when those tools inevitably fail.
Professor Luciano Floridi provided a framework for navigating this complexity, distinguishing metaphysics (the nature of reality itself) from ontology (how we construct the world given our current perspectives and tools). Floridi suggested that the digital revolution has changed the ontology of the world around us, leading us to interpret humans as “informational beings.” This re-ontologization is useful for developing technologies such as AI, but it is not a metaphysical truth. The mistake, he warned, is to ask absolute questions such as “Is the universe a giant computer? Yes or no?”, which he considered meaningless because the answer depends entirely on the purpose and context of the inquiry. In the end, simplification is not a flaw in science but an essential necessity given our cognitive and temporal limits. The danger lies in forgetting that our models are maps, not the territory itself, and that a model’s usefulness is not the same as its ultimate truth.
