Why Humans Can’t Understand AI



Understanding the goals behind these systems is essential to understanding what their technical details have come to mean in practice. In a 1993 interview, the neural network scientist Teuvo Kohonen said that a “self-organizing” system was “my dream,” one that would operate “like what our nervous system does instinctively.” As an example, Kohonen described a system that would monitor and manage itself, one “that could be used as a monitoring panel for any machine…any plane, jet, any nuclear power plant, or any car.” This, he thought, meant that in the future “we could immediately see what the system was like.”

The overarching aim was a system that could adapt to its environment, working instantaneously and autonomously, like a nervous system, with little need for human intervention. The complexity and unknowns of the brain, the nervous system, and the real world would soon come to shape the design and development of neural networks.
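Kohonen’s best-known contribution, the self-organizing map (SOM), gives a concrete sense of what “self-organizing” means here: the system arranges itself around the structure of the data it receives, with no human-supplied labels. The following is a minimal, purely illustrative sketch in Python with NumPy; the grid size, learning schedule, and data are arbitrary assumptions for demonstration, not Kohonen’s original setup:

```python
import numpy as np

def train_som(data, grid_size=10, epochs=20, lr0=0.5, seed=0):
    """Train a tiny 2-D self-organizing map on data of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    sigma0 = grid_size / 2.0  # initial neighborhood radius on the lattice
    # One weight vector per node of a grid_size x grid_size lattice.
    weights = rng.random((grid_size, grid_size, n_features))
    coords = np.stack(
        np.meshgrid(np.arange(grid_size), np.arange(grid_size), indexing="ij"),
        axis=-1,
    )

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)               # learning rate decays over time
            sigma = sigma0 * (1 - frac) + 1e-3  # neighborhood shrinks over time
            # Best-matching unit: the node whose weights are closest to the input.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Nodes near the BMU on the lattice are pulled toward the input.
            lattice_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-lattice_d2 / (2 * sigma**2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

# Example: let the map organize 200 random RGB colors on its own.
som = train_som(np.random.default_rng(1).random((200, 3)))
```

No one tells the map where any color belongs; order emerges from repeated local adjustments. That autonomy is exactly the appeal Kohonen described, and also exactly what makes it hard to say afterwards why the map settled into one arrangement rather than another.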

Mimicking the brain – layer by layer

In discussions of neural networks, you may have noticed that images of the brain, and the complexity it evokes, are never far away. The human brain served as a kind of template for these systems. Especially in the early days, the brain became a model for how neural networks should work, even though the brain itself remains one of the great unknowns.

These experimental new systems were therefore modeled on something whose own workings were largely unknown. The neurocomputing engineer Carver Mead spoke openly of the notion of a “cognitive iceberg,” which he found particularly appealing: what we perceive and see is only the tip of the iceberg of consciousness, while the size and shape of the rest remain unknown below the surface.

In 1998, James Anderson, who had been working on neural networks for some time, said of the study of the brain: “Our main finding seems to be the realization that we don’t really know what’s going on.”

In a detailed 2018 Financial Times article, technology journalist Richard Waters wrote that neural networks “are modeled on theories about how the human brain works,” passing data through layers of artificial neurons until an identifiable pattern emerges. “Unlike the logic circuits employed in traditional software programs,” Waters observed, “there is no way to trace this process to pinpoint why the computer is reaching a particular answer.” The results of such systems, Waters concluded, cannot be unpicked: applying this kind of brain model and passing the data through layer after layer means that the answer cannot easily be traced back. The multi-layering is a large part of the reason.
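To see why multi-layering resists tracing, consider a purely illustrative sketch (Python with NumPy; the layer sizes and random weights are assumptions for demonstration, not any real system) of data passing through layers of artificial neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 4-layer network: each layer mixes every input into every output.
layers = [rng.standard_normal((8, 8)) * 0.5 for _ in range(4)]

def forward(x):
    """Pass a signal through successive layers of artificial neurons."""
    for w in layers:
        x = np.tanh(w @ x)  # each unit nonlinearly blends all units below it
    return x

y = forward(rng.standard_normal(8))
# After a few layers, every output depends on every input through many
# entangled nonlinear paths. There is no single logic circuit to inspect
# when asking why this particular answer came out.
print(y)
```

Even in this toy case, the final answer is the product of hundreds of interacting multiplications and nonlinearities; in real systems with millions of parameters, the entanglement Waters describes runs vastly deeper.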

“Adaptation is the whole game”

Scientists like Mead and Kohonen wanted to create systems that could genuinely adapt to the world they found themselves in, reacting to its conditions. Mead underscored the value of neural networks precisely in their capacity for this kind of adaptation. Reflecting on this ambition at the time, Mead added that producing adaptation was “the whole game.” Such adaptation was necessary, he concluded, “because of the nature of the real world,” which was “too variable to do anything absolute.”


