Lessons learned from the deep learning revolution

Machine Learning


Image source: 123RF (with modifications)

Deep learning is a hot topic today. There’s no shortage of media coverage, papers, books, and events on deep learning.

But deep learning is nothing new. Its roots go back to the dawn of artificial intelligence and computing. Although the field was frowned upon for decades, a few scientists and researchers pressed on with the belief that artificial neural networks would one day come to fruition.

And we’re seeing results from deep learning in everyday applications like search, chat, email, social media, and online shopping.

One of those scientists is Terrence Sejnowski, a pioneer in the field of computational neuroscience and a longtime researcher of artificial neural networks. In his book The Deep Learning Revolution, Sejnowski looks back on the history of the field.

In an interview with TechTalks, Sejnowski talked about deep learning’s early struggles, its explosion into the mainstream, and lessons learned from decades of research and development.

Charting an alternate path for AI

Sejnowski became interested in artificial intelligence during the heyday of symbolic AI. At the time, scientists and engineers were trying to create AI by hard-coding rules and knowledge into computers.

Sejnowski was part of the “Connectionist” camp, a small group of scientists inspired by the biological fabric of natural intelligence. Connectionist AI focuses on areas such as machine learning, especially artificial neural networks. The idea behind connectionism is to recreate how large populations of neurons interact to learn representations of different things and concepts.

In the early decades of AI, symbolism drew most of the attention while connectionism was sidelined. Symbols are high-level, abstract representations of intelligence that are easy to understand and can be codified into computer programs. Symbolic AI quickly became proficient at difficult logical problems, such as performing complex math and playing chess.

“If you look back at the early days of AI … the idea was that with a computer you could run circles around nature,” Sejnowski told TechTalks. “In fact, you didn’t have to pay attention to how nature solves problems.”

But symbolic AI also ignores some fundamental details of intelligence. In The Deep Learning Revolution, Sejnowski writes: “The problem with symbols is that they are so compressed that it is difficult to ground them in the real world.”

For example, the word “chair” is a symbol that stands for all kinds of chairs, regardless of their appearance, number of legs, armrests, or wheels. It is very difficult to capture all these aspects of a chair in symbols. And it gets even harder in areas like vision: it’s virtually impossible to write a rule-based program that can detect all kinds of chairs from different angles.
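To make that brittleness concrete, here is a deliberately naive sketch, in Python, of what a rule-based chair detector might look like. The attributes and rules are invented for illustration (nothing like this appears in the book or interview); the point is that every hand-coded rule invites a counterexample.

```python
# A deliberately naive, hypothetical rule-based "chair detector".
# Every rule below is easy to defeat with a real-world counterexample.
def is_chair(obj: dict) -> bool:
    rules = [
        obj.get("legs", 0) == 4,          # fails for three-legged stools
        obj.get("has_seat", False),       # a flat rock can also serve as a seat
        obj.get("has_back", False),       # fails for backless benches
        not obj.get("has_wheels", False), # fails for office chairs
    ]
    return all(rules)

print(is_chair({"legs": 4, "has_seat": True, "has_back": True}))   # True
print(is_chair({"legs": 5, "has_seat": True, "has_back": True,
                "has_wheels": True}))  # False, yet it describes an office chair
```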

“The only existing proof that we could solve any of these problems (speech recognition, language, all the very difficult problems that people were working on at the time without making much progress) was that nature had solved them,” said Sejnowski. “And what we tried to do was at least understand the fundamental principles underlying the astounding performance of not only humans, but animals in general.”

Lessons learned from nature

“The Deep Learning Revolution” by Terrence Sejnowski

In the 1980s, Sejnowski and other connectionists such as Geoffrey Hinton were dismissed as doing silly things because they stuck with neural networks. Symbolic AI dominated the most prestigious universities and laboratories in the United States.

“They had good reasons,” says Sejnowski. “It wasn’t just that they didn’t like what we were doing. They were smart people. We were told that our models would overfit, and optimization experts told us that this was a non-convex problem, so we would never find the best solution.”

But as symbolic AI continued to hit roadblocks, deep learning began to make progress, albeit small at first. As time went on, the connectionists’ hard work paid off. Advances in hardware allowed researchers to create very large neural networks and train them on huge numbers of examples. Ultimately, it turned out that deep learning models with many parameters could learn features previously thought impossible to learn. They also solved many problems, such as image classification and language processing, that had no clear solution with a symbolic approach.

“It wasn’t possible to simulate these larger networks until computers became much faster and better at doing multiplications and additions, which allowed us to investigate them,” Sejnowski said. “What we found was that what the experts had been telling us didn’t seem to hold.”

Contrary to what early generations of AI scientists had predicted, large-scale neural networks did not get stuck in local minima. They were also able to generalize well enough to avoid the overfitting that scientists had warned about. Today, deep learning has proven its scientific viability and is a mainstay of many major applications, though it still has its own problems to solve.
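Sejnowski’s point about non-convexity is easy to see in miniature. The following sketch (a toy NumPy example of my own, not anything from the book) trains a tiny two-layer network on XOR, a problem a single-layer network provably cannot solve. Its loss surface is non-convex, exactly the property the critics worried about, yet plain gradient descent still finds a good solution.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# Random initialization of a 2-8-1 network (hypothetical toy sizes).
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Plain gradient descent on a non-convex loss.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Typically converges close to [0, 1, 1, 0] despite the non-convex surface.
print(np.round(p.ravel(), 3))
```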

Looking back at the decades during which deep learning was neglected, Sejnowski said: “The reason is that it’s all hidden from us. We don’t know how we ourselves work, and there is no reason for nature to reveal it to us.”

The brain is a highly complex, high-dimensional engine that processes vast amounts of sensory data and integrates it with past experiences and memories. What our intuition tells us about how the brain works is a very abstract, low-dimensional, self-constructed description that doesn’t explain everything that goes on inside.

“The bottom line is that you can’t trust your intuition. And what people in the AI field were trying to do was write programs that automated what their intuition told them intelligence was,” said Sejnowski.

Artificial intelligence meets human intelligence

As a neuroscientist, Sejnowski makes some very interesting observations about natural and artificial intelligence. In The Deep Learning Revolution, he writes: “The big difference between the two types of intelligence is that while human intelligence took millions of years to evolve, artificial intelligence is evolving on a trajectory measured in decades. This is warp speed for cultural evolution, and simply fastening your seat belt may not be the right response.”

Interestingly, Sejnowski’s book was published in 2018, before the explosion of generative models and large language models. What deep learning is doing now is amazing even by the standards of a few years ago. Today, even “decades” seems like an understatement.

“What we’ve seen in the last few years is the kind of great expedition where Lewis and Clark go into the wilderness and discover things,” Sejnowski said. “It seems to be accelerating, but I think it’s the natural progression of the exponential growth in computing power. No other technology has grown exponentially over such a long period of time.”

At the same time, we have also seen some developments that have brought about major changes in the field. One of them is the invention of the transformer, the main architecture used in LLMs. The transformer is the culmination of decades of research in various areas of deep learning. Transformers are very efficient at processing sequences of data such as text, software code, image patches, and molecular data.
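At the heart of the transformer is scaled dot-product attention, in which every element of a sequence is updated with a weighted average over all the others. Here is a minimal sketch in plain NumPy (illustrative only; real transformers add learned projections, multiple attention heads, masking, and many stacked layers).

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted average
    of the rows of V, weighted by how well the query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V

# Toy input: 4 "tokens", each an 8-dimensional vector. In a real model,
# Q, K, and V come from learned linear projections of the embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = attention(tokens, tokens, tokens)              # self-attention
print(out.shape)  # (4, 8): one updated vector per token
```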

“Transformers were not what I expected,” Sejnowski said. “I don’t think anyone expected them.”

But what makes these developments particularly exciting is the way they feed back into their origins: the brain that inspired them. Advances in deep learning are now helping scientists discover new ways to study the brain.

“For me, the most exciting thing is that, for the first time, neuroscientists are interacting with the engineers and computer scientists in AI, because they now have a common vocabulary,” Sejnowski said. “In my own research, I use these tools to understand the brain and analyze recordings from it. There has been a tremendous revolution in neuroscience; these tools have really changed the field.”

Deep learning and embodiment

Image credit: Depositphotos

Deep neural networks have made remarkable progress in recent years, but they also have fundamental flaws that need to be fixed. These flaws show up in many applications of deep learning, such as adversarial examples in computer vision systems and the elementary mistakes LLMs make.
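For readers who haven’t encountered adversarial examples, the idea fits in a few lines. The sketch below uses an invented toy linear classifier: a perturbation too small to matter to a human, chosen in the direction of the model’s gradient (the trick behind the fast gradient sign method), flips the prediction.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # hypothetical linear classifier weights
x = np.array([0.9, 0.1, 0.4])    # toy input, classified as positive
print(np.sign(w @ x))            # 1.0

eps = 0.3
x_adv = x - eps * np.sign(w)     # FGSM-style step against the gradient
print(np.max(np.abs(x_adv - x))) # each feature moves by at most 0.3
print(np.sign(w @ x_adv))        # -1.0: the prediction has flipped
```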

One explanation for the shortcomings of artificial neural networks is their lack of embodiment. For example, LLMs mimic some of the advanced capabilities of human intelligence without the rich sensorimotor experience that humans have.

In The Deep Learning Revolution, Sejnowski explains that embodiment and continual learning are crucial to human intelligence: “Learning is a process that accompanies growth and continues well into adulthood. Learning is therefore central to the development of intelligence in general.”

In a recent paper, Sejnowski lists seven missing elements of LLMs. One of them is the lack of “direct sensory experience of the world.” But the list goes beyond that.

Other shortcomings of current AI systems, such as their lack of common sense and causal reasoning, are also deeply tied to experience of the world and lifelong learning. Emotion and empathy, often ignored in AI, are also important aspects of intelligence.

“Cognition and emotion have traditionally been considered separate functions of the brain. There are subcortical structures that regulate high-level emotions, and structures like the amygdala that are particularly engaged during fear, but these structures interact strongly with the cerebral cortex,” Sejnowski wrote in The Deep Learning Revolution.

“These large language models only focus on the cortical architecture,” he said. “And of course, we shouldn’t expect to get grounding from just this thin layer.”

This is a problem that also existed in previous generations of AI systems, which were built on logical constructs rather than real-world experience. AI systems today can reproduce many human-like behaviors, but without the grounding the organic brain has, they make very different kinds of mistakes than humans do.

“What is remarkable to me is that language models work so well without having that grounding. Teasing apart and understanding the differences is very important for us,” said Sejnowski. “I think the difference between the mistakes that large language models make and the mistakes that we make will become very clear. And this is a good thing. By understanding what’s missing, we can make progress.”


