To solve the AI energy crisis, “we need to rethink the entire stack, from electrons to algorithms,” says Stanford professor



Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) on Wednesday celebrated the fifth anniversary of “herding cats,” its initiative to guide the responsible development of machine learning.

After optimistic introductory remarks from HAI leaders about the importance of designing systems that augment humans rather than replace them, the opening panel made it clear that artificial intelligence will increasingly be informed by our understanding of human intelligence.

While the goal of HAI is to put people and communities at the center of AI design, human-centered AI also reflects the growing importance of neuroscience.

Simply put, the human brain is orders of magnitude more energy efficient than silicon-based processors, to say nothing of wetware's advantages in reasoning, inference, and learning.

Where computing failed was in digital decisions…biology is totally different

“Unfortunately, where computing failed was with digital decisions,” Surya Ganguli, an associate professor of applied physics at Stanford University, told the scientists, academics, and other experts gathered at the HAI at Five conference today.

“We decided to store information in bits, which are stored and flipped by shuttling many, many electrons through complex transistor circuitry. The laws of thermodynamics demand a lot of energy to flip bits quickly and reliably, so we waste a lot of energy on the intermediate steps of the computation.

“Biology is totally different: only the final answer has to be good, and the intermediate steps can all be slow, noisy, and unreliable. But not so unreliable that the final answer isn't good enough for what's required…So to really go from megawatts to watts, I think we need to rethink the entire technology stack, from electrons to algorithms.”
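Ganguli's thermodynamic argument invites a back-of-envelope check. The sketch below compares the Landauer limit, the textbook minimum energy to erase one bit at room temperature, against an assumed ~1 fJ cost for a real CMOS bit flip and the rough megawatts-versus-watts gap he cites; the CMOS and power figures are order-of-magnitude assumptions for illustration, not numbers from the talk.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer limit: minimum energy to irreversibly erase one bit
landauer = K_B * T * math.log(2)

# Assumption: flipping a bit through real transistor circuitry and
# wires costs on the order of a femtojoule
cmos_bit = 1e-15

print(f"Landauer limit at 300 K: {landauer:.2e} J/bit")
print(f"Assumed CMOS bit flip:   {cmos_bit:.0e} J/bit "
      f"({cmos_bit / landauer:,.0f}x above the limit)")

# Assumption: a ~10 MW training cluster vs the brain's ~20 W budget
print(f"10 MW cluster vs 20 W brain: {1e7 / 20:,.0f}x more power")
```

The orders-of-magnitude headroom between practice and the physical floor is what makes “rethink the entire stack” more than a slogan.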

Surya Ganguli, associate professor of applied physics at Stanford University, speaking at HAI today

AI's vast and growing energy demands are a significant problem to solve, and other inefficiencies, such as the gap between how machines learn and how children learn, are active areas of research for several of the panelists.

Numenta founder Jeff Hawkins argued that sensory-motor learning, rather than today's AI, will be central to the science of artificial and natural intelligence.

Toward that goal, Hawkins announced that the Bill & Melinda Gates Foundation has funded the company's Thousand Brains Project, a general-purpose AI framework that aims to reverse engineer the human neocortex, and said the code will be released as open source.
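Numenta's code itself was not shown at the event, but the theory's central idea, that many semi-independent cortical-column models each form a noisy hypothesis and a reliable answer emerges by voting, can be illustrated with a toy sketch. Every name and number below (the object set, the per-column accuracy, the voting scheme) is a hypothetical illustration, not the Thousand Brains project's API.

```python
import random
from collections import Counter

OBJECTS = ["cup", "stapler", "phone"]

def column_guess(true_object: str, accuracy: float = 0.6) -> str:
    """One 'cortical column' guesses from its partial, noisy view."""
    if random.random() < accuracy:
        return true_object
    return random.choice([o for o in OBJECTS if o != true_object])

def consensus(true_object: str, n_columns: int = 1000) -> str:
    """Many unreliable columns vote; the majority answer is reliable."""
    ballots = Counter(column_guess(true_object) for _ in range(n_columns))
    return ballots.most_common(1)[0][0]

print(consensus("cup"))  # with 1,000 voters, almost always 'cup'
```

Note the echo of Ganguli's point: each intermediate guess is unreliable, yet the final answer is good enough.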

It would be very difficult to merge a machine with a brain… I don't think we want to do that.

Speaking on a conference panel, Hawkins offered reassurance that the interaction of artificial and human intelligence would not mean cybernetics, the merging of man and machine, saying, “It would be very hard to merge a machine with a brain.”

“But more importantly, I don't think we want to do that. At least I don't want to do that.”

While acknowledging that direct connections to the brain could have valuable uses — helping paralyzed people, for example — Hawkins said the focus of such research should be on developing tools to help people.

“I don't think we'll all have cables coming out of our heads, but I could be wrong,” he said.

Ganguli believes the science of the mind will shape how the machine learning technology stack evolves.

“I think the trick is to understand what these design principles are and then implement them in your AI systems,” Ganguli says. “Right now, Transformers scale just by adding more layers or increasing the embedding dimension, and that's it. There are no deep principles in it.”
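Ganguli's point about scale-only design can be made concrete with the standard back-of-envelope count of a decoder-only Transformer's non-embedding parameters, roughly 12·L·d² for L layers of width d (about 4d² for the attention projections plus 8d² for a 4x-wide MLP per layer). A minimal Python sketch, using illustrative configurations of my own choosing rather than anything cited at the event:

```python
# Rough non-embedding parameter count for a decoder-only Transformer,
# using the common ~12 * layers * d_model^2 approximation. The two
# scaling knobs Ganguli names, depth and embedding dimension, are the
# only inputs.

def approx_params(n_layers: int, d_model: int) -> int:
    """Attention (~4*d^2) plus 4x-wide MLP (~8*d^2), per layer."""
    return 12 * n_layers * d_model ** 2

# Illustrative configurations (assumed, not from the talk)
for layers, d in [(12, 768), (24, 1024), (96, 12288)]:
    print(f"{layers:3d} layers, d_model={d:6d}: "
          f"~{approx_params(layers, d) / 1e9:.1f}B params")
```

Doubling the depth or the width moves the count in an entirely predictable way; that mechanical knob-turning is the absence of deep principles Ganguli is describing.®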



