An alternative to deep learning could help AI agents play games and act in the real world

Machine Learning


A new machine learning approach, inspired by the way the human brain appears to model and learn about the world, has proven able to master simple video games with impressive efficiency.

The system, called Axiom, offers an alternative to the artificial neural networks that dominate modern AI. Developed by a software company called Verses AI, Axiom is given prior knowledge of how objects physically interact in the game world. It then uses an algorithm to model how it expects the game to respond to its inputs, and updates that model through a process called active inference.
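In rough outline, active inference means the agent keeps a probabilistic belief over hidden states, updates that belief against each observation (minimizing prediction error, a proxy for variational free energy), and then picks the action whose predicted observations best match its preferences. Here is a toy sketch of that loop; every model, variable name, and number is invented for illustration, and none of it reflects Verses' actual Axiom implementation:

```python
import numpy as np

# Toy active-inference loop: a 1-D world with N discrete hidden
# positions. All details here are illustrative stand-ins.

N = 5
A = np.eye(N) * 0.8 + 0.05   # likelihood P(obs | state): noisy identity
A /= A.sum(axis=0)           # normalize each state's observation column

def update_beliefs(prior, obs):
    """Bayesian belief update against one observation
    (approximately minimizing variational free energy)."""
    posterior = A[obs] * prior
    return posterior / posterior.sum()

def expected_surprise(beliefs, action, preferred):
    """Score an action by how far its predicted observations fall
    from the preferred (goal) distribution -- a crude stand-in
    for expected free energy."""
    predicted = np.roll(beliefs, action)   # an action shifts the state
    return -np.sum(predicted * np.log(preferred + 1e-9))

beliefs = np.ones(N) / N                  # flat prior over positions
preferred = np.eye(N)[2] * 0.96 + 0.01    # agent "prefers" seeing the center

for obs in [4, 3, 3, 2]:                  # a scripted observation stream
    beliefs = update_beliefs(beliefs, obs)
    action = min([-1, 0, 1],
                 key=lambda a: expected_surprise(beliefs, a, preferred))
    print(f"obs={obs} belief_peak={beliefs.argmax()} action={action}")
```

The key design point, versus reward-driven trial and error, is that perception and action share one objective: beliefs are updated to explain observations, and actions are chosen to make future observations match what the agent expects to see.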

The approach draws on the free energy principle, a theory that attempts to explain intelligence using ideas from mathematics, physics, information theory, and biology. The free energy principle was developed by the renowned neuroscientist Karl Friston, who is chief scientist at Verses, which describes itself as a "cognitive computing" company.

Speaking over video from his home in London, Friston said the approach could be especially important for building AI agents. "They have to support the kind of cognition we see in real brains," he said. "That requires not only learning about things, but also learning how you actually act in the world."

The conventional approach to getting AI to play games involves training neural networks through what is known as deep reinforcement learning. That approach can produce superhuman gameplay, but it requires an enormous amount of trial and error to work. Axiom masters a variety of simplified versions of popular video games, named Drive, Bounce, Hunt, and Jump, using far fewer examples and far less computing power.

"The general goals of the approach and some of its key features track what I consider to be the most important problems to focus on in order to reach AGI," says François Chollet, an AI researcher and the creator of ARC, a benchmark designed to test the capabilities of modern AI algorithms. Chollet is also exploring novel approaches to machine learning, and uses his benchmark to test models' ability to learn to solve unfamiliar problems rather than simply mimic previous examples.

"I think the work is highly original," he says, "which is great. We need more people trying new ideas away from the beaten path of large language models and reasoning language models."

Modern AI relies on artificial neural networks that are loosely inspired by the brain's wiring but work in a profoundly different way. Over the past decade or so, deep learning, an approach built on neural networks, has allowed computers to do all sorts of impressive things, including transcribing speech, recognizing faces, and generating images. Most recently, of course, deep learning has given rise to large language models, driving increasingly capable, garrulous chatbots.

Axiom promises, in theory, a more efficient way to build AI from scratch. Verses CEO Gabe René says the approach may be especially effective for creating agents that need to learn efficiently from experience. René says one finance company has already begun experimenting with the company's technology as a way to model the market. "It is a new architecture for AI agents that is much smaller, more accurate, and more efficient, and that can learn in real time," says René. "They are literally designed like a digital brain."

Ironically, given that Axiom offers an alternative to modern AI and deep learning, the free energy principle was originally influenced by the work of the British-Canadian computer scientist Geoffrey Hinton, a pioneer of deep learning. Hinton was for years a colleague of Friston's at University College London.

For more on Friston and the free energy principle, I highly recommend this 2018 WIRED feature article. Friston's work also influenced an exciting new theory of consciousness, described in a book WIRED reviewed in 2021.


