Researchers at Washington University in St. Louis are tackling a critical challenge in artificial intelligence: building systems that can not only learn and reason, but also independently find optimal solutions to highly complex problems. Unlike ChatGPT, which is trained on examples of the steps needed to solve a problem, these machines approach puzzles without prior instructions and require a fundamentally different architecture. The team’s research, recently published in Nature Communications, combines quantum mechanical principles with an architecture modeled on human neurobiology to achieve consistent and reliable results. “Those are the two ingredients you need,” says Shantanu Chakrabarty, the Clifford W. Murphy Professor and associate dean for research at Washington University’s McKelvey School of Engineering in St. Louis, referring to the system’s core component, a hybrid approach designed to “find the needle in the haystack” that is guaranteed to succeed.
Discovery Machines: AI category distinctions and challenges
A new class of artificial intelligence is emerging that actively discovers solutions to complex problems rather than simply learning or reasoning. Familiar AIs like ChatGPT are great at answering questions right away, but researchers are now focused on building the rarest of the three main AI categories: discovery machines. The effort, detailed in Nature Communications, is centered around a hybrid architecture that combines neuromorphic computing inspired by the human brain and principles of quantum mechanics. Researchers at Washington University in St. Louis are pursuing this approach with the goal of creating systems that can tackle problems that require more than just pattern recognition. “Imagine a machine that can not only find all possible solutions to a given puzzle, but also find the fastest and most optimized solution even when there are trillions of elements,” explains Shantanu Chakrabarty, the Clifford W. Murphy Professor at Washington University’s McKelvey School of Engineering.
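The scale in that quote is easy to make concrete: exhaustively checking every configuration of a combinatorial problem grows exponentially with problem size, so brute-force search fails long before “trillions of elements.” A back-of-the-envelope sketch (the billion-evaluations-per-second rate is an assumption for illustration, not a figure from the study):

```python
# Brute-force search over n binary variables needs 2**n evaluations.
# Assume (for illustration only) a machine that checks one billion
# candidate configurations per second.
RATE = 1e9  # evaluations per second (assumed)

for n in (20, 40, 60, 80):
    configs = 2 ** n
    years = configs / RATE / (86400 * 365)
    print(f"n={n:>2}: {configs:.2e} configurations, ~{years:.2e} years")
```

Already at 40 variables there are over a trillion configurations, and at 80 the exhaustive search would take tens of millions of years, which is why the researchers need a search strategy smarter than enumeration.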
The core of this innovation lies in a specific method: neuromorphically inspired automatic encoding combined with Fowler-Nordheim annealing, a technique borrowed from quantum mechanics. The autoencoder compresses a large stream of data, and the machine repeats the compression process until its predictions are accurate. The team’s architecture also provides convergence guarantees: a solution will emerge eventually, even if it takes months. This is a marked improvement over systems where researchers could wait indefinitely without success. “After six months, something useful will emerge,” Chakrabarty asserts, alluding to the famously long calculation performed by the computer Deep Thought in The Hitchhiker’s Guide to the Galaxy.
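The general flavor of annealing with a decaying noise schedule can be sketched in a few lines. This is a generic simulated-annealing toy, not the paper’s actual Fowler-Nordheim dynamics; the energy function, the logarithmic schedule, and all parameters are assumptions for illustration:

```python
import math
import random

def anneal(energy, neighbor, x0, steps=5000, seed=0):
    """Minimize `energy` by annealing: sometimes accept worse moves,
    with a probability that shrinks as the noise level decays."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for t in range(1, steps + 1):
        # Illustrative decaying noise schedule (assumed, not the
        # paper's Fowler-Nordheim schedule): temperature ~ 1/log(1+t).
        temp = 1.0 / math.log(1.0 + t)
        cand = neighbor(x, rng)
        de = energy(cand) - e
        # Downhill moves are always taken; uphill moves are taken with
        # a probability that vanishes as the noise cools.
        if de < 0 or rng.random() < math.exp(-de / temp):
            x, e = cand, e + de
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy energy landscape with many local dips (assumed for illustration).
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
step = lambda x, rng: x + rng.gauss(0.0, 0.5)

x_star, e_star = anneal(f, step, x0=4.0)
```

The controlled randomness is what lets the search climb out of shallow dips instead of stalling in the first local minimum it finds, which is the intuition behind the “tunneling” language.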
Neuromorphic quantum hybrid architecture for optimized solutions
Beyond the currently prevalent reasoning and learning machines, a more ambitious category of artificial intelligence, discovery machines, is gaining traction, requiring new architectural approaches to tackle previously unsolvable problems. Researchers have now demonstrated a path to building these systems with convergence guarantees, which is a significant advance over existing types of AI. The system utilizes an autoencoder to compress large data streams and repeats the compression process until the prediction is accurate. Complementing this are techniques borrowed from quantum mechanics that introduce controlled randomness, allowing machines to bypass computational bottlenecks and “tunnel” directly to optimized solutions. This approach offers significant advantages over traditional computing methods and could accelerate the path to breakthroughs. In some cases, if researchers don’t set up a supercomputer’s problem correctly, they can be left waiting as long as a year without results.
In the third category, discovery machines, things get really difficult.
Fowler-Nordheim annealing and auto-encoding for scalability
There is a growing field focused on building artificial intelligence that can make real discoveries rather than simply mimicking learned responses, and researchers are increasingly turning to unexpected combinations of physics and neurobiology to achieve this goal. Shantanu Chakrabarty, the Clifford W. Murphy Professor at Washington University in St. Louis, is refining the blueprint for these systems, which are designed to not only identify solutions but also optimize within highly complex parameters. The research, detailed in Nature Communications, focuses on a specific combination of automatic encoding and Fowler-Nordheim annealing. Autoencoders compress large data streams to enable pattern prediction, and the machine repeats the compression process until the predictions are accurate. However, tackling truly complex problems requires an efficient way to navigate the vast solution space. “This type of machine gives you that guarantee,” he said.
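In spirit, the repeat-until-accurate loop resembles training a simple autoencoder: compress, reconstruct, measure the error, and iterate until the reconstruction (the “prediction”) is accurate. A minimal linear sketch in NumPy, where the data, layer sizes, learning rate, and stopping threshold are all illustrative assumptions rather than the paper’s model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed): 200 samples in 8 dimensions that secretly lie on
# a 2-dimensional subspace, so a 2-unit bottleneck can represent them.
Z = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 8))
X = Z @ mix

# One-layer linear autoencoder: encode 8 dims down to 2, decode back.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr, target = 0.01, 1e-3
for _ in range(50000):
    H = X @ W_enc          # compressed code
    X_hat = H @ W_dec      # reconstruction ("prediction")
    err = X_hat - X
    loss = float(np.mean(err ** 2))
    if loss < target:      # repeat until the prediction is accurate
        break
    # Gradient descent on the mean squared reconstruction error.
    g_dec = H.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
```

Because the toy data really does live in two dimensions, the loop drives the reconstruction error below the threshold; the compression step is what forces the model to capture the pattern rather than memorize the data.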
This study shows that these machines can consistently produce state-of-the-art solutions with high reliability and competitive time-to-resolution metrics, Chakrabarty said.
Guaranteed convergence and reliability in complex problem solving
A new kind of artificial intelligence is emerging that focuses on real discovery, rather than just processing information. These discovery machines are the rarest of the three categories and must solve problems with trillions of potential factors. Researchers are demonstrating a path to building such systems with guaranteed answers: in other words, the machine will find the answer, even if it takes a long time. The approach compresses large data streams to enable pattern prediction, repeating the compression process until the predictions are accurate, and combines this with techniques that introduce controlled randomness so the search does not stall.
This is general enough that it can be applied to any complex problem.
