AI has a power problem. Even a single graphics processor used for AI applications consumes hundreds of watts. And the most powerful computer for machine learning? That title goes to Frontier, which draws around 20 megawatts, roughly $40 million of electricity a year.
Frontier delivers more than an exaflop of computing, that is, a billion billion (10^18) floating-point operations per second, by some estimates roughly the same amount of computing as our brains. The difference is that the brain needs only about 20 watts of energy, the same as a light bulb.
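To see the scale of that gap, here is a rough back-of-envelope comparison in Python using only the figures quoted above (about one exaflop and 20 megawatts for Frontier, about 20 watts for a brain assumed to do a comparable amount of computing); the numbers are illustrative, not measured benchmarks.

```python
# Back-of-envelope energy-efficiency comparison using the article's figures.
frontier_flops = 1e18     # ~1 exaflop: 10^18 floating-point operations per second
frontier_watts = 20e6     # ~20 megawatts of power draw

brain_ops = 1e18          # assumption from the text: the brain "computes" on a similar scale
brain_watts = 20          # ~20 watts, about the same as a light bulb

frontier_ops_per_watt = frontier_flops / frontier_watts   # ~5e10
brain_ops_per_watt = brain_ops / brain_watts               # ~5e16

print(f"Frontier: {frontier_ops_per_watt:.1e} ops per second per watt")
print(f"Brain:    {brain_ops_per_watt:.1e} ops per second per watt")
print(f"Gap:      ~{brain_ops_per_watt / frontier_ops_per_watt:.0e}x")  # about a million-fold
```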
This energy constraint is increasingly holding back the progress of AI. But what if we could take a cue from the way our brains process information? We recently spoke to Nigel Toon, founder of IPU chip maker Graphcore, for some answers.
Using our noggins: is the key in the “spikes”?
AI algorithms process and analyze data by navigating complex, multidimensional graphs, and this structure of connections is strikingly similar to that of the brain. Nigel Toon explains:
“That connection is how your brain works. Neurons are connected by axons and synapses, but the brain actually has no memory. Everything you know is stored in the connections between neurons. The importance of what you know is based on how the neuron decides whether that information is important to the process you are trying to perform. A neuron can have 10,000 connections coming into it. Of those 10,000 things, which one is important to this particular decision?”
In the brain, information processing relies on binary electrical impulses called spikes. Each spike lasts a few milliseconds and always has the same voltage; the information lies in the intervals between spikes.
This is where the brain differs from conventional AI systems, which multiply large matrices of real numbers and store information as exact values. The result is heavy computational demands and a great deal of wasted energy.
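For concreteness, here is what that conventional approach looks like in a minimal NumPy sketch of a single dense layer (the sizes are made up): every weight is multiplied by an input on every pass, whether or not the input carries useful information, which is where the computational cost comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal(10_000)             # real-valued activations
weights = rng.standard_normal((1_000, 10_000))   # real-valued weight matrix

# One forward pass: 10 million multiply-accumulates, all performed
# regardless of how informative each individual input is.
outputs = weights @ inputs
print(outputs.shape)  # (1000,)
```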
So can AI research harness this brain power? Researchers at FAU believe so. They focus on long short-term memory (LSTM) units, artificial neurons modified to mimic the membrane potential of biological cells. This lets them behave like brain neurons that use spikes to transmit and process information. And the results are promising.
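The FAU work is not detailed here, but the general principle behind a spiking unit can be sketched with a simple leaky integrate-and-fire neuron: a membrane potential integrates incoming binary spikes, leaks over time, and emits a spike of its own only when a threshold is crossed, so work is done only when there is something to communicate. The parameters below are arbitrary and purely illustrative.

```python
import numpy as np

def leaky_integrate_and_fire(input_spikes, weights, leak=0.9, threshold=1.0):
    """Simulate one spiking neuron over time.

    input_spikes: (timesteps, n_inputs) array of 0/1 spikes
    weights:      (n_inputs,) synaptic weights
    Returns the neuron's binary output spike train.
    """
    potential = 0.0
    output = []
    for spikes_t in input_spikes:
        potential = leak * potential + weights @ spikes_t  # integrate inputs, with leak
        if potential >= threshold:                         # fire when the threshold is crossed
            output.append(1)
            potential = 0.0                                # reset after a spike
        else:
            output.append(0)
    return np.array(output)

rng = np.random.default_rng(0)
spikes_in = (rng.random((50, 100)) < 0.05).astype(float)  # sparse binary input spikes
w = rng.random(100) * 0.2
print(leaky_integrate_and_fire(spikes_in, w))
```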
What about quantum computing?
While researchers look to the human brain to answer questions about AI capabilities, others are touting the potential of quantum computing. This hype has reached near-stratospheric levels in some quarters, with IBM recently claiming that the “paradigm-shifting capabilities” of quantum artificial intelligence (QAI) will enable “nearly limitless possibilities.”
But do we need to be more realistic?
Probably.
Quantum computing uses qubits: quantum bits made from atoms, superconducting circuits, or other physical systems. Qubits encode data not only as 0 or 1, but also in states that represent both simultaneously. This state is called superposition, and it allows a set of qubits to encode exponentially more information than the same number of classical bits.
Quantum processors could therefore solve certain complex calculations much faster than classical computers, potentially reducing calculation times from thousands of years to minutes. Unlike deterministic classical computers, which compute each step sequentially, quantum computers process vast data sets probabilistically, almost simultaneously, to find the most likely solution to a problem, significantly improving efficiency.
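A toy state-vector simulation makes the superposition idea concrete: a Hadamard gate puts a single qubit into an equal mix of 0 and 1, and describing n qubits takes 2^n complex amplitudes, which is where the exponential capacity comes from. This is a plain NumPy sketch, not a real quantum SDK.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                       # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                # equal superposition of |0> and |1>
print(np.abs(state) ** 2)       # [0.5 0.5]: measuring gives 0 or 1 with equal probability

n = 20
print(f"{n} qubits -> {2**n:,} complex amplitudes")  # 1,048,576
```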
This sounds like an ideal solution to the increasingly demanding computing requirements of AI, right?
But there's a problem.
Qubits are inherently error-prone. As a result, a machine may need to detect and correct more than 10 billion errors per second to arrive at a useful answer. As Nigel Toon explains, the problem lies in the question: “Can you come up with a structure that would allow you to replicate qubits in a way that doesn't actually replicate the error function, rather than increasing the size of the machine?”
And no one has done it yet.
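To see where a figure like 10 billion errors per second could come from, here is a purely illustrative calculation; the error rate, qubit count and operation rate below are assumptions for the sake of the arithmetic, not figures for any specific machine.

```python
# Purely illustrative assumptions, not figures for any real machine.
physical_error_rate = 1e-3        # assumed failure probability per qubit operation
num_physical_qubits = 1_000_000   # assumed machine size
ops_per_qubit_per_second = 1e7    # assumed operation rate per qubit

total_ops_per_second = num_physical_qubits * ops_per_qubit_per_second
expected_errors_per_second = total_ops_per_second * physical_error_rate
print(f"~{expected_errors_per_second:.0e} errors per second to detect and correct")  # ~1e10
```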
Turning to molecules
A more promising avenue may be molecular computing. The idea is to use molecules to perform calculations, exploiting their chemical and physical properties to process information and solve problems at the molecular level. Nigel Toon explains:
“There is research being done into the use of DNA to store data and information, and how we can use the structure of proteins to change shape, representing them as switches like transistors. The problem is getting that protein to move and change its shape, which requires inserting some chemical. And how do you insert chemicals to make them scalable and grow?”
But scientists are already developing new biocomputing chips that use DNA substrates to perform calculations, including mathematical operations key to AI training and big data processing.
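The data-storage side of this is often explained by mapping bits onto the four DNA bases, two bits per nucleotide; real encoding schemes add error correction and avoid problematic base sequences, which this toy sketch ignores.

```python
# Toy illustration of DNA data storage: two bits per nucleotide.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"AI")
print(strand)          # CAACCAGC
print(decode(strand))  # b'AI'
```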
Where next?
As we grapple with the energy-intensive demands of modern AI systems, mimicking the energy efficiency of the human brain is emerging as a potentially transformative path.
Quantum and molecular computing, meanwhile, hint at a future in which AI can operate not only with greater computational power but also with the kind of energy efficiency that nature has perfected over millions of years.
This marriage of biological inspiration and technological innovation holds the key to overcoming the current limitations of AI and could usher in a new era of sustainable and powerful computing solutions.
Macro Hive AI
Macro Hive's vision is to be the most trusted partner in the financial industry, leveraging the synergy of natural and artificial intelligence to revolutionize market insights and decision-making.
We aim to lead the industry by setting the highest standards for accuracy, reliability and innovation, enabling financial institutions to navigate complex financial markets with unparalleled precision and foresight.
We have already developed a wealth of AI-powered financial analysis tools that help our clients interpret the market and drive revenue. Learn more about our work here.