The world's largest neuromorphic computer has been commissioned at Sandia National Laboratories. Hala Point uses 1,152 of Intel's second-generation Loihi 2 chips for a total of 1.15 billion neurons. It can perform inference on both brain-inspired spiking neural networks (SNNs) and mainstream deep learning-based neural networks (DNNs). The system was built by Intel on behalf of Sandia and is a research prototype used by Sandia researchers.
The scale of such systems is critical, Mike Davies, director of Intel's Neuromorphic Computing Lab, told EE Times.

“While many others in the field believe the edge is where neuromorphics will probably be commercialized first, we are still pursuing scale-up from a basic science perspective, because we all ultimately have the scale of the human brain in mind,” he said. “But also, as we look at the rise of deep learning and mainstream AI, scale is becoming increasingly important. There is clearly a gap in understanding of how to program and train large-scale neuromorphic systems, so we want to stay ahead of algorithmic and software research to provide the hardware headroom and, of course, bring Loihi 2 to the world at scale.”
Hala Point is powered by 1,152 Intel Loihi 2 chips in a 6U chassis, with a total of 1.15 billion neurons and 128 billion synapses distributed across 140,544 cores. It also includes 2,300 x86 processors for auxiliary computing. Hala Point's power envelope is 2.6 kW.
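Taking the published figures at face value, some per-chip numbers follow directly. This is a back-of-the-envelope sketch; the per-chip breakdown is inferred from the totals above, not stated by Intel:

```python
# Hala Point headline figures, as stated above.
chips = 1152
neurons = 1_150_000_000
synapses = 128_000_000_000
cores = 140_544
power_w = 2600  # 2.6 kW power envelope

neurons_per_chip = neurons // chips      # roughly one million neurons per chip
synapses_per_chip = synapses // chips    # ~111 million synapses per chip
cores_per_chip = cores // chips          # 122 neuromorphic cores per chip
neurons_per_watt = neurons // power_w    # ~440,000 neurons per watt of envelope
```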



Intel's previous largest neuromorphic system, Pohoiki Springs, consists of 768 first-generation Loihi chips. Hala Point is the first large-scale system built with the second-generation Loihi 2 chip. Compared with Loihi 1, Loihi 2 has more chip-to-chip communication links per chip, allowing chips to be connected in a three-dimensional array. Hala Point exploits this feature, which Davies says is the biggest system-level departure from the Pohoiki Springs architecture. The bandwidth of the chip-to-chip links is also substantially higher than on Loihi 1, and improvements have been made to minimize redundant traffic. Loihi 2's chip-to-chip communication bandwidth is 5 TB/s.
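To illustrate what a three-dimensional chip array implies for routing, the sketch below enumerates the mesh neighbors of a chip at a given coordinate. The six-neighbor Cartesian mesh here is an assumption for illustration only; the exact Hala Point topology is not described above.

```python
def mesh_neighbors(pos, dims):
    """Yield coordinates of adjacent chips in a hypothetical 3D mesh.

    pos:  (x, y, z) coordinate of a chip.
    dims: (X, Y, Z) extent of the array.
    An interior chip has six neighbors; corner and edge chips have
    fewer, which is why the number of links per chip matters for
    building out the third dimension.
    """
    x, y, z = pos
    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if all(0 <= c < d for c, d in zip((nx, ny, nz), dims)):
            yield (nx, ny, nz)
```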

Mainstream workloads
Hala Point will be the first large-scale neuromorphic system to run both SNNs and sparsified feedforward DNNs, such as those used in mainstream AI today (although the DNNs require conversion and retraining). This is made possible by Loihi 2's support for graded spikes (up to 8 bits) and programmable neurons.
“This is the first time we have demonstrated that large-scale neuromorphic systems can support standard deep learning workloads at competitive efficiency levels,” Davies said. “There's been a lot of focus on the edge, on very small networks along analog lines, where if you have the right small network you can potentially see a benefit, with some caveats. But scaling them up is an entirely different matter.”
Although Intel and Sandia have not yet run a recognizable DNN on Hala Point, they have demonstrated a basic form of DNN (a multilayer perceptron) as a proof of concept. This initial work characterizes Hala Point at 20 POPS, or 15 TOPS/W (INT8), without batching, making it more power efficient than some of today's data center AI accelerators, including GPUs.
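As a sanity check on these figures, the power implied by the stated throughput and efficiency can be worked out with simple arithmetic (this is derived from the numbers above, not a published measurement):

```python
peak_ops = 20e15    # 20 POPS (INT8, no batching), as stated above
efficiency = 15e12  # 15 TOPS/W, as stated above

# Power implied for that workload: throughput divided by efficiency.
implied_power_w = peak_ops / efficiency
# ~1,333 W, comfortably inside the 2.6 kW system envelope.
```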
Both SNN and DNN workloads are enabled by the same underlying properties of Loihi 2's architecture, Davies said. While spiking approaches vary, they all come down to sparsity and fine-grained parallelism, and converted DNNs are just one of the workloads that can take advantage of these features, he added.
“Currently, we are on this dual path: we are still very interested in new brain-inspired algorithms, such as the optimization functions that have yielded exciting results, but on the other hand we want to find ways to speed up standard deep neural networks by exploiting the unique properties of neuromorphic architectures,” he said. “There is a caveat: this is not intended to be just a better deep learning ASIC; it is not that simple and straightforward. But by sparsifying connectivity and sparsifying activity, you can convert these networks into a form that runs very well on [Loihi's] architecture.”
Sandia researchers have some creative and interesting techniques they plan to pursue for this conversion, and the commercial relevance of mainstream AI means the resulting research is “a stone's throw away” from application, Davies said.
Davies' team recently presented a paper at ICASSP describing the basic methodology for converting feedforward networks and applying it to several small-scale video and audio examples. Converting a feedforward DNN to run efficiently on Loihi 2 involves sparsification, which can be achieved by programming the neurons to be stateful (to have memory), enabling temporal sparsification. The conversion is currently a more manual process than Davies would like, but he said the team is working on converting larger and larger networks, up to YOLO-sized models.
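The role of stateful neurons in temporal sparsification can be illustrated with a minimal sketch: each neuron remembers the last value it transmitted and emits a graded "spike" only when its activation has changed by more than a threshold. This is a generic delta-encoding illustration under assumed parameters, not Intel's actual conversion method or the Lava API:

```python
class DeltaLayer:
    """Stateful layer that transmits only significant changes in
    activation (delta encoding), so a slowly varying input stream
    produces mostly zeros, i.e. temporal sparsity."""

    def __init__(self, size, threshold=0.1):
        self.last_sent = [0.0] * size  # per-neuron memory (state)
        self.threshold = threshold

    def step(self, activations):
        spikes = []
        for i, a in enumerate(activations):
            delta = a - self.last_sent[i]
            if abs(delta) >= self.threshold:
                self.last_sent[i] = a  # update the neuron's state
                spikes.append(delta)   # graded spike (multi-bit value)
            else:
                spikes.append(0.0)     # nothing worth transmitting
        return spikes
```

Feeding the same frame twice produces no traffic the second time; on video-like input where most values change slowly, most neurons stay silent most of the time, which is where the efficiency gain over a stateless feedforward pass comes from.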
In a previous conversation with EE Times, Davies identified software and toolchains as key areas holding back neuromorphic computing. Since then, Loihi's open-source software framework, Lava, has updated its compiler to better optimize the mapping to hardware.
“Hala Point makes that need very clear: it's not a matter of mapping the network onto one or a few chips, it's actually scaling out. The mapping algorithm itself is difficult to optimize, so the scalability of the software compilation process is certainly a bottleneck,” he said.
Currently, developers have to work at a relatively low level of abstraction, but Davies noted that Hala Point will be used by Sandia's specialized researchers rather than by Intel's entire neuromorphic research community.
“These users are very willing to work on the lowest level of programming to get convincing results,” he said.

Sandia researchers will use Hala Point for brain-scale computing research, including scientific computing problems in device physics, computer architecture, computer science, and informatics. On the commercial side, industrial Loihi users are demonstrating efficiency gains in communications infrastructure (Ericsson's work on 5G signal optimization for mobile base stations used Loihi for both DNN and optimization workloads). There is also growing interest from the aerospace and defense sector for drones and other SWaP-constrained edge applications. Vehicle interior monitoring is another use case.
For now, Hala Point is limited to Sandia researchers, but Intel plans to follow up with further large-scale systems that will be available to its entire neuromorphic research community, Davies said. That community includes more than 200 research groups across academia and industry.
