Chip manufacturing is an “ideal application” for NVIDIA acceleration and AI computing, NVIDIA founder and CEO Jensen Huang said Tuesday.
At the ITF World 2023 semiconductor conference in Antwerp, Belgium, Huang detailed how the latest advances in computing are accelerating “the world’s most important industry.”
Huang spoke via video to a gathering of leaders from the semiconductor, technology and telecommunications industries.
“We are excited to see NVIDIA accelerated computing and AI serving the global chip manufacturing industry,” Huang said, detailing how the latest advances in accelerated computing, AI and semiconductor manufacturing intersect.
AI, a Step Up in Accelerated Computing
The exponential increase in CPU performance has dominated the tech industry for nearly 40 years, Huang said.
But CPU design has matured in recent years, he said. Even as demand for computing power surges, the pace of gains in semiconductor performance and efficiency is slowing.
“As a result, the global demand for cloud computing is causing data center power consumption to skyrocket,” Huang said.
Huang said a new approach is needed to reach net zero while supporting the “immeasurable benefits” of increased computing power.
This challenge is a natural fit for NVIDIA, a pioneer of accelerated computing, which pairs the parallel processing capabilities of GPUs with CPUs.
This acceleration, in turn, sparked the AI revolution: a decade ago, deep learning researchers such as Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton discovered that GPUs could serve as cost-effective supercomputers for training neural networks.
Since then, NVIDIA has reinvented the computing stack for deep learning, opening up “multi-trillion-dollar opportunities in robotics, self-driving cars and manufacturing,” Huang said.
By offloading and accelerating computationally intensive algorithms, NVIDIA routinely speeds up applications by 10-100x and reduces power consumption and cost by an order of magnitude, Huang explained.
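The offload pattern is simple in outline. Below is a minimal sketch using the open-source CuPy library as a stand-in for NVIDIA's accelerated libraries; it assumes a CUDA-capable GPU, and the 2D FFT workload is purely illustrative:

    import numpy as np
    import cupy as cp  # GPU-accelerated drop-in for NumPy; assumes a CUDA GPU

    # A compute-heavy kernel: a large 2D FFT, the kind of dense math that
    # dominates many simulation workloads.
    data = np.random.rand(2048, 2048).astype(np.float32)

    cpu_result = np.fft.fft2(data)          # baseline: runs on the CPU

    gpu_data = cp.asarray(data)             # host -> device copy
    gpu_result = cp.fft.fft2(gpu_data)      # same math, thousands of GPU cores
    back_on_host = cp.asnumpy(gpu_result)   # device -> host copy

    # Results agree up to floating-point precision.
    assert np.allclose(cpu_result, back_on_host, rtol=1e-2, atol=1.0)

The host-to-device and device-to-host copies are the price of offloading; the win comes when the math in between is heavy enough to amortize them.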
AI and accelerated computing are working together to transform the technology industry. “We are going through the transition of his two platforms simultaneously, Accelerated Computing and Generative AI,” Huang said.
AI, Accelerated Computing Come to Chip Manufacturing
Huang explained that advanced chip manufacturing requires more than 1,000 steps to create features the size of a biomolecule. Each step must be nearly perfect to yield a functional chip.
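The need for near-perfection is easy to quantify with an illustrative calculation (the figures here are mine, not Huang's): if each of 1,000 steps succeeded 99.9% of the time, the overall yield would be

    0.999^{1000} = e^{1000 \ln 0.999} \approx e^{-1} \approx 0.37,

meaning nearly two-thirds of devices would fail. Per-step fidelity has to be far closer to perfect than that.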
“Sophisticated computational science is performed at every step to calculate features to be patterned and perform defect detection for inline process control,” said Huang. “Chip manufacturing is an ideal application for NVIDIA acceleration and AI computing.”
Huang outlined some examples of how NVIDIA GPUs are becoming increasingly integral to chip manufacturing.
Companies such as IMS Nanofabrication and NuFlare build mask writers, machines that use electron beams to create photomasks, the stencils that transfer patterns onto wafers, while D2S builds multi-rack computing appliances for mask writing. NVIDIA GPUs accelerate the computationally intensive tasks of mask-writer pattern rendering and mask process correction.
Semiconductor manufacturer TSMC and equipment providers KLA and Lasertec use extreme ultraviolet light, known as EUV, and deep ultraviolet light, or DUV, for mask inspection. Here, too, NVIDIA GPUs play a key role, running classical physics modeling and deep learning to generate synthetic reference images and detect defects.
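The reference-comparison idea at the heart of such inspection can be sketched in a few lines. This is a deliberately toy version, not any vendor's actual pipeline; in production the reference is rendered by physics models or deep learning and the comparison is far more sophisticated:

    import numpy as np

    def detect_defects(inspection, reference, threshold=0.2):
        """Toy reference-comparison inspection: flag pixels where the measured
        image deviates strongly from the synthetic reference image."""
        diff = np.abs(inspection.astype(np.float32) - reference.astype(np.float32))
        candidate = diff > threshold            # per-pixel anomaly map
        # Suppress isolated noise hits: require a full 2x2 anomalous block.
        dense = (candidate[:-1, :-1] & candidate[1:, :-1] &
                 candidate[:-1, 1:] & candidate[1:, 1:])
        return np.argwhere(dense)               # coordinates of defect candidates

    # Illustrative data: a binary pattern as the simulated reference, a noisy
    # measurement of it as the inspection image, plus one injected defect.
    rng = np.random.default_rng(0)
    reference = (rng.random((256, 256)) > 0.5).astype(np.float32)
    inspection = reference + rng.normal(0.0, 0.05, (256, 256)).astype(np.float32)
    inspection[100:104, 100:104] = 1.0 - reference[100:104, 100:104]  # defect

    print(detect_defects(inspection, reference))  # clusters around (100, 100)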
KLA, Applied Materials, and Hitachi High-Tech use NVIDIA GPUs for e-beam and optical wafer inspection and review systems.
And in March, NVIDIA announced a collaboration with TSMC, ASML, and Synopsys to accelerate computational lithography.
Computational lithography simulates Maxwell’s equations for the behavior of light as it passes through an optical system and interacts with photoresist, explained Huang.
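For reference, these are the standard time-domain Maxwell curl equations, with the constitutive relations, that such simulators discretize:

    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}, \qquad
    \mathbf{D} = \varepsilon \mathbf{E}, \quad \mathbf{B} = \mu \mathbf{H}.

Solving them, or approximations of them, across every feature edge of a full mask is what makes the workload so enormous.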
Computational lithography is the largest computational workload in chip design and manufacturing, consuming tens of billions of CPU hours per year. Large data centers operate 24/7 to create reticles for new chips.
Introduced in March, NVIDIA cuLitho is a software library with tools and algorithms optimized for GPU-accelerated computational lithography.
“We’ve already sped things up by 50 times,” Huang said. “Tens of thousands of CPU servers can be replaced by hundreds of NVIDIA DGX systems, reducing power consumption and costs by orders of magnitude.”
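cuLitho's algorithms are NVIDIA's own, but the general shape of model-based mask correction is well known. The sketch below is a generic toy, with a Gaussian blur standing in for the real optical model; it iteratively adjusts a mask so that its simulated projection matches the target pattern:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def correct_mask(target, sigma=2.0, gain=0.8, iters=50):
        """Toy model-based mask correction: nudge the mask until its blurred
        (diffraction-limited) projection matches the target pattern."""
        mask = target.astype(np.float32)
        for _ in range(iters):
            printed = gaussian_filter(mask, sigma)  # crude stand-in optical model
            mask += gain * (target - printed)       # correct toward the target
            np.clip(mask, 0.0, 1.0, out=mask)       # keep transmission physical
        return mask

    # Target: a narrow line that an uncorrected mask would print blurred.
    target = np.zeros((128, 128), dtype=np.float32)
    target[:, 60:68] = 1.0
    corrected = correct_mask(target)

Real optical proximity correction and inverse lithography tools run this kind of loop with rigorous optics and resist models over billions of edges, which is why GPU acceleration pays off.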
The savings could cut carbon emissions or be reinvested in new algorithms that push beyond 2 nanometers, Huang said.
What’s Next?
What will the next wave of AI be? Huang described a new kind of AI: “embodied AI,” intelligent systems that can understand, reason about and interact with the physical world.
Examples include robotics, self-driving cars and even chatbots that are smarter because they understand the physical world, he said.
Huang introduced the audience to NVIDIA VIMA, a multimodal embodied AI. VIMA can perform tasks from visual text prompts, such as “rearrange objects to match this scene,” Huang said.
It learns concepts such as “this is a widget,” “that is a thing” and “put this widget in that thing,” and acts accordingly. It can also learn from demonstrations and stay within specified boundaries, Huang said.
VIMA runs on NVIDIA AI, and its digital twin runs on NVIDIA Omniverse, a 3D development and simulation platform. Huang said physics-based AI can emulate physics and learn to make predictions that follow the laws of physics.
Researchers are building systems that mesh information from the real and digital worlds on a massive scale.
NVIDIA is building a digital twin of the Earth called Earth-2, which will first predict weather, then long-range weather and eventually climate. NVIDIA’s Earth-2 team created FourCastNet, a physics-AI model that emulates global weather patterns tens of thousands of times faster than traditional numerical models.
FourCastNet runs on NVIDIA AI, and the Earth-2 digital twin is built on NVIDIA Omniverse.
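FourCastNet is built around Fourier neural operators, which mix features in the frequency domain and therefore scale well to global grids. Here is a minimal NumPy sketch of that core idea, with made-up shapes and random weights standing in for trained parameters; it is illustrative only, not FourCastNet's actual code:

    import numpy as np

    def spectral_mix(x, weights, modes=16):
        """One Fourier-operator-style layer: FFT the field, scale the lowest
        frequency modes with learned weights, inverse-FFT back to grid space."""
        x_hat = np.fft.rfft2(x)                       # to the frequency domain
        out_hat = np.zeros_like(x_hat)
        out_hat[:modes, :modes] = x_hat[:modes, :modes] * weights
        return np.fft.irfft2(out_hat, s=x.shape)      # back to the grid

    # A fake 720x1440 global field (e.g., one atmospheric variable) and
    # randomly initialized weights in place of trained parameters.
    field = np.random.rand(720, 1440)
    weights = np.random.rand(16, 16) + 1j * np.random.rand(16, 16)
    print(spectral_mix(field, weights).shape)  # (720, 1440)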
Such systems promise to address some of the biggest challenges of our time, such as the need for cheap and clean energy.
For example, researchers at the UK Atomic Energy Authority and the University of Manchester are creating a digital twin of their fusion reactor, using physics-AI to emulate plasma physics and robotics to control the reactions that sustain a burning plasma.
Scientists can explore hypotheses by testing them on the digital twin before firing up the physical reactor, improving energy yield, enabling predictive maintenance and reducing downtime, Huang said. “The reactor plasma physics runs on NVIDIA AI, and its digital twin runs on NVIDIA Omniverse,” he said.
Such systems are expected to further advance the semiconductor industry. “We look forward to seeing physics-AI, robotics and Omniverse-based digital twins drive the future of chip manufacturing forward,” Huang said.