How Nvidia Became ChatGPT’s Brain and Joined the $1 Trillion Club
By Austin Carr and Ian King for Bloomberg Businessweek
The first time Jensen Huang tried ChatGPT, he asked it to write a poem about his company. Huang, who’d made a bet more than a decade ago that Nvidia Corp.’s computer chips could serve as the brains for artificial intelligence, was pleased with the result: “NVIDIA rises to the challenge. / With their powerful GPUs and AI, / They push the boundaries of technology’s edge.” The robo-poem was evidence, by his literary standards anyway, that the wager was finally paying off.
For much of the past 30 years, Nvidia chips have been the main engine for ultrarealistic explosions and lush foliage in video games such as Call of Duty and Counter-Strike, but Huang strongly suspected they were also uniquely suited to sift through the massive data sets that artificial intelligence requires. To help test this theory, he instructed his team to build a server designed for AI and hand-delivered the first one in 2016 to Elon Musk and Sam Altman, co-founders of OpenAI. Billed as an AI supercomputer, the $129,000 rig was the size of a briefcase and contained eight interconnected graphics processors that could digest in two hours what would take a traditional computer processor more than six days. Huang personally brought it to the startup’s office as a gift, and as he gestured to the components, Musk beamed at the silvery box like a proud father.
Since then, Musk and Altman have had an acrimonious split, but they’re aligned in one way: each has sought access to Nvidia chips for his own projects. OpenAI released ChatGPT late last year, with a brain composed of more than 20,000 Nvidia graphics processors. In February, according to the research firm Similarweb Ltd., the chatbot hit 100 million users, which would be a triumph for OpenAI if it weren’t so expensive to run. Microsoft Corp. has pledged more than $10 billion in funding, which will help cover rising computing costs, and Altman, the startup’s chief executive officer, will need tons more chips from Nvidia to keep up with demand. Huang doesn’t use ChatGPT much, he says, but signed up for the $20-a-month version Altman’s company offers. “He needs the money,” Huang jokes.
And so, too, will just about any company that wants a piece of the AI boom. Nvidia chips are a critical component of the cloud infrastructure that Alphabet, Amazon and Microsoft use. Data-center operators collectively spent $15 billion last year on bulk orders with Nvidia. “You’re going to see tons and tons of ChatGPT-like things,” Huang says in a May 17 interview at Nvidia’s headquarters in Santa Clara, California. “This is basically a rebirth, a reinvention of computing as we know it.”
A week later, Huang showed investors what that rebirth means for Nvidia’s business. Quarterly revenue from data centers—which Nvidia now calls “AI factories”—jumped 14%, to a record $4.28 billion. Its summer sales forecast was 53% higher than analysts expected, hurling its valuation past $1 trillion. It was only the ninth company ever to reach that mark. Overnight, Nvidia grew by almost the entire market cap of one longtime rival, Advanced Micro Devices Inc. (AMD), and is now worth seven times another, Intel Corp. At least three Wall Street analysts used the same word in the titles of their reports: “Wow.”
How Huang orchestrated this transformation from video game chipmaker to AI pioneer is often attributed to his magical ability to see into the future. His deputies will explain it only with anodyne corporate platitudes. Ian Buck, vice president for high-performance computing, says Nvidia is a startup that acts as one team with no corporate politics, reciting versions of phrases 11 of his colleagues used in interviews with Bloomberg Businessweek. It sounded as if they’d been force-fed the same generic training data as ChatGPT.
The reality is that Huang has been wrong almost as much as he’s been right. Nvidia blundered its approach to smartphones, released several computer graphics cards that bombed, evangelized short-lived fads (“crypto mining is here to stay”) and got outmaneuvered by regulators and rivals on its $40 billion attempt to acquire the chip designer Arm Ltd. Huang exhibits a deeply programmed sense of survival. It can involve coldly killing a project the millisecond he realizes Nvidia can’t win or humiliating senior staff to make a point. He speaks with pride about almost going out of business seven times and has been willing to take these risks again and again because they might eventually help him own the future of computing.
Nvidia is suddenly at the core of the world’s most important technology. It owns 80% of the market for a particular kind of chip called a data-center accelerator, and the current wait time for one of its AI processors is eight months. Many big tech companies are on Nvidia’s backlog. But some of Huang’s biggest customers have been designing their own custom chips for years, aimed at reducing their dependence on suppliers such as Nvidia. For now, they’re hooked. “Nvidia has to stumble for some reason to give a competitor a chance,” says Chris Mack, an analyst at Harding Loevner LP, an investment company that owns about $160 million of Nvidia stock. “There’s no viable alternative.”
The thing that makes AI possible—the ChatGPT “poetry,” the software for cars that sort of drive themselves, the computer-generated photo of the pope in a puffy jacket—is the Ampere 100, or A100. Named after the 19th century French physicist André-Marie Ampère, the chip is about the size of a matchbook. Its surface appears smooth until viewed under a microscope, revealing some 54 billion teeny components arranged in what looks sort of like a map of the Tokyo subway system.
Nvidia’s chip architects spent four years refining a digital blueprint of the A100 before sending the design off to Taiwan Semiconductor Manufacturing Co. (TSMC) or Samsung Electronics Co. for production. When a prototype is ready, it’s flown to the US and then, like a VIP, chauffeured from the airport to Nvidia’s campus. There, it’s ushered to a windowless lab lined with screens and cooling pipes hanging from the ceiling. (Without adequate precautions, the chips can get so hot that they burst into flames.)
Engineers, whose job it is to bring these tiny firebugs to life, usually look as if they’re terrified to the point of nausea as they plug the prototype into a test rig. They pray it turns on and goes as fast as it’s supposed to. Any glitch might necessitate a silicon correction, or “re-spin,” which can take months and cost hundreds of millions of dollars in lost sales. Jonah Alben, Nvidia’s senior vice president for graphics-processor engineering, says there’s no moment of triumph, only a “declining sense of concern.”
Back when Huang founded Nvidia, concerns were only soaring. He was 30, had a master’s degree in electrical engineering from Stanford University and had worked at various chipmakers, including AMD. He decided to start a company with two fellow engineers in 1993 after recognizing the need for specialized processors to improve the video games he loved. “His excitement over Flight Simulator was palpable,” recalls board member Tench Coxe. But their initial chips, including one intended for the Sega Dreamcast game console, failed because they bet on a novel architecture that was unpopular with game developers. Nvidia was running out of cash (one of his near-bankruptcies), so Huang backed out of the Sega deal and abruptly changed course.
He instead focused on a new chip designed for computers running Microsoft Windows and signed on Dell and Gateway as customers. Nvidia turned a $4.1 million profit in fiscal 1998, a golden age for computer games that included the releases of Half-Life and StarCraft. The company went public the following year. “I’m told I’m the hardest CEO to kill,” Huang said at the time. By 2006, Nvidia had shipped 500 million graphics processors and had its technology integrated into the Sony PlayStation 3 and Microsoft’s first Xbox console.
For most of this time, Huang dressed sort of like a Best Buy employee—a “propeller-head,” as Apple Inc.’s then head of hardware engineering, Jon Rubinstein, describes him. Then one day he began wearing all-black shirts, pants and a leather jacket and seemingly never changed. He alternates between cerebral revelations and disarming humor in interviews and at public events, but at the office he can be a furious boss who’s prone to swearing, say three people who’ve been on the receiving end and asked not to be identified for fear of being sworn at again. One of them recalls how Huang, if he hasn’t heard the right answer, will demand—frequently between expletives—that an executive retrieve the subordinate who can provide it. Then he’ll wait, in a silent tantrum, checking his inbox until that person arrives or calls. Bob Sherbin, a spokesman for Nvidia, says retention among company leaders is high and they’re “fiercely loyal” to Huang. “They appreciate his humor and his passion for the company,” he says. “And they know that he’s hardest of all on himself.” Almost every employee is required to submit by email their “Top Five Things,” with that exact subject line: a concise summary of their most pressing objectives. Many of these emails go straight to Huang, so he can keep track.
The top thing for Nvidia during most of its existence has been to not get destroyed by Intel. Gaming helped Nvidia carve out a niche for its graphics processing units, known as GPUs. But Intel’s central processing units, or CPUs, were for just about everything else. For decades, Intel was the world’s biggest chipmaker. Its CPUs have been in most computers dating to the 1980s and swallowed a ludicrous 99% share of the market for data-center processors. Intel’s chips could do games, too, but not as well as Nvidia’s.
Here’s the difference: Let’s say you’re going to the grocery store. Your shopping cart is the CPU. You walk the aisles, load up what you need and head to the register. It’s a perfectly normal way to buy your groceries. A GPU, however, is like hiring dozens of people with hand baskets. One gets your cereal, the other fruit, another toilet paper. Each shopper can’t carry as much as the cart, but you can probably guess which approach would win at Supermarket Sweep.
For almost the entire history of computers, this never really mattered, unless you were into video games or film editing. Nvidia’s GPU could perform the specific and repetitive tasks required to load millions of pixels at once for a game of Grand Theft Auto. Intel’s CPU, meanwhile, can bring up an Excel spreadsheet, run a web browser, play a YouTube video and so on.
The GPU way of doing things is known as parallel computing, and Huang thought it could have a profound impact on the most challenging technical problems. In theory, connecting more GPUs together could dramatically expand the amount of data a system could work through in any given time period. It could, he reasoned, address what he said was the end of Moore’s law. Conceived by Intel co-founder Gordon Moore in the 1960s, the law holds that the number of transistors on a chip doubles roughly every two years. That remarkably accurate forecast delivered massive increases in processor performance for a half-century, until things ground to a halt about a decade ago. Adding more Intel CPUs to data centers only metaphorically jammed up grocery aisles with shopping carts.
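The cart-versus-baskets idea can be sketched in a few lines of Python. This is an illustration of the division of labor only, with invented function and variable names; a real GPU runs thousands of such workers in silicon rather than in software threads:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(items):
    # One "shopper with a hand basket": handles a small slice of the list.
    return [item.upper() for item in items]

groceries = ["cereal", "fruit", "toilet paper", "milk", "eggs", "bread"]

# The "shopping cart" (CPU-style): one worker walks the whole list in order.
sequential = fetch(groceries)

# The "hand baskets" (GPU-style): split the list into chunks and let
# several workers process their chunks at the same time, then merge.
chunks = [groceries[i::3] for i in range(3)]
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel = [item for chunk in pool.map(fetch, chunks) for item in chunk]

# Both approaches produce the same items; only the division of labor differs.
assert sorted(parallel) == sorted(sequential)
```

The payoff comes when the list has millions of entries and the hardware can genuinely run the workers side by side, which is exactly what a GPU is built to do.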
Customers began to look around for other options in the 2010s, creating an opening for Huang, whose GPUs operating in parallel could be the perfect substitute for all that data crunching. But a huge obstacle for Nvidia was that almost all the code running on servers at the time had been written for CPUs—for Intel. Fortunately for Nvidia, Huang had a solution that was just coming to fruition. In 2006 he’d rallied his company to construct a new programming language called Cuda, an acronym for “compute unified device architecture,” that could expand the types of software Nvidia’s processors could run.
This idea was rather nuts. The Cuda team had to re-create basic computational processes that have long existed for CPUs (mathematical libraries, debugging tools, etc.), which would enable developers to build software for a GPU’s parallel-processing capabilities. Huang soon mandated that all Nvidia’s new chip designs be made compatible with Cuda, at huge expense. He touted on earnings calls the number of universities that were teaching Cuda, to the confusion of financial analysts and even some employees who couldn’t grasp what all this had to do with gaming. “That was the cash cow,” says a former Nvidia vice president, who, like several others quoted in this story, asked to remain anonymous to avoid alienating Huang. “And the world was not going to run out of teenage boys playing video games.”
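To give a sense of what Cuda code looks like, here is a minimal kernel in the classic introductory style: a million array elements, each scaled by its own GPU thread. This is a generic sketch of the standard Cuda C API, not code from Nvidia or the article:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Each GPU thread handles exactly one array element -- the "hand basket" model.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1 << 20;  // about a million elements
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) host[i] = 1.0f;

    // Copy the data to GPU memory.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread per element, grouped 256 to a block.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    // Copy the results back and inspect one.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("%f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}
```

The triple-angle-bracket launch syntax is the part Cuda added to C: it tells the GPU how to fan the work out across thousands of threads, while the mathematical libraries and debugging tools mentioned above fill in everything else a CPU programmer would expect to have on hand.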
An early experiment with Cuda took place at the bottom of the ocean. WesternGeco, a subsidiary of the oil company Schlumberger NV, worked with Nvidia staffers to optimize an algorithm to electronically scan beneath the seafloor for signs of oil deposits, recalls a former high-level Nvidia engineer. “They had so much data, they’d use helicopters to transfer it from the ships to where they compute it,” this person says. “All that data needed to be processed and turned into ‘Drill here. Look here.’ Literally, $100 million decisions.” Using GPUs, initial tests of the resulting software were able to mine the data more than six times faster than the computers WesternGeco had used before.
Solving such a gnarly problem proved that Nvidia’s technology could do more than games, but it wasn’t until an even bigger breakthrough arrived at an academic competition in 2012 that its full potential became apparent. A project called AlexNet set records for its ability to accurately recognize the contents of images. Its 15.3% error rate was more than 10 percentage points better than the next-closest challenger. The neural network was trained with Cuda and two Nvidia GPUs. AlexNet demonstrated that AI powered by GPUs could perform some tasks at a level approaching a human’s.
When Huang took the stage at Nvidia’s developer conference in 2014, an event billed as the “Woodstock for computational mathematicians,” he spent much of his keynote expounding on the future of AI. “People went there expecting to see explosions and physics simulations the way that you usually got in Jensen’s keynotes,” says Bryan Catanzaro, Nvidia’s vice president for applied deep-learning research. “It totally blew everyone’s mind.” Privately, Huang was saying his company would someday overtake Intel.
Those close to Huang say he has a remarkable ability to erase bad decisions from his company’s collective memory. This Men in Black maneuver helps his teams quickly move on to the next project. In “alignment” gatherings before audiences of as many as 400 employees, Huang asks general managers to present a business strategy as he watches from the front row and delivers a Simon Cowell-like assessment. His critiques can be vicious, according to three people who’ve attended these meetings. The public harangue, these people say, is intended not for the person onstage but for the hundreds behind Huang. They’re supposed to internalize his instructions and adjust their actions accordingly—a management style that’s kind of like parallel computing.
“Nobody really knows how the black box works, but it works on a lot of data, and every once in a while, you’ll get emotions out of it,” says a former longtime Nvidia executive who worked closely with Huang. “He’s almost the perfect AI.”
During the Covid-19 pandemic, when tech stocks were going wild, Nvidia crossed two milestones that would redefine the company. In July 2020, it was crowned America’s most valuable chipmaker. The next month, Nvidia said its quarterly revenue from data centers surpassed gaming for the first time. “I believed him 10 years ago when he said Nvidia would be bigger than Intel,” says Morris Chang, founder of the contract semiconductor manufacturer TSMC.
It wasn’t so much Huang’s proselytizing about AI that was resonating with Wall Street at this time. People were playing more video games and betting huge sums on Bitcoin and other digital currencies, driving demand for Nvidia GPUs, which excelled at crypto mining. Huang tried, unsuccessfully, to ride this momentum and buy chip designer Arm, responsible for the most widely used design standard in the semiconductor industry. The $40 billion bid would finally secure a place for Nvidia in mobile and expand its reach to many other kinds of products. But companies that relied on Arm’s chip designs were already wary of Nvidia’s growing power, and US regulators sued to block the merger. Huang conceded in February 2022.
All the while, AI remained a primary focus for Nvidia executives. The chief financial officer, Colette Kress, says shareholders struggled to understand the pitch. “ ‘You talk into your phone and ask where the nearest Starbucks is—that is AI,’ ” she recalls saying. “ ‘Behind the scenes, there’s this GPU working to solve that problem for you with data.’ I can’t even tell you how many times I’ve said that.” The conversations are easier today: “Super Simple: ChatGPT,” she says.
Ask Nvidia’s customers what it’s like to work with the company, and they’ll tell you it’s similar to dealing with Intel at its peak: no discounts, no negotiating, no skipping the line. Which explains why some of Nvidia’s biggest buyers are trying to create their own chips. None, though, have been able to match Nvidia’s package of chip design and sophisticated programming, which requires extensive and ongoing investment and expertise. “You wish a lot of the other vendors were at the same speed and execution and were creating markets and creating workloads like Nvidia is,” says Nafea Bshara, vice president of Amazon Web Services. “We’d all be in better shape.”
Musk tried to wean Tesla off Nvidia technology in 2018. He unveiled a Tesla-designed chip that eventually replaced Nvidia’s self-driving platform inside the company’s cars. “It’s strategic for them, building their own chip and sort of owning this end to end,” says Sarah Tariq, Nvidia’s vice president for autonomous-driving software. She says Tesla remains a big customer of Nvidia GPUs for data-center training. And Musk recently ordered thousands of Nvidia GPUs for another AI project, according to news reports. He’ll be lucky if he receives them before Labor Day (not because Huang holds a grudge but because no one gets special treatment). Musk didn’t respond to requests for comment.
Alphabet, Amazon and Microsoft have also invested billions of dollars in chip design. Google has made significant strides with its tensor processing units. Midjourney, the popular AI image-generator app, said in March it was adopting Google’s processors for model training alongside Nvidia GPUs. An analysis by New Street Research LLP found that Google’s chip delivers as much as six times the performance per dollar of Nvidia’s A100. But that comes with trade-offs—Google’s are less flexible in how they process data—and the advantage won’t necessarily hold for more than a year or two.
The successor to the A100—the Hopper 100, named after the pioneering programmer Grace Hopper—is now in production and already matches the performance of Google’s chip. Even the most powerful people in the industry are acting “very, very politely” toward Huang, according to Pierre Ferragu, an analyst at New Street. “Everybody is afraid of pissing off Nvidia.” (A Google spokesperson says that the company values its partnership with Nvidia and that its chips are complementary to GPUs.)
Huang demurs when asked about threats to his business. He bristles at complaints about the price of Nvidia GPUs and contends that a customer spends less to power his machines in the long run because they’re so efficient. “We are the save-you-money company,” he says. He refuses to talk about Musk and says he was unaware Midjourney’s allegiance was wavering. He says he doesn’t care if his customers become competitors and he’ll continue to treat Google as one of his best customers because it really is one of his best customers. (Alphabet Inc. is Nvidia’s third-largest client, according to data compiled by Bloomberg.) “We pretty much run away from competition,” Huang deadpans. “I’m a coward. I hate fighting for stuff.”
Huang says he wishes the US and China would stop fighting, too. Last August, Nvidia became a target of government limits on the spread of AI. The Biden administration now requires licenses to export Nvidia’s most advanced chips, including the A100 and H100, to China. So Nvidia quickly spun up a hobbled version of the A100 that won’t trigger the restrictions because it accesses data more slowly.
The US doesn’t want China to achieve parity in chipmaking; Huang argues that President Joe Biden’s restrictions will do the opposite. They incentivize China to foster a homegrown industry, and it already has more than 50 GPU companies, he says. Huang sets the stakes even higher and suggests the restrictions could trigger an international incident—specifically, an invasion of a nearby island where much of the world’s semiconductors, including Nvidia’s, are manufactured. “China is not going to sit back and be regulated,” Huang says. “You got to ask yourself, at what point do they just say, ‘F— it. Let’s go to Taiwan. We’ve got nothing to lose.’ At some point they will have nothing to lose.”
Huang sees the arrival of ChatGPT as the “iPhone moment” for AI. It’s already led to a resurgence of Microsoft’s Bing search engine, mesmerizing new text-to-image capabilities in Adobe Inc.’s Photoshop and stunning advances in medical research. Nvidia’s GPUs are, of course, the foundation for all these.
So Huang has been hopscotching across the planet, sermonizing his company’s role in the AI revolution at an endless series of conferences. He personally adjusts his presentation slides, making sure the photo angles of his GPUs look as striking as possible and meticulously arranging and resizing the logos of Nvidia customers. Lately, however, his slides have featured so many AI clients—Baidu, ExxonMobil, JPMorgan Chase, McDonald’s, Pfizer—that the logos are now tiny, almost indiscernible pixels on the screen.
On a recent sunny afternoon at Nvidia’s California headquarters, Huang staggers into a conference room named after Michael Crichton’s Westworld. That morning he had flown to another tech conference, this one in Las Vegas, delivered a keynote, glad-handed customers, done a television hit and zoomed back to Silicon Valley for this interview. Huang slumps onto a gray sofa. He has every right to be tired, but he appears to be feigning exhaustion as a gag.
Even at age 60, Huang hasn’t shown any signs of wanting to hand over the keys to the machine. “Our company was built so that I know how to run it,” he says. “So for as long as I’m running it, that’s all that matters.” (He once heckled a roomful of his peers at a gala, saying, “You guys know that all of you are serving a term. I’m serving life. When your children are running your companies, I’ll be here.”)
A few months ago, Huang hit his 30th year as head of Nvidia, making him the longest-tenured CEO in the semiconductor industry. But he says those in his orbit know he hates celebrations. They don’t even bring up his birthday. “The only email I got was an automatic email from the HR IT system that says, ‘Dear Jensen, you have an employee who reached a 30-year anniversary.’ And that employee’s name was me,” Huang says with self-satisfaction. “Not one other person said congratulations, happy 30th, nothing.”
Silicon Valley has a proud history of CEOs who terrify employees. But Huang is now leading one of the most important companies shaping the trajectory of AI, and a sizable portion of the population is scared of what AI can do. They want to know what the leaders of AI believe in. Are they ethical? Will their employees have the courage to raise objections? Can they be trusted?
When Huang dropped off that GPU supercomputer to OpenAI in 2016, he signed the box with a marker: “To Elon & the OpenAI team! To the future of computing and humanity.” Musk has since become perhaps the most vocal critic of AI, calling it a threat to society, and has said his split from OpenAI was on ethical grounds. His co-founder, Altman, has warned that AI poses a “risk of extinction” on par with nuclear war. Geoffrey Hinton, a pioneering AI researcher who contributed to the AlexNet breakthrough, has said AI represents a more urgent threat to humanity than climate change.
Yet, when repeatedly pressed on these concerns, Huang fires back, “I don’t care about Sam. I don’t care about what Elon said. I don’t care about what Hinton said. Just ask me.” Huang says Nvidia has software guardrails to keep AI confined to its assigned tasks. He tends to see things in techno-utopian terms.
Huang acknowledges that AI has the potential to do real harm but says it’s no different from the danger of “chemical warfare, fake news and so on.” He wants targeted government regulation—for surgical robots, for AI-assisted flying—but says the idea of a mandated pause on AI development is “silly” and the way to make AI safe is to advance AI. Huang says his two adult children have never expressed to him any anxieties about AI, only amazement at its potential. “We’re frightened about social media, but we’re not frightened by AI,” Huang says. (He clarifies a moment later that both his children work at Nvidia.)
A few weeks later, Huang flew to Taiwan, his birthplace, to give yet another speech on the future of AI. Onstage before a thunderous crowd, he revealed Nvidia’s latest AI supercomputer, a 55-foot-wide, 4-foot-deep system he described as “one giant GPU,” weighing 40,000 pounds. The machine can run so hot, he said, that it’s equipped with 2,000 fans capable, within minutes, of displacing all the air in the expansive auditorium he was in. Huang stepped under a life-size image of the machine displayed behind him to show its daunting scale; he compared it to four elephants. Uh, yeah, nothing scary about that. —With Debby Wu