
Speaker 1: When we launched Core Ultra with Meteor Lake, we also introduced this next-generation chiplet-based design. Now Lunar Lake is the next step, and we're excited to announce it today. Lunar Lake is a game-changing design, with new IP blocks for the CPU, GPU, and NPU. It's going to power the next generation of AI PCs across the industry. We already have over 80 designs across 20 OEMs that will ship in volume, [00:00:30] and we're launching in Q3.

We start with a great CPU: our next-generation Lion Cove processor, which delivers a massive IPC improvement, and alongside that performance, dramatic gains in power efficiency. It delivers superior core performance at almost half the power of Meteor Lake, which was already a great chip. The GPU is also a huge [00:01:00] step forward. It's based on our next-generation Xe2 IP, which provides 50% more graphics performance. We've literally packed a discrete-class graphics card into this amazing chip called Lunar Lake. On top of this, we're delivering up to 48 TOPS of AI performance with our enhanced NPU. And as Satya mentioned, we're collaborating with Microsoft on Copilot+, [00:01:30] and together with 300 ISVs we have more applications and more amazing software support than anyone else.

Some people say you just need an NPU. Simply put, that's not true. Having worked with hundreds of ISVs, we've found that most of them are leveraging CPU, GPU, and NPU performance together. In fact, our new Xe2 GPU is an amazing engine for on-device AI: [00:02:00] it delivers 67 TOPS, a 3.5x performance increase over the previous generation. Only 30% of the ISVs we've worked with use the NPU alone; the combination of the GPU and CPU delivers extraordinary performance.

Speaker 1: There's been talk of this other chip, the X Elite, coming out and beating x86. I want to put that to rest [00:02:30] right now: not true. Lunar Lake, running in our labs today, outperforms the X Elite on CPU, GPU, and AI performance, delivering a staggering 120 TOPS across the entire platform. And it's compatible, so there are no compatibility issues. This is the pinnacle of x86: every enterprise, every customer, every existing driver and feature simply works. It's a no-brainer. Everyone should upgrade. [00:03:00] And the clincher in this debate is the belief that x86 can't win on power efficiency. Lunar Lake shatters this myth as well. This revolutionary new SoC architecture and design delivers unprecedented power efficiency, with up to 40% lower SoC power than the already very capable Meteor Lake.

Customers are demanding high-performance, cost-effective gen AI training and inference solutions, [00:03:30] and they've started looking at alternatives like Gaudi. They want choice. They want open software and hardware solutions, and time to market with significantly reduced TCO. That's why customers like Naver, Airtel, Bosch, Infosys, Seekr, and others are turning to Gaudi 2. And we're putting these pieces together and standardizing through the open source community [00:04:00] and the Linux Foundation. We created the Open Platform for Enterprise AI to make Xeon and Gaudi a standardized AI solution for workloads like RAG.

Speaker 2: So let's start with a simple medical query.

Speaker 1: So this is Xeon and Gaudi working together on medical queries, with a lot of private, sensitive, on-premise data being combined with an open-source LLM. That's right. Very cool.

Speaker 2: So let's see what the LLM has to say.
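The pattern this demo illustrates, retrieval over private on-premise data feeding an open-source LLM, is retrieval-augmented generation (RAG). Below is a minimal sketch of that flow. The `embed()` toy encoder and the stubbed `generate()` call are placeholders invented for this sketch, not the demo's actual Xeon/Gaudi stack; a real encoder and a real model endpoint would replace them.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic stand-in for a real text encoder. It makes the
    pipeline runnable but NOT semantically meaningful; swap in a real
    embedding model for actual relevance."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

# Private, on-premise documents stay local; only the retrieved
# snippets are placed into the LLM prompt.
corpus = [
    "Patient 114: chest x-ray shows a hazy opacity in the lower left lung.",
    "Patient 114: history of smoking; mild fever for three days.",
    "Clinic policy: radiology reports are reviewed within 24 hours.",
]
index = np.stack([embed(doc) for doc in corpus])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)  # cosine similarity of unit vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def generate(prompt: str) -> str:
    # Stub for an open-source LLM served from Gaudi (e.g. behind an
    # OpenAI-compatible endpoint); replace with a real client call.
    return "[model response would appear here]\n--- prompt was ---\n" + prompt

question = "What does the chest x-ray show for patient 114?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```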
Speaker 2: [00:04:30] With a typical LLM, you'd get a standard text answer here. But this is a multimodal LLM, so it also returns a nice image of a chest x-ray.

Speaker 1: Okay, I'm not good at reading these, so what does it say?

Speaker 2: I'm not good at it either, but I'm going to sacrifice my typing skills and cut and paste a little here. The great thing about this multimodal LLM is that you can ask follow-up questions to further explain what's going on. The LLM actually analyzes the image: [00:05:00] tell us a bit more about this hazy opacity you can see at the bottom left. Again, a great example of a multimodal LLM.

Speaker 1: As you can see, Gaudi is not only better on price; it's better on TCO and performance. And this performance will be further improved with Gaudi 3. The Gaudi 3 architecture is the only MLPerf-benchmarked alternative to the H100 for LLM training and inference, [00:05:30] and Gaudi 3 makes it even more powerful. It is projected to deliver 40% faster training time than the H100, 1.5x faster inference than the H200, and 2.3x performance per dollar versus the H100 in inference. In training, Gaudi 3 is expected to deliver 2x performance per dollar versus the H100. That idea is music to our customers' ears: [00:06:00] you get more for less. It's highly scalable and uses open industry standards such as Ethernet, which we'll explain in more detail later. It also supports all the expected open-source frameworks, such as PyTorch and vLLM, and hundreds of thousands of models are now available for Gaudi. And with the Intel Developer Cloud, you can experience Gaudi's capabilities first-hand, with easy access and immediate use. Of course, [00:06:30] the whole ecosystem is lining up behind Gaudi 3, and I'm honored to show you the Gaudi 3 wall today.

Speaker 1: Today we're launching Xeon 6 with E-cores, and we think this is a significant upgrade. [00:07:00] Modern data centers need high core counts, high density, and outstanding performance per watt. This is also our first product on Intel 3, the third of our five nodes in four years, as we continue our march to process technology competitiveness and leadership next year. As we go, we'll watch this rack being filled with our 6th Gen, delivering computing power on par with the 2nd Gen racks. [00:07:30]

Speaker 3: Give me a minute or two and I'll make it in time.

Speaker 1: Go ahead, go ahead and get started. It's also important to think about the data center. Every data center provider I know is blown away by how they can upgrade, how they can expand their footprint and space, and the flexibility of high-performance computing, given the growing demand for AI in the data center. They're getting 144-core processors versus 28 cores in the second generation, [00:08:00] which also gives you the ability to consolidate and attack new workloads with performance and efficiency you've never seen before. Chuck, are you done?

Speaker 3: I'm done. I would have racked a bit more, but you said parity.

Speaker 1: I can add a little more. Okay. So here's what the rack looks like. And what you just saw is that the E-cores deliver a clear advantage for cloud-native and hyperscale workloads: 4.2x performance for media transcode and 2.6x [00:08:30] performance per watt. From a sustainability perspective, this is truly a game changer: consolidation from 3 racks down to 1 rack over a 4-year cycle. Just one 200-rack data center deployment saves 80,000 megawatt-hours of energy. And Xeon is everywhere.
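That 80,000 MWh figure can be sanity-checked with back-of-envelope arithmetic. In the sketch below, the per-rack power draw is an assumed value chosen for illustration, not a number from the talk; it simply shows the claim is the right order of magnitude for a 3:1 consolidation over four years.

```python
# Back-of-envelope check on the "80,000 MWh saved" claim for a
# 200-rack deployment consolidated 3:1 over a 4-year cycle.
legacy_racks = 200
consolidation = 3              # 3 legacy racks -> 1 Xeon 6 rack (keynote claim)
rack_power_kw = 17.0           # ASSUMED average draw per rack; not from the talk
years = 4

new_racks = legacy_racks / consolidation
saved_kw = (legacy_racks - new_racks) * rack_power_kw
saved_mwh = saved_kw * years * 8760 / 1000   # kW x hours -> MWh
print(f"~{saved_mwh:,.0f} MWh saved over {years} years")
# -> ~79,000 MWh, the same order of magnitude as the keynote figure
```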
Imagine the benefits this could have across thousands, tens of thousands of data centers. In fact, with even just 500 racks [00:09:00] upgraded in a data center, as we've just seen, the savings could power 1.4 million Taiwanese homes for a year, take 3.7 million cars off the road for a year, or power Taipei 101 for 500 years. And by the way, it gets even better. If 144 cores is good, let's put two together for 288 cores. So what comes next? [00:09:30] Later this year, we're releasing the second generation of Xeon 6 with E-cores, with a whopping 288 cores. That gives us an incredible 6:1 consolidation ratio, a better claim than anything we've ever seen in the industry.
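Note that 6:1 is a consolidation ratio, not a raw core-count ratio: 288 cores versus 28 is more than 10:1 on cores alone. The sketch below shows one way such a ratio could arise once per-core throughput differences are factored in; the scaling factor is an assumption invented for this illustration, not an Intel figure.

```python
# Illustrative only: why a 6:1 consolidation claim can be lower than
# the raw core-count ratio. The per-core scaling factor below is an
# assumption made up for this sketch, not an Intel number.
old_cores, new_cores = 28, 288
raw_ratio = new_cores / old_cores        # ~10.3x more cores per socket
per_core_throughput_scaling = 0.6        # assumed: density-optimized E-cores
                                         # vs. older-generation P-cores
effective_ratio = raw_ratio * per_core_throughput_scaling
print(f"raw cores {raw_ratio:.1f}:1 -> effective ~{effective_ratio:.1f}:1")
# -> raw cores 10.3:1 -> effective ~6.2:1
```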