
  • As we've launched the Core Ultra with Meteor Lake, it also introduced this next generation of chiplet-based design.

  • And Lunar Lake is the next step forward.

  • And I'm happy to announce it today.

  • Lunar Lake is a revolutionary design.

  • It has new IP blocks for the CPU, GPU, and NPU.

  • It'll power the largest number of next-gen AI PCs in the industry.

  • We already have over 80 designs with 20 OEMs that will start shipping in volume in Q3.

  • First, it starts with a great CPU.

  • And with that, this is our next-generation Lion Cove processor, which has significant IPC improvements and delivers that performance while also delivering dramatic power-efficiency gains.

  • So it's delivering Core Ultra performance at nearly half the power that we had in Meteor Lake, which was already a great chip.

  • The GPU is also a huge step forward.

  • It's based on our next-generation Xe2 IP.

  • And it delivers 50% more graphics performance.

  • And literally, we've taken a discrete graphics card and we've shoved it into this amazing chip called Lunar Lake.

  • Alongside this, we're delivering strong AI compute performance with our enhanced NPU, up to 48 TOPS of performance.

  • And as you heard Satya talk about, our collaboration with Microsoft on Copilot+, along with 300 other ISVs, means incredible software support, more applications than anyone else.

  • Now, some say that the NPU is the only thing that you need.

  • And simply put, that's not true.

  • And now having engaged with hundreds of ISVs, most of them are taking advantage of CPU, GPU, and NPU performance.

  • In fact, our new Xe2 GPU is an incredible on-device AI performance engine.

  • Only about 30% of the ISVs we've engaged with are using the NPU alone.

  • The GPU and the CPU in combination deliver extraordinary performance.

  • The GPU delivers 67 TOPS with our XMX engines, a 3.5x gain over the prior generation.

  • And since there's been some talk about this other X Elite chip coming out and its superiority to x86,

  • I just want to put that to bed right now.

  • Ain't true.

  • Lunar Lake, running in our labs today, outperforms the X Elite on the CPU, on the GPU, and on AI performance, delivering a stunning 120 TOPS of total platform performance.
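The 120 TOPS platform figure can be roughly reconciled with the per-block numbers quoted elsewhere in the keynote. Only the GPU (67 TOPS) and NPU (48 TOPS) figures appear in the transcript; the CPU's contribution is inferred here as the remainder, which is an assumption:

```python
# Rough tally of Lunar Lake platform TOPS as described in the keynote.
# GPU and NPU figures are from the transcript; the CPU share is inferred
# as the remainder needed to reach the stated 120 TOPS total (assumption).
gpu_tops = 67                          # Xe2 GPU with XMX engines
npu_tops = 48                          # enhanced NPU
cpu_tops = 120 - gpu_tops - npu_tops   # implied CPU contribution: 5
total = gpu_tops + npu_tops + cpu_tops
print(cpu_tops, total)  # 5 120
```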

  • And it's compatible.

  • So you won't have any of those compatibility issues.

  • This is x86 at its finest.

  • Every enterprise, every customer, every historical driver and capability simply works.

  • This is a no-brainer.

  • Everyone should upgrade.

  • And the final nail in the coffin of this discussion is some say the x86 can't win on power efficiency.

  • Lunar Lake busts this myth as well.

  • This radical new SoC architecture and design delivers unprecedented power efficiency, up to 40% lower SoC power than Meteor Lake, which was already very good.

  • Customers are looking for high-performance, cost effective, gen AI training and inferencing solutions.

  • And they've started to turn to alternatives like Gaudi.

  • They want choice.

  • They want open, open software and hardware solutions and time-to-market solutions at dramatically lower TCOs.

  • And that's why we're seeing customers like Naver, Airtel, Bosch, Infosys, and Seekr turning to Gaudi, too.

  • And we're putting these pieces together.

  • We're standardizing through the open source community and the Linux Foundation.

  • We've created the Open Platform for Enterprise AI to make Xeon and Gaudi a standardized AI solution for workloads like retrieval-augmented generation (RAG).

  • So let me start with maybe a quick medical query.

  • So this is Xeon and Gaudi working together on a medical query.

  • So it's a lot of private, confidential, on-prem data being combined with an open-source LLM.

  • Exactly.

  • OK, very cool.

  • All right, so let's see what our LLM has to say.

  • So you can see that, as with a typical LLM, we're getting the standard text answer here, but this is a multimodal LLM.

  • So we also have this great visual here of the chest X-ray.

  • I'm not good at reading X-rays, so what does this say?

  • I'm not great either.

  • But the nice thing about, and I'm going to spare you my typing skills,

  • I'm going to do a little cut and pasting here.

  • The nice thing about this multimodal LLM is we can actually ask it questions to further illustrate what's going on here.

  • So this LLM is actually going to analyze this image and tell us a little bit more about this hazy opacity, such as it is.

  • You can see here it's saying it's down here in the lower left.

  • So once again, just a great example of multimodal LLM.
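The demo above follows the basic RAG pattern: search the private, on-prem documents first, then prepend the best match to the user's question before it reaches the LLM. A minimal sketch of that flow, with a toy word-overlap retriever and hypothetical records standing in for the confidential data (none of this is Intel's actual stack):

```python
# Toy RAG flow: retrieve a private document, then build an augmented prompt.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy retriever)."""
    doc_words = set(doc.lower().split())
    return sum(w in doc_words for w in query.lower().split())

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the private document most relevant to the query."""
    return max(corpus, key=lambda d: score(query, d))

def build_prompt(query: str, context: str) -> str:
    """Combine the retrieved context with the query for the LLM."""
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Hypothetical on-prem records standing in for confidential patient data.
corpus = [
    "Patient A: chest X-ray shows hazy opacity in the lower left lung.",
    "Patient B: MRI of the knee, no abnormality detected.",
]

query = "What does the chest X-ray show?"
context = retrieve(query, corpus)
prompt = build_prompt(query, context)
print(prompt)
```

In a real deployment, the retriever would be a vector search over embeddings and the prompt would go to the multimodal LLM; the structure of the pipeline stays the same.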

  • And as you see, Gaudi is not just winning on price, it's also delivering incredible TCO and incredible performance.

  • And that performance is only getting better with Gaudi 3.

  • The Gaudi 2 architecture is the only MLPerf-benchmarked alternative to H100s for LLM training and inferencing, and Gaudi 3 only makes it stronger.

  • We're projected to deliver 40% faster time-to-train than H100s, 1.5x versus H200s, faster inferencing than H100s, and 2.3x performance per dollar in throughput versus H100s.

  • And in training, Gaudi 3 is expected to deliver 2x the performance per dollar.

  • And this idea is simply music to our customers' ears.

  • Spend less and get more.

  • It's highly scalable, uses open industry standards like Ethernet, which we'll talk more about in a second.

  • We're also supporting all of the expected open-source frameworks like PyTorch and vLLM.

  • And hundreds of thousands of models are now available on Hugging Face for Gaudi.

  • And with our developer cloud, you can experience Gaudi capabilities firsthand, easily accessible, and readily available.

  • But of course, with this, the entire ecosystem is lining up behind Gaudi 3.

  • And it's my pleasure today to show you the wall of Gaudi 3.

  • Today, we're launching Xeon 6 with E-cores.

  • And we see this as an essential upgrade for the modern data center: high core count, high density, exceptional performance per watt.

  • It's also important to note that this is our first product on Intel 3.

  • And Intel 3 is the third of our five nodes in four years as we continue our march back to process technology competitiveness and leadership next year.

  • I'd like you to fill this rack with the equivalent compute capability of 2nd Gen Xeon using Xeon 6, OK?

  • Give me a minute or two.

  • I'll make it happen.

  • OK, get with it.

  • Come on.

  • Hop to it, buddy.

  • And it's important to think about the data centers.

  • Every data center provider I know today is being crushed by how they upgrade, how they expand their footprint and the space, the flexibility.

  • For high performance computing, they have more demands for AI in the data center.

  • And having a processor with 144 cores, versus 28 cores for 2nd Gen Xeon, gives them the ability both to condense and to attack these new workloads, with performance and efficiency never seen before.
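The core-count math behind that consolidation can be sketched from the two figures quoted above. The fleet size here is a made-up example, and real consolidation depends on per-core performance, not raw counts:

```python
# Illustrative socket consolidation from the quoted core counts only.
old_cores_per_socket = 28    # 2nd Gen Xeon, per the keynote
new_cores_per_socket = 144   # Xeon 6 with E-cores, per the keynote
old_sockets = 72             # hypothetical fleet size for this example

total_cores = old_sockets * old_cores_per_socket       # 2016 cores
# Ceiling division: sockets needed to cover the same core count.
new_sockets = -(-total_cores // new_cores_per_socket)
print(new_sockets)  # 14 sockets replace 72
```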

  • So Chuck, are you done?

  • I'm done.

  • I wanted a few more reps, but you said equivalent.

  • You can put a little bit more in there.

  • OK, so let me get it.

  • That rack has become this.

  • And what you just saw was E-cores delivering this distinct advantage for cloud-native and hyperscale workloads: 4.2x in media transcode, 2.6x performance per watt.

  • And from a sustainability perspective, this is just game changing.

  • You know, with a three-to-one rack consolidation over a four-year cycle, just one 200-rack data center would save 80,000 megawatt-hours of energy.

  • And Xeon is everywhere.

  • So imagine the benefits that this could have across the thousands and tens of thousands of data centers.

  • In fact, if just 500 data centers were upgraded with what we just saw, it would power almost 1.4 million Taiwanese households for a year, take 3.7 million cars off the road for a year, or power Taipei 101 for 500 years.

  • And by the way, this will only get better.

  • And if 144 cores is good, well, let's put two of them together and let's have 288 cores.

  • So later this year, we'll be bringing the second generation of our Xeon 6 with E-cores, a whopping 288 cores.

  • And this will enable a stunning six-to-one consolidation ratio, a better claim than anything we've seen in the industry.
