Nvidia and Intel will collaborate on custom data center and PC products

Nvidia and Intel announced a collaboration to jointly develop multiple generations of custom data center and PC products that accelerate applications and workloads across hyperscale, enterprise and consumer markets.

As part of the deal, Nvidia will invest $5 billion in Intel’s publicly traded stock.

It’s a historic collaboration between two arch competitors in the chip industry. And it breaks down a longstanding barrier: Nvidia has focused on CPUs based on the Arm ecosystem, as it never had a license to make its own x86 CPUs. The deal doesn’t grant that license, but the companies will now work closely together in the x86 space.

The companies will focus on seamlessly connecting Nvidia and Intel architectures using Nvidia NVLink — integrating the strengths of Nvidia’s AI and accelerated computing with Intel’s leading CPU technologies and x86 ecosystem to deliver cutting-edge solutions for customers.

For data centers, Intel will build Nvidia-custom x86 CPUs that Nvidia will integrate into its AI infrastructure platforms and offer to the market.

For personal computing, Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate Nvidia RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.

Nvidia will invest $5 billion in Intel’s common stock at a purchase price of $23.28 per share. The investment is subject to customary closing conditions, including required regulatory approvals.

“AI is powering a new industrial revolution and reinventing every layer of the computing stack — from silicon to systems to software. At the heart of this reinvention is NVIDIA’s CUDA architecture,” said Nvidia founder and CEO Jensen Huang, in a statement. “This historic collaboration tightly couples Nvidia’s AI and accelerated computing stack with Intel’s CPUs and the vast x86 ecosystem — a fusion of two world-class platforms. Together, we will expand our ecosystems and lay the foundation for the next era of computing.”

“Intel’s x86 architecture has been foundational to modern computing for decades — and we are innovating across our portfolio to enable the workloads of the future,” said Lip-Bu Tan, CEO of Intel, in a statement. “Intel’s leading data center and client computing platforms, combined with our process technology, manufacturing and advanced packaging capabilities, will complement Nvidia’s AI and accelerated computing leadership to enable new breakthroughs for the industry. We appreciate the confidence Jensen and the Nvidia team have placed in us with their investment and look forward to the work ahead as we innovate for customers and grow our business.”

It’s a big reversal of fortunes in tech: Intel was once the far larger company, but Nvidia’s stock has soared during the AI era while Intel has been playing catch-up. Intel has been cutting tens of thousands of jobs, in part because Advanced Micro Devices has produced better x86 processors in recent years. The upheaval cost Intel a couple of CEOs, and Tan came in as a turnaround specialist. This kind of deal shows Tan is willing to make peace with old rivals to help Intel move forward.

The call with the two CEOs

Lip-Bu Tan (left), CEO of Intel, with Jensen Huang, CEO of Nvidia. Source: Nvidia/Intel

I listened in on a call with Huang and Tan about the deal.

In the call, Huang started out by saying, “AI is driving a reinvention of every layer of the computing stack. Sixty years ago, IBM introduced the System 360, the first general-purpose computer. It launched the modern computing era, powered by Moore’s Law and CPUs programmed line by line by human hands. But general purpose computing has reached its limits. To keep advancing, we invented a new way forward. Nvidia pioneered GPU-accelerated computing, increasing performance by orders of magnitude — tens, hundreds, thousands of times faster, while dramatically improving energy and cost efficiency.”

Huang added, “That innovation opened new frontiers in science and industry, and it sparked the Big Bang of artificial intelligence. Today, we’re taking the next great step. Just moments ago, Nvidia and Intel announced a historic partnership to jointly develop multiple generations of x86 CPUs for data centers and PC products.”

Huang said the collaboration will tightly couple and optimize Intel’s x86 CPUs for Nvidia’s AI and accelerated computing architecture.

“Together, our companies will build custom Intel x86 CPUs for Nvidia’s AI infrastructure platforms, our data center platforms, bringing x86 into Nvidia’s NVLink ecosystem. And for personal computing, we’re going to create new Intel x86 CPUs that integrate Nvidia GPU chiplets, fusing the world’s best CPU and GPU to redefine the PC experience. This partnership is a recognition that computing has fundamentally changed: the era of accelerated and AI computing has arrived,” Huang said.

And he said in closing, “Today is a very exciting day and a very big day. Intel and Nvidia are partnering to drive it forward. I’m delighted to partner with Lip-Bu Tan, a longtime friend, and many of my colleagues at Intel in this great partnership, this historic partnership. And now let me turn it over to Lip-Bu to tell you about the exciting partnership we’re entering into.”

Tan thanked Huang and noted they have known each other for more than 30 years.

“I still remember that Jensen had the vision of building the system platform and software, with CUDA, and the long-term vision that he had for the company. And I have to salute him. He has done a fabulous job of building that AI platform, driving a whole new market opportunity. I’m so excited to be able to work together with Jensen to build a new era. This is a historic collaboration between the two companies.”

Tan said it was a “game-changing” opportunity to work together.

“I will say that we are proud that Nvidia is an investor in Intel, and thank you for your support, confidence and trust in us. I think this milestone highlights the critical role that Intel can play in the ecosystem, and I’m grateful for the confidence you place in us from a product perspective,” Tan said.

Tan noted that this collaboration is built on the core strengths of both companies, with Nvidia in AI and accelerated computing and Intel in data center and client PC CPUs.

“This collaboration brings it all together for the best of the industry,” he said.

He also said the collaboration could “unleash a new era of x86 innovation. And x86 has a foundational role to play in the next era of computing. And I’m excited about what can be created together” in combining Nvidia’s NVLink communications tech with x86 CPU technology, interconnect and memory.

Tan said the Nvidia-Intel alliance is good for customers, as it will lead to better products.

Q&A session


Asked how this changes the industry landscape, Huang said it will affect supercomputers, where Nvidia provides AI processors, as well as consumer PCs.

“Thanks a lot, Jim. I’ll take the first shot at it, and you can help me out,” Huang said. “If you look at the AI computing world, let me take it in two segments. The first is AI supercomputers. As you know, Jim, we recently introduced the scale-up NVLink 72 rack-scale computers. That involves designing a custom CPU we call Vera that’s tightly integrated with the Blackwell GPUs, so that we can disaggregate the NVLink switches and scale up into a rack-scale system, essentially having an entire rack behave as if it’s one giant computer, one giant GPU. In order to do that, we have to really customize the CPU. And so this architecture, the NVLink 72 rack-scale architecture, is only available with the Vera CPU, the Arm CPU that we built. For the x86 ecosystem, it’s really unavailable except with server CPUs over PCI Express, and that has limitations in how far you can scale these scale-up systems. So the first opportunity is that, with an Intel x86 CPU, we can now integrate it directly into the NVLink ecosystem and create these rack-scale AI supercomputers.”

Huang continued: “The second thing is, there are 150 million laptops sold per year, and Nvidia’s market targets squarely the gaming and workstation markets, where discrete GPUs are used. We’re very successful there, and we’re going to continue to grow there. But there’s an entire segment of the market where the CPU and the GPU are integrated, whether for form factor reasons, cost reasons or battery life reasons, and that segment has been largely unaddressed by Nvidia today. So what the Intel team and Nvidia are doing is creating an SOC that fuses two processors. It fuses the CPU and Nvidia’s RTX GPU using NVLink, fusing these two dies into essentially one virtual giant SOC. That would become a new class of integrated-graphics laptops that the world has never seen before. That segment of the market is really quite rich, it’s really quite large, and it’s underserved today by the kind of state-of-the-art, world-class GPUs that Nvidia is able to build.”

Tan added: “If I can just add on to it, clearly it’s all about the scale: the best GPU accelerator and the best of x86, linked together with NVLink and able to scale. There are markets where we both are doing well, but we can expand even more in terms of application solutions and the vertical markets we can go after.”
The questioner followed up: “Jensen, will you be using Intel’s foundries to make high-end chips, like the Grace platform or, more importantly, Vera Rubin, the kind of best-in-class semiconductors that right now you have made with Taiwan Semi?”

Huang replied: “We’ve always evaluated Intel’s foundry technology, and we’re going to continue to do that. But today, this announcement is squarely focused on these custom CPUs. With this partnership, with this agreement, we’re essentially going to be a major customer of Intel server CPUs. This is the very first time. At the moment, we buy Arm-based CPUs from TSMC, and the x86-based PCI Express CPUs are sold openly in the marketplace. In the future, we will buy x86 CPUs from Intel, and we will fuse them with NVLink into our rack-scale systems. So we’re going to become a very large customer of Intel CPUs. The second thing is that we’re going to be quite a large supplier of GPU chiplets into Intel x86 SOCs, so in that particular case, we’re going to be a supplier into a market segment we’ve never addressed. In the server, we’re going to be a customer, a major customer, of Intel’s, well integrated into NVLink 72, and essentially resell the CPUs. So this is going to be a great growth business opportunity for both of us. Thank you.”
Ian King of Bloomberg asked the next question: “Thank you very much for doing this. These are interesting times we’re living in. I wondered if you could talk a little bit about how long you’ve been working on this agreement. And obviously you talked a lot about custom solutions and custom parts; those don’t happen overnight. When might we get to the point where we’re seeing devices in the end market for sale?”
Huang said: “The two technology teams have been discussing and architecting solutions now for probably coming up on a year. And it’s really three architecture teams working across the GPU and CPU architectures, as well as the product lines for servers and PCs. The architecture work is fairly extensive, and the teams are really excited about the new architecture. So the teams have been working a while, and we’re excited about the announcement today.”
Michael Acton of the Financial Times asked: “Can you commit to Intel as a foundry for your most advanced AI chips at this point, and does this pave the way for deeper manufacturing collaboration or not? Are you confident Intel is going to get there? And secondly, I’d be interested in what sort of involvement the Trump administration had in this agreement, if any.”
Huang said: “The Trump administration had no involvement in this partnership at all, and they would have been very supportive, of course. Today I had the opportunity to tell Secretary Lutnick, and he was very excited, very supportive of seeing American technology companies working together. I think Lip-Bu and I would both say that TSMC is a world-class foundry. In fact, we’re both very successful customers of TSMC’s. The capabilities of TSMC, from process technology to their rhythm of execution, the scale of their capacity and infrastructure, and the agility of their business operations, all of the magic that comes together to be a world-class foundry supporting customers with such diverse needs, is really quite extraordinary. You just can’t overstate the magic that is TSMC. But our partnership today is completely focused, 100% focused, on the custom CPUs that we’re building for the data center, which now have NVLink capabilities and can connect to the NVLink and Nvidia AI supercomputing ecosystem, and on mobile SOCs for consumer PCs that now fuse the world’s best CPU and the world’s best GPU for consumer products. The first segment, the data center CPU segment, is probably something along the lines of $30 billion a year or so, and this combination of Intel’s technology and ours is going to address a fairly significant swath of that, because it is the fastest-growing segment. We can all agree that the future of computing is going to be about AI through and through. So this is an exciting partnership for the data center market. And then for the consumer markets, 150 million laptops are sold each year, and we’re now going to combine the best CPU and the best GPU. So it’s really, really exciting.”
Tan added: “Clearly, this is historic. This is also only my sixth month as Intel’s CEO, and from day one Jensen and I worked on it, and then we accelerated that process. The two teams worked together on this game-changing opportunity. It’s a deep partnership, and we’re looking forward to the multiple ways we will work together.”
Laura Bratton of Yahoo Finance asked the next question: “Thanks so much for taking my question. I’m really curious about which manufacturing process these new CPUs will be made on. I know Lip-Bu has said that 14A is only going to go forward if it has meaningful volume or customer commitment. Can you comment on which manufacturing process the CPUs are going to be made on? And I know you said this is strictly a product announcement, but can you comment at all on whether this might pave the way for Nvidia to collaborate with Intel Foundry Services in the future?”
Tan said: “I think this announcement is more about the product collaboration. And clearly, as Jensen mentioned, TSMC has been a great partner, a longtime partner, for Nvidia and also for Intel, so we’re going to continue doing that. In terms of process, later on we can describe more, but right now we are focused on the collaboration, and we can have more announcements down the road, when the product is ready.”
Huang added: “I think it’s safe to say that the partnership we’re entering into is going to address some $25 billion to $50 billion of annual opportunity. So this is a very significant partnership, and we’re completely focused on that. One of the things I will say is that our Arm roadmap is going to continue, and we’re fully committed to it. We have lots and lots of customers for Arm. We’re building the next generation of Grace, which is called Vera, and we have a next generation after that. We have exciting CPUs that we’re building based on Arm. We’re building Arm robotics processors, of course; our latest one is called Thor, and it’s used for robots and for autonomous driving. We also have a new Arm product called N1, and that processor is going to go into the DGX Spark and many other versions of products like that. So we’re super excited about the Arm roadmap, and this doesn’t affect any of that. Nvidia’s accelerated computing architecture covers just about every CPU architecture, and our most important interest is that whatever general-purpose computing platform has market reach, we would like to be able to accelerate it to its fullest capability. Today, we have the benefit of partnering with Intel on a CPU platform that unquestionably has the largest enterprise, industrial, cloud and consumer footprint of any CPU in the world. So it’s a really exciting partnership, and we’re going to revolutionize this general-purpose computing platform by adding and fusing Nvidia’s accelerated computing and AI computing architectures.”
Stephen Nellis of Reuters asked: “Thank you, gentlemen, for taking my question. A question for Jensen: Why did you feel it was appropriate or necessary to also make an equity investment in Intel along with this product collaboration? And this is now sort of a string of equity investments we’ve seen. Can we expect other ones with potential partners or foundry customers in the future?”
Huang said: “I appreciate that question, because we thought it was going to be such an incredible investment. This is a big partnership, and we think it’s going to be fantastic for Intel and fantastic for us. We’re building revolutionary products that are going to address some $50 billion annual market. How could we, on the one hand, be excited about the products and how revolutionary they are, and on the other hand not be excited about the opportunities ahead? So we were delighted to become a shareholder, and we’re delighted to have invested in Intel. The return on that investment is going to be fantastic, of course in our own business, but also in our equity share of Intel, and I think it’s going to be fantastic for Intel. It just reflects how excited we are about this partnership.”
Tan said: “Thank you, Jensen. To answer your question: clearly, as I mentioned, one of my top priorities is to strengthen our balance sheet, and I’m focused on that. Secondly, in this particular situation, first of all I’d like to thank Jensen for the confidence in me, and our team at Intel will work really hard to make sure it’s a good return for you. More important, it’s a strategic partnership to drive the products and go to market together to win, so I think it’s very meaningful for us. Thank you.”
Robbie Whelan of The Wall Street Journal asked the next question: “Thanks to both of you for the announcement. Congratulations. There are a lot of questions about whether Nvidia will someday use Intel as its foundry partner for its most advanced AI chips. But with these CPUs we’re talking about under this partnership, will TSMC be fabricating most of those CPUs for the foreseeable future? And then, just really quickly, what’s the bigger addressable market under this partnership, data centers versus PCs and edge computing? In other words, do you see yourselves making more CPUs for PCs under this new arrangement, or more CPUs for data centers?”
“Do you want to take the first part?” Huang asked.

Tan replied: “Sure. As we mentioned earlier, this is more a product collaboration announcement. We both have a lot of respect for TSMC, C.C. Wei and Morris Chang, and we continue to work with them. In terms of Intel Foundry, we continue to make progress. In terms of the yield performance of 18A and 14A, clearly we want to qualify, and then we’re going to decide whether it is the right one for doing our foundry work. So we continue to improve, and at the right time Jensen and I will review that. But overall, we’re going to continue driving our success on the process side, win customers’ confidence and trust, and take it one step at a time.”
Huang said: “One of the things, Lip-Bu, that I would also add is that Intel has the Foveros multi-technology packaging capability, and it’s really enabling here. The reason is that, as we all know, Nvidia’s GPU technology is based on TSMC’s foundry, and this is one of the extraordinary things you can do: connecting Nvidia’s GPU chiplet with Intel’s CPUs in a multi-technology, multi-process packaging capability. It’s really a fabulous way of mixing and matching technology, and that’s one of the reasons why we’re going to be able to innovate so quickly and build these incredibly complex systems and deliver them as multi-chiplet system packages. So I’m really excited about that.” Huang began to address the size of the market, then deferred to Tan.
Tan said: “Thank you. Jensen, you brought up a good point about our advanced packaging. EMIB is also a really good technology, and we will definitely continue to refine it and make sure it’s reliable, with yield improvements. So on that part, we will definitely explore the collaboration opportunity.”
Huang continued: “With respect to the size of the market, the data center market and the PC market are both large. And we’re going to build revolutionary products, first-of-their-kind products; nothing of their kind has ever been built before for the x86 market. If my recollection is correct, the data center CPU market is about $25 billion or so annually, and the notebook market alone is 150 million notebooks sold each year. That gives you a sense of the scale of the work we’re going to do here. In terms of the consumer market, we’re going to address a vast majority of that consumer PC notebook market. And with respect to the data center, we’re going to bring NVLink, which is the scale-up interconnect, the computing fabric of our company, to Intel. So I think these are going to be revolutionary products. All of us working on it are super excited about it, the architects working on it are super excited about it, so we’re looking forward to telling you more about it over time.”

The next question came from Edward Ludlow of Bloomberg.
Ludlow asked: “Jensen, Lip-Bu, thank you very much for your time and willingness to answer questions. Jensen, I appreciate you talking a lot about the addressable market. Could you explain how Nvidia participates in the economics of x86? You make money on Arm-based CPUs, right? So if you could explain how it will work for Nvidia on the top and bottom line, I’d be grateful. And Lip-Bu, congratulations. You’ve been in Silicon Valley, so to speak, for a long time; if you stand on the Intel roof, you can look across the freeway at Nvidia. Would you explain the culture and sentiment inside Intel today in reaction to this new partnership, and what it means for the trajectory of you and your staff?”
Huang said: “Thanks. In the case of the data center server CPU, it’s like us buying the Grace CPU from TSMC, integrating it into our rack-scale systems and selling that; it’s basically the same idea. Today, for x86, we don’t buy any CPUs. We let the market sort it out, and the CPUs are really sold as discrete servers, separate servers that are then connected with our GPUs in a data center, basically using PCI Express retimers and repeaters. Instead of building servers like that, which really don’t have the ability to scale up to NVLink 72 large fabric systems, we are now able to do that with Intel x86 CPUs. So we’ll buy those CPUs from Intel, and then we’ll connect them into superchips that become our compute nodes, which then get integrated into a rack-scale AI supercomputer. In the case of the consumer PC, the current idea is to sell Nvidia’s GPU chiplet either in a pass-through way with Intel, or sold to Intel, and that is then packaged into an SOC. So we buy a server chip on the one hand, and we sell a GPU chiplet on the other hand. In both cases, it expands the market for Intel very significantly, and it expands the market for Nvidia as well.”
Tan said: “In terms of your question about the culture change at Intel: first of all, this is a new Intel culture I’m trying to build, and it’s going to be engineering-focused. I’m extending my relationship with Jensen from the Cadence-Nvidia partnership in terms of driving innovation, and now we are so excited to have this partnership, collaborating on the engineering of x86 and also the GPU accelerator and the AI side, with NVLink. So there’s a lot of engineering collaboration together. The culture I want is really lean and fast-moving, so that we can match Jensen’s fast-moving culture. That’s something I’m looking forward to building: a culture that can match his, to drive the best solution for the market. Thank you.”
Kristina Partsinevelos of CNBC asked the next question: “Thank you both for setting this up and taking my question. Jensen, you spoke about the Arm relationship, which is great, so it’s continuing, and maybe the reaction today was a little overdone. But given SoftBank’s positions across Arm, Intel and now your partnership with Intel, is there some type of broader coordination that I’m missing here? Then I have a follow-up on China.”
Huang said: “Today should have no impact on Arm. And with respect to the second question, not that I’m aware of. There were no communications with anybody else, except between Lip-Bu, myself and the technical teams that were working on this partnership, and we kept it really quiet. Obviously, it’s a very substantial partnership. This is going to expand the market opportunity for Intel in AI infrastructure, which it is largely unexposed to today, and it’s going to expose Intel to the consumer notebook market, where really exquisite GPUs are necessary. These two markets are unexposed to Intel today, and they’re going to be brand-new growth markets for Intel. All the due diligence between our two teams, and all the work that we did, gave us a lot of confidence about the future of Intel. So we’re really betting on it. We’ll become quite a significant shareholder, because we believe in this, and we have confidence in them to partner with us to create these amazing products. But all of these discussions had no relationship to any of the things that you were talking about.”
She followed up: “And just a quick follow-up: Intel, we know, faces different regulatory constraints than you do, and the stories coming out of China are constantly on CNBC; we’re talking about them all the time. Is that part of the calculus here, that Intel’s different regulatory constraints would help you in the medium to long term?”
“I don’t think there’s any relationship there either. I don’t think there will be any impact either way. Thank you all,” Huang said.

The next question came from Anissa Gardizy of The Information.
Gardizy asked: “Thank you so much for the time and for taking the question. My question is for Jensen. I was curious what types of Nvidia customers are interested in the x86 architecture for the CPU, and do you expect any customers that currently use the Arm CPU to switch to x86 in the future?”
Huang said: “Arm among the world’s CSPs is growing, but the vast majority of the world’s CSPs are still x86, and the vast majority of cloud instances for enterprise users are still x86. So I think the x86 footprint is still quite large, and Nvidia addresses it in one of two ways. In the case of Arm, we can scale up to a full rack-scale NVLink system. In the case of x86, we address it through external PCI Express retimers, and we scale up to NVLink 8. So in the case of x86 we scale up to NVLink 8, and in the case of Arm we scale up to NVLink 72. Now, with x86, we can also scale up to NVLink 72. So I think this is a really great growth opportunity for both of us. It also creates a product for the many customers who are still x86-based, and basically the vast majority of the world’s enterprise is still x86-based; they now have state-of-the-art AI infrastructure.”

The next question came from Eva Dou of The Washington Post.
Dou asked: “Thank you for holding this call. President Trump has made manufacturing in the United States such a big push, and both of your companies have committed to multibillion-dollar investments in this area. Could you talk a bit about what are realistically the prospects of manufacturing your chips in America? What proportion of your chips do you expect to be made in the U.S. in the near future, and what are the challenges to doing more of the production here? Thank you.”
“Lip-Bu, would you like to go first?” Huang asked.
Tan said: “Clearly, we like President Trump’s focus on manufacturing in the U.S., and I think it’s important to address that and the opportunity we have in front of us. Meanwhile, we also have Intel’s footprint globally. In that way, we just meet customer requirements, including Nvidia’s, so that they have the flexibility of whatever is most suitable for them. Meanwhile, we continue to improve our yield performance, and the other part is the advanced packaging we just talked about. I think it’s a great opportunity for both of us.”
The operator then concluded the question-and-answer session and turned the call over to Marlene for closing remarks.
Huang said in closing: “I want to thank all of you for joining us today on this historic day. It’s a historic partnership. I want to thank Lip-Bu for his leadership, and the management team of Intel that we’ve had the great privilege of working with, architecting two really exciting product lines and architecting this partnership. We’re going to go and address a new computing era where accelerated computing and AI are essential to every aspect of computing, whether it’s in the data center, in the cloud, or in mobile devices and personal computers. I’m super excited to start the projects and our partnership. I want to thank Lip-Bu again and the Intel team for this exciting announcement and this exciting new partnership.”
Tan said: “Thank you. First of all, I want to thank Jensen and Nvidia for the trust in and support of Intel, and we will work hard to make sure this will be a great success for both of us. What’s more exciting for me is the collaboration: the best of accelerated AI computing and also x86, using NVLink to scale. This is a new compute platform that we are moving forward with, and I’m super excited about the opportunity in front of us. There’s a lot of execution ahead of us, so stay tuned; we will update you when the time comes. I just want to thank all of you for attending this announcement together.”