📢 Repost information
Original link: https://stratechery.com/2026/an-interview-with-nvidia-ceo-jensen-huang-about-accelerated-computing/
Original author: Ben Thompson
An Interview with Nvidia CEO Jensen Huang About Accelerated Computing
Jensen Huang, welcome back to Stratechery.
JH: It's great to be with you.
You literally just walked off the stage (went a little long, I think), but you spent a lot of this keynote, which I quite enjoyed, explaining what Nvidia is, starting with the history of the programmable shader and the launch of CUDA 20 years ago. We don't need to spend too much time recounting this, you did a good job, and Stratechery readers are certainly familiar — sorry, this is a bit of a lead-up here — Stratechery readers are familiar, and I remember this exactly: someone asked me to explain how it is that Nvidia can announce so many things at a single GTC. This was six, seven years ago, maybe even longer than that, and I explained that the whole thing with CUDA and all the libraries is that it's just sort of doing the same thing again and again, but for specific industries. That's the story you told today, and it's kind of a back-to-the-future moment after the last few GTC keynotes have been pretty AI-centered; CES was pretty AI-centered. Why did you feel the need to tell that story now, to recast CUDA, and why is it important?
JH: Well, because we're going into a whole lot of new industries and because AI is going to use tools, and when AI uses tools, those are tools that we created for humans. AI is going to use Excel, AI is going to use Photoshop, AI is going to use logic synthesis tools, Synopsys tools, and Cadence tools. Those tools have to be super-accelerated; they're going to use databases, which have to be super-accelerated, because AIs are fast. And so I think in this era, we need to get all of the world's software accelerated as fast as possible, and then put it in front of AI so that AI can use it agentically.
So is that a bit where we've already done this for a bunch of sectors and now we're going to do it for a bunch more?
JH: Yeah, a whole bunch more. For example, data processing.
Well, that was sort of a surprise. I didn't expect you to be opening with an IBM partnership.
JH: Yeah, right, that kind of puts it in perspective. I mean, they really started it all.
You wrote last week that AI is a five-layer cake: power, chips, infrastructure, models, and applications. After the last four or five years, is there a concern about being squeezed into the chips box, so that it's important to remind both people and yourselves that you are this vertically integrated company — not just in terms of building systems, but into the entire software stack? You're not just a chip company.
JH: I guess my mind doesn't start with, "What I'm not", it starts with, "What do we need to be?". And back then, we realized that accelerated computing was a full stack problem, you have to understand the application to accelerate it. We realized that we had to understand the application, we had to have the developer ecosystem, we needed to have excellent expertise in algorithm development, because the old algorithms that were developed for CPUs don't work well for GPUs, so we had to rewrite, refactor algorithms so that they could be accelerated by our GPUs.
If we do that, though, you get 50 times speed up, 100 times speed up, 10 times speed up, and so it's totally worth it. I think since the very beginning, we realized, "Ok, what do we want to do, and what does it take to achieve that?".
Now, today we're building AI factories, we're building AI infrastructure all over the world. That's a lot more than building chips, and building chips is obviously important, it's the foundation of it.
Right, that's like one full stack of doing the networking and doing the storage, and now you're into CPUs.
JH: Now you've got to put it all together into these giant systems — a gigawatt factory is probably $50, $60 billion. Out of that $50, $60 billion, probably about, call it $15, $17 billion or so, is infrastructure: land, power, and shell. The rest of it is compute and networking and storage and things like that, and so at that level of investment, unless you're helping customers achieve the level of confidence that they're going to succeed in building it, you just have no hope; nobody's going to risk $50 billion.
So I think that that's the big idea, that we need to help customers not just build chips, but build systems, and then after we build systems, not just build systems, but build AI factories. AI factories have a lot of software inside; it's not just our software, it's a ton of software for cooling management and electricals and redundancies and things like that, and a lot of it is over-designed. It's over-designed because nobody talked to each other.
When you have a lot of people who don't talk to each other, integrate systems, you have to, by definition, over-design your part of it. But if we're working together as one team, we'll make sure that we can push the limits and get more throughput out of the power that we have or save money for whatever throughput you want to have.
Just to go back to that software bit, you mentioned Excel wasn't designed to be used by AI. Claude, for example, has this new functionality to use Excel, so when you talk about wanting to invest in these libraries, is that to enable models like that to do better? Or is that something for Microsoft or for enterprises, who want to use this and don't want to be beholden to some other player in the world?
JH: Well, SQL's a good example. SQL's used by people, and we bang on the SQL systems like anybody else, and it is the ground truth of businesses. Well, it's not just gonna be people banging on our SQL database now, it's gonna be a whole bunch of agents banging on it.
Right, they're gonna do it way faster.
JH: They're gonna need to do it way faster. And so the first thing we have to do is accelerate SQL, that's kind of the simple logic of it.
That makes sense. In terms of models, you noted that language models are only one category. "Some of the most transformative work is happening in protein AI, chemical AI, physical simulation, robots, and autonomous systems", and this is from the piece you wrote last week. You’ve previously made this point while noting in other keynotes, "Everything is a token", I think, is a phrase that you've used before. Do you see transformers as being the key to everything, or do we need new fundamental breakthroughs to enable these applications?
JH: We need all kinds of new models. For example, with transformers, the ability to do attention scales quadratically, and so how do you have really long memory? How can you have a conversation that lasts a very long time and not have the KV cache essentially become, over time, garbage?
Or have entire racks of solid-state drives that are holding KV cache.
JH: And of course, let's say that you were able to record all of our conversation; when you go back and reference some conversation, which part of the reference is most important? There needs to be some new architecture that thinks about attention properly and is able to process that very quickly.
We came up with a hybrid architecture of a transformer with an SSM, and that is what enables Nemotron 3 to be super intelligent and super efficient at the same time, that's an example.
Another example is coming up with models that are geometry aware, meaning a lot of things in life, in nature, are symmetrical. And so when you're generating these models, you don't want it to generate what is just statistically plausible, it has to also be physically based, and so it has to come out symmetrical. And so cuEquivariance, for example, allows you to do things like that.
So we have all these different technologies that are designed — or, for example, when we're generating tokens in words, it comes out in chunks at a time, little bits, tokens at a time; when you're generating motion, you need it to be continuous. And so there's discrete information that you generate and understand, and there's continuous information that you want to generate and understand. Transformers are not ideal for both.
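The long-memory problem Huang describes earlier in this answer (a KV cache that grows with conversation length, versus an SSM whose recurrent state stays fixed) can be made concrete with some back-of-the-envelope arithmetic. The sizes below are illustrative assumptions, not Nemotron's actual configuration:

```python
# Back-of-the-envelope comparison of per-conversation memory for attention
# vs. an SSM. Sizes are illustrative assumptions, not any real model's config.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per=2):
    """KV cache grows linearly with sequence length: two tensors (K and V)
    per layer, each seq_len * n_kv_heads * head_dim elements."""
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per

def ssm_state_bytes(n_layers=32, d_state=128, d_model=4096, bytes_per=2):
    """An SSM keeps a fixed-size recurrent state per layer, independent of
    how long the conversation has run."""
    return n_layers * d_state * d_model * bytes_per

for seq_len in (4_096, 131_072, 1_048_576):
    kv = kv_cache_bytes(seq_len) / 1e9
    ssm = ssm_state_bytes() / 1e9
    print(f"{seq_len:>9} tokens: KV cache {kv:8.2f} GB, SSM state {ssm:.3f} GB")
```

The linear growth of the KV cache is what makes very long conversations expensive; the appeal of a hybrid is that SSM layers carry long-range context at constant cost while attention layers handle precise recall.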
Right, that makes sense.
Reasoning and Coding
One more quote from the piece, you write, "In the past year, AI crossed an important threshold. Models became good enough to be useful at scale. Reasoning improved. Hallucinations dropped. Grounding improved dramatically. For the first time, applications built on AI began generating real economic value". What specifically was that change? Because I think about the timing, I feel like this upcoming year is definitely about agents, I just wrote about it today — but for last year, was that the reasoning? Was that the big breakthrough?
JH: Generative, of course, was a big breakthrough, but it hallucinated a lot and so we had to ground it, and the way to ground it is reasoning, reflection, retrieval, search, so we helped it ground. Without reasoning, you couldn't do any of that, and so reasoning allowed us to ground the generative AI.
And once you ground it, then you could use that system to reason through problems and decompose them into things that you could actually do something about, and so the next generation was tool use. It probably tells you something that search was a service nobody paid for, and the reason is that getting information is very important and very useful, but it's not something you pay for. The bar you have to reach to get somebody to pay you has to be higher than just information. "Where's a good restaurant?" — information like that, I don't think, is worthy enough to get paid for. Some people pay for it, I pay for it.
We now know that we've crossed that threshold. Not only is it able to converse with us and generate information for us, it can now, of course, do things for us. Coding is just a perfect example of that. If you think about it for a second, you realize coding is not really the same modality as language: you have to teach it empty spaces and indentations and symbols, it's almost like a new modality, and you can't generate code just one token at a time, you have to reflect on the chunk of code. That chunk of code has to be factored properly, has to be optimal, and has to obviously compile; it has to be grounded not on probable truth, it has to be grounded on execution.
Right, does it run or not?
JH: It has to run or not. And so I think that code, learning that modality, was a big deal. Once you're able to now do — we pay engineers several hundred thousand dollars a year to code, and so now they have a coding assistant. They can think about architecture. Instead of describing programs in code, which is very laborious, they can now describe software in specifications, which is much more abstract and allows them to be much more productive. And so they describe specifications and architecture, and they're able to use their time to solve and innovate, and so our software engineers 100% use coding agents now. Many of them haven't generated a line of code in a while, but they're super productive and super busy.
Do you think there is a temptation to over-extrapolate from coding, though, precisely because it's verifiable? You have this agent idea where they can go — it's not just that they will generate code, then they can actually verify it, see if it works, if it doesn't, they can go back and do it again, and this can happen without humans because there's a clear, "Does it work or not?".
JH: Well, because you can reflect, you could, let's say, design a house. Designing a house or designing a kitchen used to be the work of architects and designers, but now you could have carpenters do that. So now you've elevated the capability of a carpenter: the carpenter uses an agent to go design a house, design a kitchen, come up with some interesting styles. The agent doesn't have to have some tool to execute.
However, you could give it an example. You say, "These are the styles I'm looking for, I want it to be aesthetic like that". Because the agent is able to reflect, is able to compare its quality of code, its quality of result against some reference, it could say, "You know what, it didn't turn out as well as I hoped, I'm going to go back at it again", and so it iterates. It doesn't have to be fully executable; in fact, the more probabilistic, the more aesthetic, the more subjective the task, if you will, the better AI actually does.
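The generate, score, and reflect loop the two describe can be sketched in a few lines. Everything below (the `propose` and `score` stand-ins, the toy target) is hypothetical scaffolding, not any real agent framework:

```python
# Minimal sketch of the generate -> score -> reflect loop described above.
# `propose` and `score` are hypothetical stand-ins for a model call and a
# quality reference.

def iterate_until_good_enough(propose, score, threshold, max_rounds=5):
    """Call propose(feedback) repeatedly, scoring each attempt against a
    reference; stop when the score clears the threshold or rounds run out."""
    feedback = None
    best = None
    for _ in range(max_rounds):
        attempt = propose(feedback)
        s = score(attempt)
        if best is None or s > best[1]:
            best = (attempt, s)
        if s >= threshold:
            return attempt, s
        feedback = f"score was {s}, below {threshold}"  # the 'reflection'
    return best

# Toy usage: each round the 'agent' nudges its draft toward the reference.
target = 10
state = {"x": 0}

def propose(feedback):
    state["x"] += 3
    return state["x"]

def score(x):
    return -abs(x - target)  # 0 is perfect; more negative is worse

result, s = iterate_until_good_enough(propose, score, threshold=-1)
print(result, s)
```

Note that nothing here requires a compiler or a ground-truth oracle; the reference can be as soft as a style example, which is the point Huang is making about subjective tasks.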
Right, well that's why you almost have two extremes. You have generating images, where there's no right answer, and then you have coding, where there is a right answer, and AI seems to do well at both extremes; the question is how much it will collapse into the middle.
JH: We're fairly certain it could do architecture now, we're fairly certain it could design kitchens and living rooms.
The Role of CPUs in Accelerated Computing
Well, to this point, one of the big things with agents coming online is that you've talked a lot about accelerated computing, and I think you've trash-talked CPUs, as it were, to this day: they're all gonna be removed, everything's gonna be accelerated. Suddenly CPUs are hot again. It turns out they're pretty useful and important, to the extent that you are selling CPUs now. How does it feel to be a CPU salesman?
JH: There's no question that Moore's Law is over. Accelerated computing is not parallel computing. Go back in time 30 years: there were probably 10, 20, 30 parallel computing companies, and only one survived, Nvidia, and the reason why is because we had the good wisdom of recognizing that the goal wasn't to get rid of the CPU, the goal was to accelerate the application.
So what I just falsely accused you of was actually true for everybody else.
JH: We were never against CPUs; we don't want to violate Amdahl's Law. In fact, for accelerated computing inside our systems, we choose the best CPUs, we buy the most expensive CPUs, and the reason is that if that CPU is not the best and most performant, it holds back millions of dollars of chips.
It's like branch prediction: you worried about wasting CPU time, now you're worried about wasting GPU time.
JH: That's right, you just can never have GPUs be squandered, GPU time be idle. And so we always use the best CPUs, to the point where we went and built Grace so that we could have the highest-performance single-threaded CPU and move data around a lot faster. And so accelerated computing was never against CPUs. My thesis is still true that Moore's Law is over; the idea that you would use general purpose computing and just keep adding transistors, that is so dead, and so I think fundamentally we're not against CPUs.
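Amdahl's Law, which Huang invokes here, quantifies why the unaccelerated serial portion caps the whole system. A quick worked example (the 95% accelerated fraction is an illustrative assumption):

```python
# Amdahl's Law: if a fraction p of the work is accelerated by a factor s,
# overall speedup = 1 / ((1 - p) + p / s).

def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Even an effectively infinite accelerator caps out at 20x if 5% stays serial:
print(amdahl_speedup(p=0.95, s=1e12))

# A 100x accelerator already sits near that ceiling:
print(amdahl_speedup(p=0.95, s=100))
```

This is the arithmetic behind "the CPU holds back millions of dollars of chips": once the parallel portion is fast, the only remaining lever is making the serial, single-threaded portion faster.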
However, these agents are now able to do tool use, and the tools that they want to use are tools created for humans, and there are basically two types. There's the stuff that we run in data centers, and most of it is SQL, most of it is database-related; the other type is personal computers. We're now going to have AIs that are able to learn unstructured tool use; the first type of tool use is structured. CLIs are tool use, APIs, they're all structured tool use: the commands are very explicit, the arguments are explicit, the way you talk to that application is very specific.
However, there's a whole bunch of applications that were never designed to have CLIs and APIs, and for those tools, AIs need to learn multi-modal, unstructured tool use: the AI has to be able to go surf a website, it has to be able to recognize buttons and pull-down menus, and just kind of work its way through it like we do. That kind of tool use is going to want PCs, and we have both sides: we have incredibly great data processing systems, and as you know, Nvidia's PCs are the most performant in the world.
So what makes an agent-focused CPU different from other CPUs? So you're going to have a rack of just Vera CPUs.
JH: Oh, really good, excellent. So the way that CPUs were designed in the last decade, they were all designed for hyperscale cloud and the way that hyperscale cloud monetizes CPUs is by the CPU core. So you want to design CPUs that have as many cores as possible that are rentable, the performance of it is kind of secondary.
You're dealing with web latency by and large.
JH: That's exactly right, exactly. And so the number of CPU instances is what you're optimizing for. That's why you see these CPUs with a couple of hundred, 300, 400 cores coming. Well, they're not performant and for tool use, where you have this GPU waiting for the tool use—
And you're going over NVLink.
JH: That's right, you want the fastest single-threaded computer you can possibly get.
So is it just the speed? Or does the CPU itself need to be increasingly parallel so it doesn't have misses and things like that? Or so it's like just all the way down the pipeline is very different?
JH: Yeah, the most important thing is single-threaded performance, and the I/O has to be really great. Because it's now in the data center, the number of single-threaded instances running is going to be quite high, and therefore it's going to bang on the I/O system and the memory controller really hard. Vera's bandwidth per CPU core is three times higher than any CPU that's ever been designed, so it has lots and lots of I/O bandwidth and lots and lots of memory bandwidth, so that it never throttles the CPU. If the CPU gets throttled, then we're holding back a whole bunch of GPUs.
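The core-count-versus-bandwidth point can be illustrated with simple arithmetic. Both parts below are hypothetical, not actual specs for Vera or any shipping CPU:

```python
# Illustrative arithmetic only: these are hypothetical parts, not real specs
# for Vera or any shipping CPU.

def bandwidth_per_core(total_gb_s, cores):
    return total_gb_s / cores

cloud_cpu = bandwidth_per_core(total_gb_s=600, cores=192)   # many rentable cores
agent_cpu = bandwidth_per_core(total_gb_s=1000, cores=64)   # fewer, faster cores

print(f"cloud-style: {cloud_cpu:.2f} GB/s per core")
print(f"agent-style: {agent_cpu:.2f} GB/s per core")
```

The cloud part maximizes rentable instances; the agent part gives each thread several times the memory and I/O bandwidth, which is the design priority Huang describes when a waiting GPU sits behind every CPU thread.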
Is this Vera rack, is it still, you talked about it being very tightly linked to the GPU rack, but is it still disaggregated so that the GPUs can be serving multiple different Vera cores? Whereas you have a Vera core on a board with-
JH: Yeah.
Okay, got it, that makes sense. How does your Intel partnership and the NVLink thing fit into this, if at all?
JH: Excellent. Some of the world is happy with Arm, some of the world still needs, particularly, you know, enterprise computing, a whole bunch of stacks that people don't want to move and so x86 is really important to that.
Has the resiliency of x86 code been surprising to you?
JH: No. Nvidia's PC is still x86, all of our workstations are x86.
Groq
I did want to congratulate you: as you talked about in the keynote today, you are the token king. In your article, you also said that energy is the first principle of AI infrastructure and the constraint on how much intelligence the system can produce. If that's the case, if it's the amount of tokens you can produce and you're constrained by how much energy is in the data center, why do companies even try to compete with the token king?
JH: It's going to be hard because it's not reasonable to build a chip and somehow achieve results that are fairly dramatic. Even in the case of Groq, Groq couldn't deliver the results unless we paired it with Vera Rubin.
Well tell me about this, my next question was about Groq.
JH: So if you look at the entire envelope of inference, on the one hand, you want to deliver as much throughput as possible, on the other hand, you want to deliver as many smart tokens as possible — the smarter the token, the higher price you could charge. These two balance, this tension of maximizing throughput on the one hand, maximizing intelligence on the other hand, is really, really tough to work out.
I do have to say, last year you had a slide talking about this Pareto Curve, and you talked about, I think it was when you introduced Dynamo, how your GPUs could cover the whole thing, and so you didn't have to think about it, just buy an Nvidia GPU, and Dynamo will do both. But now you're here saying, "Well, it doesn't quite cover the whole thing".
JH: We still cover the whole thing better than any system out there. Where we could extend that Pareto frontier is particularly at the extremely high token rates and extremely low latencies, but that also reduces throughput. However, there are now coding agents, AI agents that are producing really, really great economics, and the agents are being attached to humans that are, I mean, they're extremely valuable.
Right, they're even more expensive than GPUs.
JH: And so I want to give my software engineers the highest token rate service, and so if Anthropic has a tier of Anthropic Claude Code that increases coding rate by a factor of 10, I would pay for it, I would absolutely pay for it.
So you're building this product for yourself?
JH: I think most great products are kind of because you see a pain point and you feel the pain point and you know that that's where the market's going to go. We would love for our coding agents to run 10 times faster, but in order to do that, it's just very, very difficult to do that in a high throughput system and so we decided to add the Groq low latency system to it and then we basically co-run, co-process.
Right. And is this just separating decode and prefill?
JH: We're going to do even the high-FLOPS part of decode, the attention part of decode.
So you're disaggregating even down to the decode level.
JH: That's right, and that requires really tight coupling and really, really close integration of software.
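The disaggregation being described, splitting prefill, the FLOPS-heavy attention part of decode, and latency-critical token emission across different hardware, can be sketched as a phase router. The pool names and the three-way split below are hypothetical, not Nvidia's actual Dynamo/Groq design:

```python
# Hypothetical sketch of phase-level disaggregation. The pool names and the
# three-way split are illustrative, not Nvidia's actual Dynamo/Groq design.

POOLS = {
    "prefill": "gpu-prefill-pool",          # compute-bound prompt processing
    "decode_attention": "gpu-decode-pool",  # high-FLOPS attention over the KV cache
    "decode_emit": "low-latency-pool",      # latency-critical token emission
}

def route(phase):
    """Map an inference phase to the hardware pool that serves it."""
    if phase not in POOLS:
        raise ValueError(f"unknown phase: {phase!r}")
    return POOLS[phase]

# A single request's plan walks through all three phases in order:
plan = [route(p) for p in ("prefill", "decode_attention", "decode_emit")]
print(plan)
```

The "really tight coupling" Huang mentions is precisely what a real version of this router needs: the KV cache produced in one pool has to be visible to the next, which is why the software integration is the hard part.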
So how are you able to do that? You say you're shipping later this year, this deal was just announced a couple of months ago.
JH: Well, we started working on disaggregated inferencing with Dynamo; Dynamo really put Nvidia's ideas on the table. The day that I announced Dynamo, everybody should have internalized that I was already thinking about, "How do we disaggregate inference across a heterogeneous infrastructure more finely?", and Groq's architecture is such an extreme version of ours that they had a very hard time.
Dynamo was a year ago, and Groq just happened sort of over Christmas. Was there an event that sort of made you think this needed to happen?
JH: Well remember, I announced Dynamo a year ago, and we've been working on Dynamo for two years, so we've been thinking about disaggregated inference for two, three years, and we started working with Groq maybe six months before we announced the deal. So we had been thinking about working with them on unifying Grace Blackwell and Groq fairly early on.
So as for the interaction with them: I really like the team, and we don't want their cloud service. They had another business that they really believe in, and they still believe in it, they're doing really well with it, and that wasn't a part of the business that we wanted, so we decided to acquire the team and license the technology. Then we'll take the fundamental architecture and we'll...