4 Takeaways from Nvidia’s GTC Keynote
Jensen Huang’s keynote reinforced that Nvidia’s product roadmap remains years ahead of AMD and custom silicon. He also provided further support for his recent earnings call comments, emphasizing that the AI infrastructure buildout is still in its early stages. However, the optimistic outlook was met with some skepticism, as investor focus shifts to the next critical datapoint on the health of the AI trade: hyperscaler CapEx reports due in late April. I continue to believe the AI buildout remains in its infancy, positioning Nvidia to be a multi-year beneficiary.

Key Takeaways

Roughly a third of Jensen Huang’s two-hour keynote was dedicated to reinforcing the view that we’re still in the early stages of the AI buildout. Despite this, investor skepticism remains, underscored by the fact that Nvidia trades at just 20× estimated CY2026 EPS despite expected earnings growth of roughly 30%.
Jensen dismissed the need for Hoppers now that Blackwell is available. My take: He’s so confident in demand that he just threw Hopper under the bus.
Nvidia’s annual performance improvements are intoxicating to companies building AI models, as each year the hardware delivers roughly 4× better performance.
Nvidia’s deal with GM to support autonomous driving won’t move the needle. The upside is that Nvidia already counts autonomy winners like Tesla and Waymo among its customers.
1. We're still early in the AI buildout

Jensen emphasized that AI is still in its early stages, making the case that the industry is significantly underestimating AI’s computational requirements. He pointed to the acceleration of scaling laws and the rise of agentic AI, which is driving the adoption of reasoning models.

Scaling Laws:
Scaling laws refer to the observation that AI performance improves in a smooth, log-linear fashion—meaning more data, larger models, and increased compute lead to better results in a fairly predictable way. While Jensen didn’t provide new data to support that scaling laws continue to hold, the fact that he reiterated this belief is noteworthy. After the “DeepSeek scare” in late January, OpenAI’s Sam Altman and the hyperscalers echoed similar views, reinforcing confidence in scaling laws—consistent with Jensen’s stance.

Reasoning Models:
Jensen reiterated the point he made on the January earnings call: that we are transitioning from an era of simple, one-shot prompts—where users receive a quick answer in seconds—to an era of reasoning models that simulate cognitive processes. In these models, compute requirements increase exponentially.

He suggested that next-generation models could require hundreds, thousands, or even millions of times more compute than today’s simple prompts. That raises an important question: is this realistic?

I believe the answer is yes.

In the future, we will likely need 100× or more compute to support reasoning models. One simple way to frame this is to use how long a model “thinks” as a proxy for how much compute it consumes. From my own experience, a low-level reasoning prompt can take 2–10 minutes to process. Taking roughly 5 minutes as a representative figure implies 300 seconds of compute time; against a one-shot prompt answered in 1–2 seconds, that is 150–300× more. This back-of-the-envelope estimate aligns with Jensen’s bullish outlook.
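The arithmetic behind that multiplier can be laid out explicitly. The 1–2 second one-shot baseline and the 5-minute reasoning figure are my illustrative assumptions, not measured benchmarks:

```python
# Back-of-the-envelope check of the reasoning-model compute multiplier.
# Assumed (illustrative) figures: a one-shot prompt answered in ~1-2 seconds,
# a low-level reasoning prompt "thinking" for ~5 minutes (300 seconds).
one_shot_seconds = (2, 1)          # assumed latency range of a simple prompt
reasoning_seconds = 5 * 60         # ~5 minutes, within the 2-10 minute range

multipliers = [reasoning_seconds / s for s in one_shot_seconds]
print(f"Implied compute multiplier: {multipliers[0]:.0f}x-{multipliers[1]:.0f}x")
```

Using thinking time as a proxy for compute is crude (hardware utilization differs across workloads), but it is enough to show the multiplier lands in the hundreds, not single digits.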

The Big Picture:
The combination of scaling laws and the emergence of reasoning models lays the groundwork for a significant expansion in datacenter buildouts over the next few years. At GTC, Jensen reminded us:

“I’ve said before that I expect data center build-out to reach $1 trillion. And, I am fairly certain we’re going to reach that very soon.”

In 2025, we estimate the build-out will reach $485B. I consider Jensen’s “very soon” to be 2028.
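Going from the estimated $485B in 2025 to $1 trillion by 2028 (my reading of “very soon”) implies a steep but not implausible growth rate, which a quick calculation makes concrete:

```python
# Implied annual growth if datacenter build-out rises from an estimated
# $485B (2025) to Jensen's $1T target by 2028. The 2028 timing is the
# article's interpretation of "very soon", not an Nvidia forecast.
start_bn, target_bn = 485, 1000
years = 2028 - 2025
cagr = (target_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 27% per year
```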

Investor Sentiment:
Despite Jensen’s argument that we’re still early in the AI cycle, investor skepticism remains, underscored by the fact that Nvidia trades at 20× estimated CY2026 EPS, with expected 30% earnings growth next year. This view will be tested in the coming weeks, as investor focus shifts to late April—when hyperscalers like Google, Amazon, and Microsoft report earnings. Updated CapEx expectations will be the pressure point for both Nvidia demand and the broader AI trade.
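A quick sanity check on the valuation math above, taking the 20× CY2026 multiple and ~30% growth figure as given:

```python
# If Nvidia trades at 20x CY2026 EPS and earnings grow ~30% the following
# year, the implied multiple on out-year earnings is notably lower.
# (Both inputs are the article's estimates, not company guidance.)
pe_2026 = 20.0
growth = 0.30
pe_2027_implied = pe_2026 / (1 + growth)
print(f"Implied CY2027 P/E: {pe_2027_implied:.1f}x")   # about 15.4x
```

A ~15× forward multiple on 30% growth is the crux of the skepticism argument: the market is pricing in deceleration.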

2. Blackwell Demand

Jensen downplayed the need for customers to continue purchasing Hopper GPUs, reinforcing confidence in Blackwell demand. As he put it:

“I’ve said before, when Blackwell starts shipping in volume, you couldn’t give Hoppers away.”

The fact that he is openly discouraging Hopper purchases suggests strong confidence in Blackwell uptake. If demand for Blackwell were only in line with—or below—expectations, he likely wouldn’t risk deterring customers from buying Hopper, which remains an important product for the company and will likely account for 30% of sales in the April quarter.

3. Performance Improvements

The GTC product announcements included Blackwell Ultra, Rubin, and co-packaged optics (CPO), a technology that brings optics closer to the GPU to save power, allowing data centers to deploy more GPUs.

Jensen emphasized the steady AI compute performance growth across generations. From Hopper (2022) to Blackwell (2024) to Rubin (2026+), Nvidia has consistently achieved a ~4× AI compute increase per generation. Put simply, each new generation is delivering significantly more bang for the buck. These performance gains—delivered at roughly the same cost—increase the likelihood that customers remain loyal to Nvidia.

In the simplest terms, imagine each generation training an AI model like ChatGPT:

  • Hopper (2022): Trains in 6 months.
  • Blackwell (2024): Trains in 1.5 months.
  • Rubin Ultra (2027): Trains in ~3 days.

Bottom line: Nvidia is shrinking AI training time from months to days every few years. At that pace, by 2030 training time would likely fall below a day.
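The cadence above can be sketched with a simple projection. Assume, as a rough model, ~4× compute per generation on a two-year cadence from the 6-month Hopper baseline; this is my extrapolation of the pattern the article describes, and it will not exactly reproduce the Rubin Ultra figure:

```python
# Rough projection of training time, assuming ~4x AI compute per generation
# on a ~2-year cadence. The 6-month Hopper (2022) baseline is the article's
# illustrative figure; later years are extrapolation, not Nvidia claims.
hopper_months = 6.0
speedup_per_gen = 4.0

time_months = hopper_months
for year in (2024, 2026, 2028, 2030):
    time_months /= speedup_per_gen
    print(f"{year}: ~{time_months * 30:.1f} days")
```

A steady 4× cadence gives roughly 45 days in 2024 and under a day by 2030, consistent with the bottom line above; the article’s ~3-day Rubin Ultra figure implies an even faster step-up for that generation.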

4. The autonomy opportunity

Nvidia announced a deal with GM to support its autonomous driving ambitions. However, optimism around GM’s prospects in autonomy is limited—especially after the company shut down Cruise in December. Catching up to leaders like Waymo and Tesla is a tall order in what increasingly appears to be a winner-takes-most market.

The good news is that Nvidia already works with both Tesla and Waymo, with Jensen noting:

“Waymo robotaxis are incredible… Tesla uses a lot of NVIDIA GPUs in the data center.”

Given GM’s recent retreat from autonomy, skepticism about its long-term strategy in this space remains warranted.
