
“This Is the Beginning of Something Very, Very Big”: Nvidia’s Jensen Huang on AI-Native Companies

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author. Read full disclaimer — issues: report@theaipov.news

Alongside established giants like Walmart, L’Oréal, JPMorgan Chase and Toyota, a new category of firms is emerging at the centre of the AI economy. Nvidia CEO Jensen Huang describes these as AI-native companies — organisations built around artificial intelligence as their core product or infrastructure rather than as a bolt-on feature.

In his GTC keynote, Huang argued that these AI-native firms are part of what he has elsewhere called “the beginning of something very, very big”: a new computing platform shift that will produce its own generation of dominant companies, just as previous shifts produced Microsoft, Google, Amazon, Meta and others.

What Nvidia means by ‘AI-native companies’

Huang’s description of AI-native companies covers a broad spectrum. Some of these firms develop foundation models. Others build tools, frameworks or APIs around those models. Still others integrate AI capabilities deeply into specific vertical applications — from finance and logistics to design and entertainment.

Only a few of these players, such as OpenAI and Anthropic, are household names. Beneath that surface layer sits a rapidly expanding long tail of startups and growth-stage companies operating across different industries. What they share is that AI is not a side project or an add-on; it is the organising principle of the business.

A funding wave measured in hundreds of billions

One of the clearest indicators of this shift has been the scale of venture investment. Huang cited figures showing that venture funding into AI startups reached roughly $150 billion in the last two years — the largest wave of startup capital this segment has seen.

Whereas previous generations of software startups typically raised a few million or tens of millions of dollars, many AI-native companies now raise hundreds of millions or even billions. That change reflects both investor expectations about the size of the opportunity and the much higher cost of building and deploying frontier AI systems.

Why compute and tokens drive AI-native business models

A major reason for the increase in capital requirements is the demand for compute. Training and running modern AI models requires extremely large amounts of computing power, particularly when those models are used to generate or process vast numbers of tokens — the basic units handled by language models and related systems.

Some AI-native companies generate tokens by training and operating their own models. Others build services on top of tokens produced by providers such as OpenAI or Anthropic, effectively turning those tokens into a raw material for higher‑level applications. In both cases, the business model is tightly linked to access to compute and to the efficiency of converting that compute into useful outputs.

That demand has grown further as models have added explicit reasoning. When an AI system breaks a complex problem into smaller steps and grounds those steps in available research and evidence, it becomes more reliable — but it also consumes far more input tokens for context and many more output tokens as it “thinks” through each step. Models such as OpenAI’s o1 introduced this kind of reasoning; the result was a marked increase in the credibility of generative AI and faster adoption of tools like ChatGPT, but also a dramatic rise in computational workload per query.
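The economics behind this are straightforward to sketch. The snippet below is illustrative only: the per-million-token prices and the token counts are hypothetical assumptions, not actual rates from OpenAI, Anthropic or any other provider. It simply shows how a reasoning-style query, which burns many more input and output tokens, multiplies the cost of serving the same question.

```python
# Illustrative only: prices and token counts below are hypothetical
# assumptions, not actual rates from any model provider.

def query_cost(input_tokens, output_tokens,
               price_in_per_m=3.00, price_out_per_m=15.00):
    """Dollar cost of one request, given per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m + \
           (output_tokens / 1e6) * price_out_per_m

# A plain chat answer vs. the same question answered with long
# reasoning traces (more context in, far more "thinking" tokens out).
plain = query_cost(input_tokens=500, output_tokens=400)
reasoning = query_cost(input_tokens=4_000, output_tokens=6_000)

print(f"plain:     ${plain:.4f}")                      # $0.0075
print(f"reasoning: ${reasoning:.4f} ({reasoning / plain:.0f}x)")  # ~14x
```

Under these assumed numbers, one reasoning query costs roughly fourteen times a plain one — which is why token efficiency, and access to the compute behind it, sits at the centre of AI-native business models.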

Agentic systems and how engineers work now

Another major step has been agent-based systems such as Claude Code. Unlike traditional chat-based models, these can interact with real tools: they read files, analyse source code, compile programs, run tests, evaluate results and iterate. Many engineering teams now use a combination of AI coding tools including Claude Code, OpenAI Codex and Cursor IDE; in many organisations almost every software engineer works with one or more AI assistants during development.

That shift also changes how people interact with AI. Earlier systems were mostly used for information queries — what, where or when. Agent-based systems are given instructions to create, build or execute; they can access context, read project files, use external tools and break tasks into steps, then reason and reflect on intermediate results until the task is done. This progression — from perception to generation, then reasoning, then agentic AI that performs real work — has caused a large increase in computing demand, especially for AI inference, and has made GPU capacity scarce in many markets even as vendors like Nvidia ship in volume.
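The loop described above — plan, act through tools, observe the result, repeat until done — can be sketched in a few lines. This is a toy illustration, not any vendor's actual agent architecture: the `stub_model` planner and both tools are hypothetical stand-ins, and a real system would call a large language model at the planning step.

```python
# Minimal sketch of an agentic loop: plan, act with tools, observe,
# repeat. The "planner" is a hypothetical rule-based stub standing in
# for an LLM call; the tools are fakes for illustration.

def stub_model(goal, observations):
    """Hypothetical planner: picks the next action from what it has seen."""
    if not observations:
        return ("read", "config.txt")          # gather context first
    if "tests" not in observations[-1]:
        return ("run_tests", None)             # then verify the work
    return ("done", None)                      # stop once tests ran

def agent(goal, tools, max_steps=5):
    observations = []
    for _ in range(max_steps):                 # bounded iteration
        action, arg = stub_model(goal, observations)
        if action == "done":
            break
        observations.append(tools[action](arg))  # act, then observe
    return observations

tools = {
    "read": lambda path: f"contents of {path}",
    "run_tests": lambda _: "tests passed: 12/12",
}

trace = agent("fix the failing build", tools)
print(trace)  # ['contents of config.txt', 'tests passed: 12/12']
```

Each pass through the loop is another round of model inference, which is one concrete reason agentic workloads multiply compute demand rather than merely adding to it.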

AI-native firms as products of a new computing platform

Huang located these companies within a longer history of computing shifts. In the personal computer era, companies like Microsoft became the standard-bearers of the new platform. In the internet era, firms such as Google and Amazon emerged as dominant players. The mobile and cloud era created companies like Meta and other platform-native businesses.

The current AI platform shift, he argued, is expected to produce another generation of highly influential companies. AI-native firms are not just using AI tools; they are built on top of a computing platform defined by large‑scale models, specialised hardware and extensive software stacks. Nvidia’s contention is that its accelerated computing platform and libraries are one of the foundational layers on which those companies are being built.

How incumbents and AI-native firms intersect

Established enterprises and AI-native companies are not evolving in isolation. Large organisations like Walmart, L’Oréal, JPMorgan Chase and Toyota are adopting AI platforms to reshape their operations, while AI-native startups develop the tools, models and services that those incumbents increasingly rely on.

In practice, that means enterprise workloads are moving onto AI-native infrastructure and frameworks, many of them accelerated by Nvidia’s hardware and software. At the same time, some AI-native companies depend on data, distribution and partnerships with incumbents to reach scale. Huang’s keynote suggested that this interplay between old and new is part of what makes the current moment different from earlier waves of purely consumer‑driven tech disruption.

Sources

  • Keynote remarks by Nvidia CEO Jensen Huang on AI-native companies and venture funding in the AI startup ecosystem
  • Venture investment data on AI startups and mega‑rounds over the past two years
  • Public profiles and reporting on firms such as OpenAI, Anthropic and other AI-native companies building foundation models, tools and vertical applications
