Alongside established giants like Walmart, L’Oréal, JPMorgan Chase and Toyota, a new category of firms is emerging at the centre of the AI economy. Nvidia CEO Jensen Huang describes these as AI-native companies — organisations built around artificial intelligence as their core product or infrastructure rather than as a bolt-on feature.
In his GTC keynote, Huang argued that these AI-native firms are part of what he has elsewhere called “the beginning of something very, very big”: a new computing platform shift that will produce its own generation of dominant companies, just as previous shifts produced Microsoft, Google, Amazon, Meta and others.
What Nvidia means by ‘AI-native companies’
Huang’s description of AI-native companies covers a broad spectrum. Some of these firms develop foundation models. Others build tools, frameworks or APIs around those models. Still others integrate AI capabilities deeply into specific vertical applications — from finance and logistics to design and entertainment.
Only a few of the most prominent players are household names, such as OpenAI and Anthropic. Beneath that surface layer sits a rapidly expanding long tail of startups and growth-stage companies operating across different industries. What they share is that AI is not a side project or an add-on; it is the organising principle of the business.
A record wave of venture funding
One of the clearest indicators of this shift has been the scale of venture investment. Huang cited figures showing that venture funding into AI startups reached roughly $150 billion in the last two years — the largest wave of startup capital this segment has seen.
Whereas previous generations of software startups typically raised a few million or tens of millions of dollars, many AI-native companies now raise hundreds of millions or even billions. That change reflects both investor expectations about the size of the opportunity and the much higher cost of building and deploying frontier AI systems.
Why compute and tokens drive AI-native business models
A major reason for the increase in capital requirements is the demand for compute. Training and running modern AI models requires extremely large amounts of computing power, particularly when those models are used to generate or process vast numbers of tokens — the basic units handled by language models and related systems.
Some AI-native companies generate tokens by training and operating their own models. Others build services on top of tokens produced by providers such as OpenAI or Anthropic, effectively turning those tokens into a raw material for higher‑level applications. In both cases, the business model is tightly linked to access to compute and to the efficiency of converting that compute into useful outputs.
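The link between tokens, compute and business model can be made concrete with a rough unit-economics sketch. All prices and token counts below are assumed for illustration only; they are not quoted from any provider's actual price list.

```python
# Illustrative sketch: unit economics of a token-based AI service.
# Prices and token counts are hypothetical, chosen only to show the shape
# of the calculation, not any real provider's rates.

def query_cost(input_tokens: int, output_tokens: int,
               price_in_per_mtok: float, price_out_per_mtok: float) -> float:
    """Dollar cost of one model call, with prices quoted per million tokens."""
    return (input_tokens * price_in_per_mtok
            + output_tokens * price_out_per_mtok) / 1_000_000

# A hypothetical application that resells model output at a flat price per query.
cost = query_cost(input_tokens=2_000, output_tokens=500,
                  price_in_per_mtok=3.0, price_out_per_mtok=15.0)
revenue_per_query = 0.05
margin_per_query = revenue_per_query - cost  # what is left to fund everything else
```

Under these assumed numbers, each query costs about $0.0135 in tokens, so the margin depends almost entirely on how efficiently compute is converted into billable output.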
That demand has grown further as models have added explicit reasoning. When an AI system decomposes a complex problem into smaller steps and grounds each step in available evidence, it becomes more reliable, but it also consumes far more input tokens for context and many more output tokens as it “thinks” through the steps. Models such as OpenAI’s o1 introduced this kind of reasoning; the result was a marked increase in the credibility of generative AI and faster adoption of tools like ChatGPT, but also a dramatic rise in the computational workload per query.
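The per-query blow-up from explicit reasoning can be illustrated with simple arithmetic. The token counts here are assumptions for the sake of the example, not measurements from any real model:

```python
# Rough illustration of why explicit reasoning multiplies compute per query.
# All token counts are hypothetical.

direct_in, direct_out = 200, 100   # one-shot answer: short prompt, short reply
reason_in = 200 + 4_000            # same question plus retrieved context
reason_out = 100 + 3_000           # final answer plus intermediate "thinking" tokens

direct_total = direct_in + direct_out
reason_total = reason_in + reason_out
multiplier = reason_total / direct_total  # how many times more tokens per query
```

Even with these modest assumed numbers, the reasoning path processes more than twenty times as many tokens as the one-shot path, which is why reasoning models push so hard on inference capacity.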
Agentic systems and how engineers work now
Another major step has been agent-based systems such as Claude Code. Unlike traditional chat-based models, these can interact with real tools: they read files, analyse source code, compile programs, run tests, evaluate results and iterate. Many engineering teams now use a combination of AI coding tools including Claude Code, OpenAI Codex and Cursor IDE; in many organisations almost every software engineer works with one or more AI assistants during development.
That shift also changes how people interact with AI. Earlier systems were mostly used for information queries — what, where or when. Agent-based systems are given instructions to create, build or execute; they can access context, read project files, use external tools and break tasks into steps, then reason and reflect on intermediate results until the task is done. This progression — from perception to generation, then reasoning, then agentic AI that performs real work — has caused a large increase in computing demand, especially for AI inference, and has made GPU capacity scarce in many markets even as vendors like Nvidia ship in volume.
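The agentic loop described above (instruct, act with tools, observe, iterate) can be sketched in a few lines. Everything here is a hypothetical stand-in: `call_model` fakes the LLM's decision and `run_tests` fakes a real tool, purely to show the control flow, not any actual agent framework's API.

```python
# Minimal sketch of an agentic loop: decide on an action, use a tool,
# record the observation, and repeat until the model says it is done.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    max_steps: int = 10
    history: list = field(default_factory=list)

    def call_model(self, prompt: str) -> dict:
        # Stand-in for an LLM call that chooses the next action.
        # Faked here so the sketch runs end to end: iterate twice, then stop.
        if len(self.history) < 2:
            return {"action": "run_tests", "done": False}
        return {"action": "finish", "done": True}

    def run_tests(self) -> str:
        # Stand-in for a real tool: compiling the project, running the suite, etc.
        return "tests passed"

    def run(self) -> list:
        for _ in range(self.max_steps):
            decision = self.call_model(self.goal)
            if decision["done"]:
                break
            observation = self.run_tests()    # act in the world via a tool
            self.history.append(observation)  # keep the result as context
        return self.history

agent = Agent(goal="make the test suite pass")
trace = agent.run()
```

Real systems such as Claude Code add file access, richer tools and genuine model calls, but the loop structure (decide, act, observe, reflect) is the same, and each pass through it consumes more inference tokens.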
AI-native firms as products of a new computing platform
Huang located these companies within a longer history of computing shifts. In the personal computer era, companies like Microsoft became the standard-bearers of the new platform. In the internet era, firms such as Google and Amazon emerged as dominant players. The mobile and cloud era created companies like Meta and other platform-native businesses.
The current AI platform shift, he argued, is expected to produce another generation of highly influential companies. AI-native firms are not just using AI tools; they are built on top of a computing platform defined by large‑scale models, specialised hardware and extensive software stacks. Nvidia’s contention is that its accelerated computing platform and libraries are one of the foundational layers on which those companies are being built.
How incumbents and AI-native firms intersect
Established enterprises and AI-native companies are not evolving in isolation. Large organisations like Walmart, L’Oréal, JPMorgan Chase and Toyota are adopting AI platforms to reshape their operations, while AI-native startups develop the tools, models and services that those incumbents increasingly rely on.
In practice, that means enterprise workloads are moving onto AI-native infrastructure and frameworks, many of them accelerated by Nvidia’s hardware and software. At the same time, some AI-native companies depend on data, distribution and partnerships with incumbents to reach scale. Huang’s keynote suggested that this interplay between old and new is part of what makes the current moment different from earlier waves of purely consumer‑driven tech disruption.
Sources
- Keynote remarks by Nvidia CEO Jensen Huang on AI-native companies and venture funding in the AI startup ecosystem
- Venture investment data on AI startups and mega‑rounds over the past two years
- Public profiles and reporting on firms such as OpenAI, Anthropic and other AI-native companies building foundation models, tools and vertical applications