
Nvidia, Palantir and Dell Team Up on Air-Gapped AI Platforms

Disclaimer: Perspectives here reflect AI-POV and AI-assisted analysis, not any specific human author. Read full disclaimer — issues: report@theaipov.news

In his latest keynote, Nvidia CEO Jensen Huang highlighted a partnership that brings together Nvidia, Palantir and Dell to stand up what he described as a brand new type of AI platform. The three companies, Huang said, are working together around the Palantir Ontology platform so organisations can deploy powerful AI systems on top of Nvidia's accelerated computing stack even in tightly controlled, air-gapped environments.

Huang framed the effort as part of a broader shift in how enterprises will run AI in sensitive settings. By combining Nvidia’s hardware and software libraries with Palantir’s data and application layer and Dell’s infrastructure footprint, the group aims to give customers a way to run modern AI on data that cannot leave a specific country, network or facility.

What is the Palantir Ontology AI platform?

During the keynote, Huang described the Palantir Ontology platform as an AI platform that can now be stood up together with Nvidia and Dell in “any country” and in “any air‑gapped region”. In practice, that means customers can bring Nvidia’s accelerated computing into places where strict security, sovereignty or regulatory requirements make it difficult to rely on public cloud services alone.

The platform is designed to sit on top of Nvidia's accelerated computing and AI stack, which stretches from data processing libraries for both vector and structured data through to AI model execution, all running on Nvidia GPUs. Palantir's Ontology layer organises operational and analytical data so that organisations can build applications and decision systems on top, while Dell provides the underlying infrastructure that can be deployed in data centres and specialised environments.

Huang’s comments underscored that this is not just about moving a single application into an isolated network. Instead, the goal is to recreate a full AI platform in places that, for legal or security reasons, have to be sealed off from the broader internet. In those settings, air‑gapped deployments allow governments, defence organisations and regulated industries to run AI on sensitive datasets while maintaining tight control over where that data lives.

Why air‑gapped AI platforms matter

The push into air-gapped AI reflects a simple reality: not every workload can sit comfortably in the public cloud. Many of the sectors most interested in AI, such as defence, critical infrastructure, healthcare and parts of financial services, operate under rules that make it difficult or impossible to move certain data out of tightly controlled environments.

By standing up Ontology‑based AI platforms that run on Nvidia’s accelerated computing stack and Dell’s infrastructure, the three companies are trying to meet those customers where they are. Rather than forcing sensitive workloads into a one‑size‑fits‑all cloud model, they are bringing modern AI tooling into environments that can be physically and logically separated from the public internet.

Huang portrayed this arrangement as a way to extend the benefits of accelerated computing to regions and sectors that might otherwise be left behind. With a reproducible pattern that can be deployed “in any country, in any air‑gapped region”, customers can build AI applications that respect local rules on data residency and security while still taking advantage of Nvidia’s libraries and Palantir’s data models.

How this fits into Nvidia’s broader platform strategy

The Palantir and Dell partnership fits into a broader theme that ran throughout Huang’s keynote: Nvidia wants its accelerated computing platform to be available wherever customers need to run AI, whether that is in a hyperscale cloud, an on‑premises data centre, or inside an isolated network.

Huang emphasised that Nvidia’s libraries cover a wide range of use cases, from the handling of vector and structured data to the execution of AI models across different deployment scenarios. By integrating those libraries into platforms like Palantir’s Ontology and pairing them with partners that can deliver infrastructure into specialised environments, Nvidia is extending the reach of its platform into places that are difficult to serve with public cloud alone.

For enterprises and public institutions that must keep their most sensitive data within strict boundaries, the message from Huang’s keynote was clear: they do not have to give up on state‑of‑the‑art AI. Instead, they can adopt air‑gapped platforms assembled from components supplied by Nvidia, Palantir and Dell, and run them on hardware and software stacks tuned specifically for accelerated computing.

How organisations might use these platforms in practice

Huang’s remarks point toward use cases where data sensitivity and operational control matter as much as raw compute. Defence and national security organisations, for example, may want to run AI on classified intelligence or mission data that can never be connected to the open internet. Critical infrastructure operators may need to analyse sensor feeds and operational logs inside tightly secured facilities. Healthcare and financial firms may have regulatory obligations to keep certain records within specific jurisdictions or networks.

In each of those scenarios, an air‑gapped Ontology platform running on Nvidia’s accelerated stack and Dell’s infrastructure could serve as the foundation for applications that bring AI to the data rather than the other way around. Huang’s emphasis on being able to stand up these platforms “in any country” reflects the importance of adapting to local rules while still giving organisations access to modern AI tooling.
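The "bring AI to the data" pattern can be illustrated with a deliberately generic sketch: every step runs on the local host, with no outbound network calls, so the sensitive records never leave the environment. Everything below (the toy hash-based embedding, the record store, and the similarity search) is a hypothetical illustration of the pattern only, not Nvidia's, Palantir's or Dell's actual APIs.

```python
import math

# Hypothetical illustration of "bringing AI to the data": records that
# cannot leave the facility stay in-process, and the model/tooling is
# brought to them. No network calls, no external services.

RECORDS = {  # stands in for data confined to an air-gapped network
    "sensor-001": "coolant pressure anomaly in pump room 3",
    "sensor-002": "routine vibration reading within tolerance",
    "sensor-003": "unexpected pressure drop near intake valve",
}

def embed(text: str, dims: int = 16) -> list[float]:
    """Toy bag-of-words 'embedding' (hash-bucketed, L2-normalised),
    standing in for a real embedding model running on local GPUs."""
    vec = [0.0] * dims
    for token in text.split():
        vec[hash(token) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def search(query: str, top_k: int = 2) -> list[str]:
    """Cosine-similarity search executed entirely over local data."""
    q = embed(query)
    ranked = sorted(
        RECORDS,
        key=lambda rid: -sum(a * b for a, b in zip(q, embed(RECORDS[rid]))),
    )
    return ranked[:top_k]

hits = search("pressure anomaly")
```

The point of the sketch is the boundary, not the algorithm: the query, the records and the "model" all live inside one process, which is the property an air-gapped deployment enforces at facility scale.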

Although the exact deployments will vary by customer and sector, the central idea is consistent with Huang’s broader message: accelerated computing has to be tailored to domains and applications, not just delivered as generic hardware. The Palantir and Dell partnership is one concrete example of how Nvidia is trying to make that principle real for some of the most sensitive environments in the world.

Sources

  • Live keynote remarks by Nvidia CEO Jensen Huang on partnerships with Palantir and Dell and deployment of AI platforms in air‑gapped regions
  • Public documentation and announcements from Nvidia, Palantir and Dell describing their collaboration around AI platforms and on‑premises infrastructure
