Congress has been circling AI regulation for years. The Pentagon’s clash with Anthropic in February 2026 finally gave legislators a wedge. When Senator Jeanne Shaheen brought up the Anthropic controversy at a New Hampshire hearing, she was not just commenting on a single company. She was signaling that the defense dispute will be used to push broader AI oversight that the industry has successfully dodged.
The Pentagon Controversy Gives Congress a Wedge It Lacked
Defense Secretary Pete Hegseth demanded that Anthropic remove two safeguards from its Claude AI: restrictions on mass domestic surveillance of Americans and prohibitions on fully autonomous weapons. Anthropic CEO Dario Amodei refused, stating the company “cannot in good conscience accede” to these demands. The Pentagon responded by designating Anthropic a “supply chain risk,” a label historically reserved for foreign adversaries and unprecedented for an American company. As WMUR and New Hampshire Public Radio reported, the dispute dominated tech and defense coverage in late February and early March 2026.
Congress had no federal AI legislation on the books. That regulatory vacuum meant a defense secretary and a startup CEO were negotiating the boundaries of military AI use with no democratic oversight. Representative Sam Liccardo, whose district covers much of Silicon Valley, announced plans to introduce an amendment to the Defense Production Act prohibiting federal agencies from retaliating against vendors who implement safety guardrails. Senator Ron Wyden pledged to “pull out all the stops” fighting the supply-chain designation. The Anthropic-Pentagon standoff became the story Congress needed to justify action.
Shaheen’s Focus on Anthropic Signals Broader Intent
When Shaheen spoke about the Anthropic controversy at her WMUR-covered hearing, she was not merely summarizing news. She was aligning herself with legislators who see the Pentagon dispute as a catalyst for comprehensive AI regulation. As TechPolicy.Press and Lawfare have argued, Congress, not the Pentagon or private companies, should set the rules for military AI use. The current arrangement amounts to bilateral negotiation between a defense secretary and a CEO, with no durable constraints. Shaheen’s public focus on Anthropic signals that the Pentagon controversy will be the lever for legislation the industry has long resisted.
Silicon Valley has lobbied against broad AI regulation, preferring voluntary commitments and narrow sector-specific rules. The Anthropic dispute upends that strategy. A company that refused to remove safety guardrails was penalized with a supply-chain designation that threatened its $200 million Pentagon contract and partnerships with defense contractors like Palantir. The irony, as multiple analysts noted: OpenAI reportedly secured its own Pentagon deal under terms that included the same two safety provisions Anthropic was penalized for defending.
What This Actually Means
The industry’s hope that AI regulation could be deferred or diluted is collapsing. Shaheen’s hearing is a preview of how Congress will frame the debate: the Pentagon overreach against Anthropic proves that guardrails cannot be left to executive-branch negotiation. Legislators will use this dispute to argue for statutory limits on military AI, surveillance, and autonomous systems. Silicon Valley feared this moment. It is here.
Background
Who is Jeanne Shaheen? The senior U.S. senator from New Hampshire since 2009, Shaheen is a Democrat and the first woman elected both governor and U.S. senator in the state. She has served on the Senate Armed Services and Appropriations committees.
What is Anthropic? An AI safety company founded by former OpenAI researchers, Anthropic develops the Claude AI model. The company has emphasized constitutional AI and refuse-to-deploy policies for high-risk applications.
Sources
WMUR, New Hampshire Public Radio, TechPolicy.Press, Lawfare, Congressman Sam Liccardo