The Trump administration has made accelerating AI adoption across federal agencies a stated priority. Executive Order 14179, signed in January 2025, revoked Biden-era AI guardrails and directed agencies to develop AI Action Plans removing barriers to deployment. The message from the top was clear: move fast, adopt AI, compete globally.
What has followed is considerably more complicated.
Within the same policy window, the Department of Defense issued a memo labeling Anthropic’s Claude a national security risk and ordering all DoD components to cease use within 180 days. The General Services Administration circulated a draft procurement clause that would require government AI vendors to use only “American AI systems,” hand over data and custom model developments to the government, and hold third parties accountable for compliance. Experts watching both tracks are describing the situation bluntly. Jessica Tillipman of George Washington University Law School called the AI Action Plan “one of the most vendor-friendly overarching policies we’ve ever seen” - and then noted that follow-up agency actions “run in the opposite direction. Both send a really bad message to industry.”
For enterprise AI programs selling to or operating within the federal ecosystem, this contradiction is not an abstract policy debate. It is an active governance risk.
Two Policies, One Administration
The tension is structural. The AI Action Plan framework - a directive to agencies to remove friction from AI deployment - operates at the policy level. Actual procurement rules, security classifications, and contracting clauses operate at the implementation level, where different agencies with different mandates are making independent calls.
The result is a layered incoherence. The Office of Management and Budget’s guidance under M-24-10 established accountability and transparency requirements for AI use in federal agencies. The DoD memo introduces security classification logic that treats foundation model providers as potential adversaries. The GSA draft clause introduces supply chain logic that treats foreign-developed AI as an inherent procurement risk. None of these are coordinated with each other, and none are clearly subordinate to the AI Action Plan.
Rebecca Pselos of Government Procurement Strategies identified the underlying structural problem: government wants both innovation velocity and security assurance, but has not done the collaborative work with industry to figure out how to get both without sacrificing either. In the absence of that work, agencies are filling the gap with unilateral rules.
The DoD-Anthropic Incident
The DoD memo on Anthropic’s Claude matters beyond its immediate operational impact. It establishes a precedent: foundation model providers can be classified as security risks based on their policies around model use - not just their national origin or ownership structure. Anthropic objected publicly to what it described as potential use of its models for mass surveillance and autonomous weapons development, and the DoD’s response was a ban rather than a negotiated use-case framework.
For any enterprise operating AI systems that touch federal networks, this creates a new category of governance exposure. If your AI stack includes foundation models from providers who publicly constrain certain use cases, you now have a documented example of a major agency treating that as a disqualifying condition rather than an acceptable operating parameter.
The NIST AI Risk Management Framework addresses this under its “Govern” function, specifically around AI supply chain risk. NIST recommends organizations map dependencies on AI providers and assess policy alignment as part of procurement due diligence. The DoD memo converts that recommendation into an operational requirement for any vendor hoping to maintain DoD contracts.
What the GSA Draft Clause Actually Requires
The nine-page GSA draft for AI schedule contracts deserves close reading by any technology vendor in or pursuing federal work. Its key provisions:
“American AI systems” requirement. Vendors must use only AI tools that qualify under this definition. The definition itself remains ambiguous in the draft - it does not specify whether this means US-incorporated companies, US-operated infrastructure, or US-origin training data. That ambiguity is itself a governance problem: compliance cannot be verified against an undefined standard.
Government data ownership. Any data processed through vendor AI systems - and any custom model developments built on that data - would be owned by the government. This fundamentally changes the economics of AI-as-a-service in federal contracts. Vendors who build reusable fine-tuned models as part of service delivery would be required to surrender that IP.
Third-party accountability. Vendors are responsible for ensuring their AI subcontractors and tool providers also comply. This creates cascading compliance obligations through the AI supply chain that most vendors have not yet modeled.
Industry observers have noted that these provisions, if finalized as drafted, would push smaller vendors out of the federal AI market entirely and push larger vendors to develop parallel government-specific AI stacks at significant cost - which is exactly the outcome a pro-innovation policy framework should be trying to avoid.
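The third-party accountability provision is the one most vendors have not modeled, so it is worth making concrete. A minimal sketch of what "cascading compliance" means in practice: the prime contractor is answerable for every node in its AI supply chain, so a compliance check is a walk over the full subcontractor tree. The field names here (`attests_american_ai`, `data_ownership_accepted`) are illustrative assumptions, since the draft clause does not yet define its terms:

```python
from dataclasses import dataclass, field

@dataclass
class Vendor:
    """A node in the AI supply chain: prime contractor,
    subcontractor, or AI tool provider. Flags are hypothetical
    stand-ins for the draft clause's undefined requirements."""
    name: str
    attests_american_ai: bool
    data_ownership_accepted: bool
    subcontractors: list["Vendor"] = field(default_factory=list)

def compliance_gaps(vendor: Vendor, path: tuple = ()) -> list[str]:
    """Walk the supply chain and report every node that fails either
    condition. Under the draft clause, the prime answers for all of them."""
    here = path + (vendor.name,)
    gaps = []
    if not vendor.attests_american_ai:
        gaps.append(f"{' -> '.join(here)}: no 'American AI' attestation")
    if not vendor.data_ownership_accepted:
        gaps.append(f"{' -> '.join(here)}: data-ownership terms not accepted")
    for sub in vendor.subcontractors:
        gaps.extend(compliance_gaps(sub, here))
    return gaps

# A prime that is itself compliant can still fail on a subcontractor.
prime = Vendor("PrimeCo", True, True,
               subcontractors=[Vendor("ModelCo", False, True)])
for gap in compliance_gaps(prime):
    print(gap)
```

The point of the exercise: a gap anywhere in the tree surfaces at the prime, which is why the obligation is hard to price and why most vendors have not yet modeled it.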
What This Means if You Are Not a Government Contractor
The federal AI governance mess has implications beyond government contracting for a simple reason: federal procurement rules historically migrate into broader regulatory frameworks. Federal Acquisition Regulation clauses have shaped commercial data handling practices for decades. OMB guidance on AI accountability directly influenced how enterprise risk functions approach AI documentation.
The “American AI” framing, the supply chain accountability requirements, and the data ownership provisions being drafted for federal contracts will, after a 12-to-36-month lag, reappear as either regulatory requirements or investor-driven expectations in commercial enterprise AI governance.
If your organization is building an AI governance framework now, the federal procurement draft is a preview of the compliance burden your legal and procurement teams will be managing within the next two to three years. Getting ahead of supply chain documentation, provider policy alignment, and data ownership provisions is cheaper to do proactively than retroactively.
Building a Governance Framework for Policy Uncertainty
The lesson from the current federal situation is that policy instability is itself a governance variable - not an external condition to wait out. Organizations that tie their AI governance frameworks exclusively to specific regulatory regimes will find themselves rebuilding those frameworks each election cycle. Organizations that build frameworks on underlying principles fare better.
NIST AI RMF and ISO/IEC 42001:2023 both take this approach. They define governance in terms of accountability structures, risk assessment processes, human oversight mechanisms, and documentation requirements - not in terms of specific regulatory mandates. A framework built to those standards will satisfy most specific regulatory requirements as they crystallize, including the federal procurement rules currently in draft.
Concrete steps that apply regardless of which way federal policy resolves:
- Map every AI tool in your stack to its provider, origin, and applicable use restrictions.
- Document data flows and ownership provisions in all AI-related vendor contracts.
- Assess supply chain risk for each foundation model dependency, including the provider’s own published use policies.
- Build human override and audit capabilities into any AI workflow touching regulated data or regulated parties.
- Track the GSA draft clause and OMB guidance updates as leading indicators of commercial compliance requirements.
Federal AI governance may be sending mixed signals right now. Your governance framework does not have to.
If you are working through AI governance program design or federal compliance alignment, use the contact form to start the conversation.