US Treasury Ends Anthropic Use: Lessons for AI Buyers
Artificial Intelligence
Anthropic
4 Mar 2026

Not sure where to start with AI? Assess readiness, risks and priorities in under an hour.
➔ Download our free AI readiness pack
The U.S. Treasury says it is terminating all use of Anthropic products, including Claude, following a White House directive that has prompted other agencies to phase out Anthropic tools. For organisations using AI at scale, the key lesson is procurement resilience: maintain exit plans, portable workflows, and governance that can survive sudden policy or vendor shifts. (reuters.com)
The U.S. Treasury has announced it will terminate all use of Anthropic products, including Claude. Treasury Secretary Scott Bessent said the move was made at the direction of the President. (reuters.com)
On the surface, this is a single vendor decision. In reality, it’s a stress test for every organisation that’s embedding AI into day‑to‑day operations: what happens when policy, ethics, or supply‑chain positioning changes faster than your rollout?
In this post, we’ll explain what Reuters reported, why it’s unusual, and what teams in the UK and Europe should take from it — even if you never planned to use Claude.
What happened: Treasury ends Anthropic use, others follow
Reuters reports that several U.S. agencies — including State, Treasury and Health and Human Services — are stopping use of Anthropic’s AI products in response to a directive from President Donald Trump. The same reporting says agencies are moving to alternatives, including OpenAI and Google, and that the State Department switched its internal chatbot to OpenAI’s GPT‑4.1.
The decision follows a previous Reuters report that the President directed federal agencies to cease work with Anthropic, with the Pentagon calling the firm a “supply‑chain risk” and setting a six‑month phase‑out.
Separately, Reuters has reported that defence contractors appear to be removing Anthropic tools quickly to avoid jeopardising government contracts, highlighting how fast “policy gravity” spreads across supply chains.
Why this is bigger than one company
Most enterprises evaluate AI vendors on capability, cost, security posture and integration. This story adds another dimension: policy alignment.
Reuters describes the dispute as being tied to military oversight and unresolved issues around use‑case restrictions — in other words, who gets to decide what an AI model can be used for, and where ethical lines sit in national security contexts.
That tension won’t be limited to the U.S. government. Any regulated organisation will face the same core question:
Do we want vendors to enforce hard constraints on use?
Or do we want governance to sit entirely with the customer (and their regulator)?
Either answer has consequences — and procurement needs to plan for both.
What UK and EU organisations should take from this
Even if your organisation isn’t exposed to U.S. federal procurement rules, three lessons translate directly.
1) Vendor risk is now about “political risk”, not just security risk
When a supplier can be labelled a risk and phased out on a policy timeline, your rollout plan needs contingency built in.
That doesn’t mean “don’t use new AI vendors”. It means:
Keep workflows portable
Avoid single‑vendor dependency for mission‑critical use cases
Maintain a clear exit plan (data, prompts, integrations, and change management)
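The portability point can be sketched as a thin adapter layer, so workflow code never imports a vendor SDK directly. This is an illustrative pattern, not any vendor's actual API: the provider classes below are stand-ins for real SDK calls.

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Vendor-agnostic interface: workflows depend on this, not on any SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class VendorAProvider(ChatProvider):
    # Stand-in for a real SDK or HTTP call wrapped behind the interface.
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class VendorBProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


def summarise(provider: ChatProvider, text: str) -> str:
    # Switching vendors means changing one constructor call, not this function.
    return provider.complete(f"Summarise: {text}")
```

With workflows written against the interface, a forced vendor switch is a configuration change rather than a rewrite.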
2) Multi-model strategy is becoming operational, not theoretical
Reuters’ reporting indicates agencies are switching to competitors like OpenAI and Google. The practical implication: the same organisation may run multiple models, depending on governance requirements and use case.
A functional multi-model approach typically includes:
A shared “AI profile” and prompt library
A routing layer (which model for which task)
Standard evaluation criteria (accuracy, safety, cost)
Central logging and review for risk
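The routing and logging items above can be combined in a few lines. A minimal sketch, with placeholder task and model names (none of these are real product identifiers):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-router")

# Hypothetical task-to-model mapping, driven by your evaluation criteria.
ROUTES = {
    "summarisation": "general-model",
    "code-review": "code-model",
}
FALLBACK = "general-model"


def route(task: str) -> str:
    """Choose a model for a task and log the decision for central review."""
    model = ROUTES.get(task, FALLBACK)
    log.info("task=%s routed_to=%s", task, model)
    return model
```

Keeping the mapping in one place means governance can change which model handles a task without touching the workflows themselves.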
3) Your governance model must handle sudden change
This is the uncomfortable bit: if your AI governance is built around one vendor’s policy and tooling, a forced switch becomes chaotic.
Instead, governance should be vendor‑agnostic:
Clear data classes (what’s allowed, what’s not)
Standard approval paths for high‑risk use cases
Templates for DPIAs / risk assessments
A quarterly model review (capability, controls, compliance)
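A vendor-agnostic data-class policy can be as simple as a lookup that fails closed. The classes and outcomes below are illustrative; real ones would come from your DPIA templates and approval paths.

```python
# Illustrative data classes mapped to governance outcomes.
POLICY = {
    "public": "allowed",
    "internal": "allowed-with-logging",
    "personal": "requires-approval",
    "regulated": "blocked",
}


def gate(data_class: str) -> str:
    """Map a data class to an outcome; unknown classes get the most
    restrictive treatment rather than slipping through."""
    return POLICY.get(data_class, "blocked")
```

Because the policy lives outside any vendor's tooling, it survives a forced switch unchanged.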
A practical checklist: “AI exit planning” in 60 minutes
If you want a fast internal audit, here are five questions to answer in one working session:
Where is AI embedded today? (teams, workflows, automations)
What data touches the model? (PII, confidential, regulated)
What would break if we switched vendors in 30 days?
Do we have portable assets? (prompt library, AI profile, evaluation set)
Who owns the decision and comms? (IT, security, legal, leadership)
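The five questions above map naturally onto a simple inventory record per workflow. A sketch of what that audit output might look like (field names are our own, not a standard):

```python
from dataclasses import dataclass


@dataclass
class AIUsageRecord:
    """One row of the one-hour audit."""
    workflow: str             # where AI is embedded
    data_classes: list[str]   # what data touches the model
    vendor: str
    portable_assets: bool     # prompt library / AI profile / eval set exist?
    owner: str                # who owns the decision and comms


def switch_risks(records: list[AIUsageRecord]) -> list[str]:
    # Workflows likely to break in a 30-day vendor switch.
    return [r.workflow for r in records if not r.portable_assets]
```

Even a spreadsheet with these five columns would do; the point is that the gaps become visible in one session.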
You don’t need perfection — you need readiness.
What to watch next
Reuters indicates Anthropic plans to challenge the ban, and legal experts have questioned the government’s authority to impose broad restrictions that spill into commercial contractor use. How this plays out will shape how comfortable buyers feel with relying on a single vendor for critical work.
For now, the headline lesson is simple: AI strategy isn’t just model selection. It’s operational resilience.
Next steps
If your organisation is rolling out AI at scale, consider:
Building a portable prompt and evaluation library
Defining a multi-model policy (when, why, and how)
Creating an exit plan for any AI supplier used in core workflows
Generation Digital can help you design a governed, multi-model operating model — so you can adopt AI confidently without becoming dependent on a single vendor or policy regime.
FAQs
Why did Treasury stop using Anthropic products?
Reuters reports the decision followed a White House directive to phase out Anthropic products, amid a dispute tied to military oversight and use‑case restrictions. (reuters.com)
Which Anthropic products are affected?
Treasury said it is terminating all use of Anthropic products, including the Claude platform. (reuters.com)
What are agencies switching to instead?
Reuters reports agencies are pivoting to alternatives including OpenAI and Google, and that the State Department switched its internal chatbot to OpenAI’s GPT‑4.1. (reuters.com)
What should enterprises do to reduce AI vendor risk?
Build portability (prompt libraries, AI profiles), adopt a multi‑model strategy for critical workflows, and maintain an exit plan that covers data, integrations and change management.
Does this affect UK organisations?
Not directly — but it’s a strong signal that AI procurement now includes policy and geopolitical risk, and that sudden vendor shifts are a realistic scenario.
Generation
Digital

UK Office
Generation Digital Ltd
33 Queen St,
London
EC4R 1AP
United Kingdom
Canada Office
Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canada
US Office
Generation Digital Americas Inc
77 Sands St,
Brooklyn, NY 11201,
United States
EU Office
Generation Digital Software
Elgee Building
Dundalk
A91 X2R3
Ireland
Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia
Company number: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy