US Treasury Ends Anthropic Use: Lessons for AI Buyers

Artificial Intelligence

Anthropic

Mar 4, 2026

Uncertain about how to get started with AI? Evaluate your readiness, potential risks, and key priorities in less than an hour.

➔ Download Our Free AI Preparedness Pack

The U.S. Treasury says it is terminating all use of Anthropic products, including Claude, following a White House directive that has prompted other agencies to phase out Anthropic tools. For organisations using AI at scale, the key lesson is procurement resilience: maintain exit plans, portable workflows, and governance that can survive sudden policy or vendor shifts. (reuters.com)

The U.S. Treasury has announced it will terminate all use of Anthropic products, including Claude. Treasury Secretary Scott Bessent said the move was made at the direction of the President. (reuters.com)

On the surface, this is a single vendor decision. In reality, it’s a stress test for every organisation that’s embedding AI into day‑to‑day operations: what happens when policy, ethics, or supply‑chain positioning changes faster than your rollout?

In this post, we’ll explain what Reuters reported, why it’s unusual, and what teams in the UK and Europe should take from it — even if you never planned to use Claude.

What happened: Treasury ends Anthropic use, others follow

Reuters reports that several U.S. agencies — including State, Treasury and Health and Human Services — are stopping use of Anthropic’s AI products in response to a directive from President Donald Trump. The same reporting says agencies are moving to alternatives, including OpenAI and Google, and that the State Department switched its internal chatbot to OpenAI’s GPT‑4.1.

The decision follows a previous Reuters report that the President directed federal agencies to cease work with Anthropic, with the Pentagon calling the firm a “supply‑chain risk” and setting a six‑month phase‑out.

Separately, Reuters has reported that defence contractors appear to be removing Anthropic tools quickly to avoid jeopardising government contracts, highlighting how fast “policy gravity” spreads across supply chains.

Why this is bigger than one company

Most enterprises evaluate AI vendors on capability, cost, security posture and integration. This story adds another dimension: policy alignment.

Reuters describes the dispute as being tied to military oversight and unresolved issues around use‑case restrictions — in other words, who gets to decide what an AI model can be used for, and where ethical lines sit in national security contexts.

That tension won’t be limited to the U.S. government. Any regulated organisation will face the same core question:

  • Do we want vendors to enforce hard constraints on use?

  • Or do we want governance to sit entirely with the customer (and their regulator)?

Either answer has consequences — and procurement needs to plan for both.

What UK and EU organisations should take from this

Even if your organisation isn’t exposed to U.S. federal procurement rules, three lessons translate directly.

1) Vendor risk is now about “political risk”, not just security risk

When a supplier can be labelled a risk and phased out on a policy timeline, your rollout plan needs contingency built in.

That doesn’t mean “don’t use new AI vendors”. It means:

  • Keep workflows portable

  • Avoid single‑vendor dependency for mission‑critical use cases

  • Maintain a clear exit plan (data, prompts, integrations, and change management)
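One way to make the "portable workflows" point concrete is to keep prompt assets as plain, vendor-neutral data rather than inside one vendor's tooling. The sketch below is purely illustrative: the field names and values are assumptions for this example, not a standard format.

```python
import json

# Illustrative sketch: a prompt library stored as vendor-neutral data,
# so it can move with you if you switch AI suppliers.
# All field names and values here are assumptions, not a standard.

prompt_library = [
    {
        "id": "weekly-report-summary",
        "task": "Summarise the attached weekly report in five bullet points.",
        "settings": {"temperature": 0.2},  # generic settings, not vendor-specific
        "owner": "ops-team",
    },
]

# Serialising to plain JSON keeps the asset portable across vendors.
serialised = json.dumps(prompt_library, indent=2)
restored = json.loads(serialised)
```

The point is not the format itself but the ownership: if the prompts, settings, and owners live in your repository rather than a vendor console, a forced switch is a migration task, not a rebuild.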

2) Multi-model strategy is becoming operational, not theoretical

Reuters’ reporting indicates agencies are switching to competitors like OpenAI and Google. The practical implication: the same organisation may run multiple models, depending on governance requirements and use case.

A functional multi-model approach typically includes:

  • A shared “AI profile” and prompt library

  • A routing layer (which model for which task)

  • Standard evaluation criteria (accuracy, safety, cost)

  • Central logging and review for risk
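The routing and logging bullets above can be sketched in a few lines of code. Everything here is illustrative (model names, task names, and sensitivity tiers are invented for the example); the shape, not the detail, is the point.

```python
from dataclasses import dataclass

# Illustrative sketch of a vendor-agnostic routing layer with central logging.
# Model names, tasks, and sensitivity tiers are assumptions, not real products.

@dataclass
class Route:
    model: str            # which model handles this task
    max_data_class: str   # highest data sensitivity allowed for this route

ROUTES: dict[str, Route] = {
    "summarise_public": Route(model="vendor-a-large", max_data_class="public"),
    "draft_internal":   Route(model="vendor-b-medium", max_data_class="internal"),
}

SENSITIVITY = ["public", "internal"]  # ordered least to most sensitive
AUDIT_LOG: list[dict] = []            # central log for risk review

def route_request(task: str, data_class: str) -> str:
    """Pick a model for a task, refusing if the data is too sensitive."""
    route = ROUTES.get(task)
    if route is None:
        raise ValueError(f"No route defined for task: {task}")
    if SENSITIVITY.index(data_class) > SENSITIVITY.index(route.max_data_class):
        raise PermissionError(f"{data_class} data not allowed for {task}")
    AUDIT_LOG.append({"task": task, "model": route.model, "data_class": data_class})
    return route.model
```

Because the routing table is plain data, swapping a vendor out becomes a one-line change to `ROUTES`, and the audit log gives risk teams a single place to review what ran where.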

3) Your governance model must handle sudden change

This is the uncomfortable bit: if your AI governance is built around one vendor’s policy and tooling, a forced switch becomes chaotic.

Instead, governance should be vendor‑agnostic:

  • Clear data classes (what’s allowed, what’s not)

  • Standard approval paths for high‑risk use cases

  • Templates for DPIAs / risk assessments

  • A quarterly model review (capability, controls, compliance)
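Vendor-agnostic governance can also be expressed as data your organisation owns. The sketch below is one possible shape, with class names, risk tiers, and approval roles invented for the example.

```python
# Illustrative sketch: governance rules as plain data, independent of any vendor.
# Data classes, use cases, and approval roles are assumptions for this example.

DATA_CLASSES = {
    "public":       {"allowed_in_ai": True,  "approval": None},
    "internal":     {"allowed_in_ai": True,  "approval": "team-lead"},
    "confidential": {"allowed_in_ai": False, "approval": None},
}

HIGH_RISK_USE_CASES = {"automated-decisions", "customer-facing-output"}

def required_approvals(data_class: str, use_case: str) -> list[str]:
    """List the approval steps to clear before an AI tool may be used."""
    rules = DATA_CLASSES[data_class]
    steps = []
    if not rules["allowed_in_ai"]:
        steps.append("security-review")   # blocked classes always escalate
    elif rules["approval"]:
        steps.append(rules["approval"])
    if use_case in HIGH_RISK_USE_CASES:
        steps.append("dpia")              # high-risk use triggers a DPIA template
    return steps
```

Because the rules live outside any one vendor's admin console, a forced vendor switch changes which model sits behind the workflow, not which approvals govern it.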

A practical checklist: “AI exit planning” in 60 minutes

If you want a fast internal audit, here are five questions to answer in one working session:

  1. Where is AI embedded today? (teams, workflows, automations)

  2. What data touches the model? (PII, confidential, regulated)

  3. What would break if we switched vendors in 30 days?

  4. Do we have portable assets? (prompt library, AI profile, evaluation set)

  5. Who owns the decision and comms? (IT, security, legal, leadership)

You don’t need perfection — you need readiness.

What to watch next

Reuters indicates Anthropic plans to challenge the ban, and legal experts have questioned the government’s authority to impose broad restrictions that spill into commercial contractor use. How this plays out will shape how comfortable buyers feel with relying on a single vendor for critical work.

For now, the headline lesson is simple: AI strategy isn’t just model selection. It’s operational resilience.

Next steps

If your organisation is rolling out AI at scale, consider:

  • Building a portable prompt and evaluation library

  • Defining a multi-model policy (when, why, and how)

  • Creating an exit plan for any AI supplier used in core workflows

Generation Digital can help you design a governed, multi-model operating model — so you can adopt AI confidently without becoming dependent on a single vendor or policy regime.

FAQs

Why did Treasury stop using Anthropic products?
Reuters reports the decision followed a White House directive to phase out Anthropic products, amid a dispute tied to military oversight and use‑case restrictions. (reuters.com)

Which Anthropic products are affected?
Treasury said it is terminating all use of Anthropic products, including the Claude platform. (reuters.com)

What are agencies switching to instead?
Reuters reports agencies are pivoting to alternatives including OpenAI and Google, and that the State Department switched its internal chatbot to OpenAI’s GPT‑4.1. (reuters.com)

What should enterprises do to reduce AI vendor risk?
Build portability (prompt libraries, AI profiles), adopt a multi‑model strategy for critical workflows, and maintain an exit plan that covers data, integrations and change management.

Does this affect UK organisations?
Not directly — but it’s a strong signal that AI procurement now includes policy and geopolitical risk, and that sudden vendor shifts are a realistic scenario.


Generation
Digital

Canadian Office
33 Queen St,
Toronto
M5H 2N2
Canada

Canadian Office
1 University Ave,
Toronto,
ON M5J 1T1,
Canada

NAMER Office
77 Sands St,
Brooklyn,
NY 11201,
USA

Head Office
Charlemont St, Saint Kevin's, Dublin,
D02 VN88,
Ireland

Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia

Business Number: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy
