Jobs Most Exposed to AI: What Anthropic’s Tracker Means
Anthropic
Mar 9, 2026

Free AI at Work Playbook for managers using ChatGPT, Claude and Gemini.
➔ Download the Playbook
“Jobs most exposed to AI” are roles where a high share of day-to-day tasks is suitable for large language models and where those models can be observed being used for work in practice. Anthropic calls this observed exposure — a way to compare what AI could do with what people actually use it for, giving an early signal of where disruption may arrive first.
If you’re trying to make sense of AI and jobs, you’ve probably seen two extremes.
On one side: “AI will replace everyone.” On the other: “AI is just another productivity tool.”
Neither is helpful when you’re responsible for hiring plans, capability building, or running a service team.
In March 2026, Anthropic published a new way to track where disruption might show up first — and Business Insider highlighted the result in a simple graph. The key idea is a metric Anthropic calls observed exposure: it compares what large language models are theoretically capable of doing with what workers are actually using Claude for at work.
That nuance matters because it gives us a more realistic answer to the question leaders keep asking:
Which jobs are exposed to AI first, and what should we do about it?
The headline: exposure is real — but usage still lags capability
Anthropic’s economists argue that actual AI usage hasn’t come close to tapping the full potential of large language models. That sounds reassuring, but it’s also a warning.
When a technology is capable of more than we are currently using it for, change usually doesn’t arrive in a straight line. It arrives in steps: integration into existing tools, workflow redesign, then operating-model shifts.
If you’re a UK/EU employer, the right takeaway isn’t “panic” or “ignore it”. It’s:
Expect uneven impact by role and seniority
Prepare for entry-level change first
Treat knowledge and workflow design as your leverage points
Which occupations look most exposed (in Anthropic’s early data)
In the Business Insider summary of Anthropic’s findings, the five most exposed occupations were:
Computer programmers
Customer service representatives
Data entry keyers
Medical record specialists
Market research analysts and marketing specialists
The pattern is not “white collar vs blue collar”. It’s simpler:
Roles are exposed when they contain repeatable, language-heavy tasks that can be routed through a model, checked quickly, and fed back into a system.
That’s why coding appears so prominently. It’s also why administrative work, service work, and analysis work show up.
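To make the capability-versus-usage distinction concrete, here is a toy illustration of an observed-exposure-style comparison. This is not Anthropic's actual methodology — the task names and the simple share-of-tasks arithmetic are illustrative assumptions only.

```python
def observed_exposure(feasible_tasks: set[str], used_tasks: set[str],
                      all_tasks: set[str]) -> dict:
    """Toy comparison (not Anthropic's method): the share of a role's tasks
    an LLM could plausibly do vs the share actually routed through one."""
    capability = len(feasible_tasks) / len(all_tasks)
    usage = len(used_tasks & feasible_tasks) / len(all_tasks)
    return {"capability": capability, "observed_usage": usage,
            "gap": capability - usage}

# Hypothetical task bundle for a service role.
role_tasks = {"draft replies", "summarise tickets", "approve refunds",
              "update records"}
feasible = {"draft replies", "summarise tickets", "update records"}
in_use = {"draft replies"}
print(observed_exposure(feasible, in_use, role_tasks))
# capability 0.75, observed_usage 0.25, gap 0.5
```

The gap between capability and observed usage is the "usage lags capability" warning in numerical form: a large gap suggests change is still to come.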
What “observed exposure” gets right (and where it can mislead)
What it gets right
1) It focuses on tasks, not job titles
Most job loss debates are too coarse. A job isn’t a single activity — it’s a bundle of tasks. AI rarely replaces the entire bundle at once.
2) It separates capability from adoption
“AI can do X” is different from “AI is being used to do X”. Adoption depends on tools, approvals, integration, and risk appetite.
3) It offers an ‘early warning’ signal
If a role shows high observed exposure today, it’s a good candidate for workflow redesign and skill shifts over the next 6–24 months.
Where it can mislead
1) Exposure isn’t the same as displacement
High exposure often means higher productivity first — and that can create different outcomes: fewer hires, different role mixes, or faster output.
2) Some tasks are blocked by reality
Even if a model can write a legal argument, you still have courts, process, accountability, and professional standards. “Technically possible” doesn’t mean “permitted” or “wise”.
3) Context matters
A public-sector call centre, a bank, and a university support desk all have different constraints. The same ‘exposed’ role behaves differently depending on regulation and risk.
The quiet shift: entry-level work is the pressure point
One of the most important lines in the Business Insider piece is that Anthropic found no significant change in unemployment for workers in the most exposed occupations — but there is “suggestive evidence” that hiring for young workers in those fields has slowed.
If you’re seeing a tougher graduate market in the UK and Europe, don’t attribute everything to AI. But do recognise the likely mechanism:
AI tools increase the output of experienced workers
Organisations capture efficiency by reducing junior headcount growth
The ‘apprenticeship layer’ thins out
That creates a long-term risk: if we remove entry-level learning opportunities, we reduce the pipeline of future expertise.
This is why the best workforce strategies are not “replace juniors with AI”. They are “redesign junior roles so they create value and build capability.”
A practical playbook for UK/EU employers
Here’s a way to respond that’s responsible, measurable, and doesn’t require a crystal ball.
Step 1: Map tasks in your ‘exposed’ roles
Pick 3–5 roles you suspect are exposed (often: service, admin, marketing ops, analysts, junior engineers).
For each role, list:
tasks that are repeated daily/weekly
tasks that are language-heavy (writing, summarising, searching)
tasks where errors are easy to catch
tasks that touch sensitive data or regulated decisions
That becomes your prioritised “AI task backlog”.
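The four criteria above can be sketched as a simple scoring pass that orders the backlog. The task names, fields, and one-point-per-criterion weighting are illustrative assumptions, not a standard method:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    repeated: bool        # done daily or weekly
    language_heavy: bool  # writing, summarising, searching
    easy_to_verify: bool  # errors are cheap to catch
    sensitive: bool       # touches regulated data or decisions

def backlog_priority(task: Task) -> int:
    """One point per favourable criterion; sensitive tasks sort to the
    back regardless of how repeatable or language-heavy they are."""
    if task.sensitive:
        return -1
    return int(task.repeated) + int(task.language_heavy) + int(task.easy_to_verify)

tasks = [
    Task("Approve refunds", True, False, False, True),
    Task("Summarise weekly tickets", True, True, True, False),
    Task("Draft customer replies", True, True, False, False),
]
backlog = sorted(tasks, key=backlog_priority, reverse=True)
print([t.name for t in backlog])
# ['Summarise weekly tickets', 'Draft customer replies', 'Approve refunds']
```

Even a spreadsheet version of this scoring is enough; the point is that the backlog order comes from the task properties, not from gut feel.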
Step 2: Fix your knowledge foundation
AI performance is limited by what your organisation can retrieve.
If policies, templates, playbooks, and ‘the right answer’ live across drives, inboxes, and tribal memory, AI will accelerate confusion.
Your leverage is:
a single source of truth
clear ownership and review cadence
permissions aligned to risk
This is the unglamorous work that makes AI genuinely useful.
Step 3: Embed AI into workflows — not as a side tool
Start with use cases that are easy to validate:
meeting notes and action capture
drafting first versions of comms and policies
summarising customer tickets and common issues
code scaffolding and test generation
extracting key fields from documents (where permitted)
Make adoption easy by building templates and guardrails into the place people already work.
Step 4: Put governance in place that enables speed
You want a system where people can move quickly without creating risk.
Minimum viable governance includes:
data sensitivity tiers (what must not go into prompts)
approved tools and approved contexts
guidance on attribution and verification
publishing controls for “official” content
a feedback loop for AI mistakes
In the UK/EU, align with your existing compliance posture and be explicit about where human sign-off is required.
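Data sensitivity tiers can be enforced with a very small guardrail check before anything reaches a prompt. The tier names and the cut-off below are hypothetical — substitute your organisation's own classification scheme:

```python
# Hypothetical tier scheme for illustration; use your own classifications.
TIERS = {"public": 0, "internal": 1, "confidential": 2, "regulated": 3}
MAX_PROMPT_TIER = 1  # nothing above 'internal' goes into prompts

def can_include_in_prompt(data_tier: str) -> bool:
    """Minimal guardrail: allow prompt content only at or below the
    approved sensitivity tier."""
    return TIERS[data_tier] <= MAX_PROMPT_TIER

print(can_include_in_prompt("internal"))   # True
print(can_include_in_prompt("regulated"))  # False
```

In practice this check would sit in whatever gateway or template layer routes prompts to approved tools, so the rule is applied consistently rather than left to individual judgement.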
Step 5: Measure the right outcomes
Avoid vanity metrics like “prompt volume”.
Track:
time-to-first-draft
first-contact resolution in service teams
reduction in repeat enquiries
cycle times (marketing production, reporting, code review)
hiring mix changes (especially at junior levels)
These are the metrics that tell you whether AI is improving work — and where you may be quietly reshaping your talent pipeline.
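A metric like time-to-first-draft needs nothing more than timestamps you probably already have. A minimal sketch, assuming request and first-draft times are logged (the data below is made up):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

def time_to_first_draft(requested: str, first_draft: str) -> float:
    """Hours between a request landing and the first draft appearing."""
    delta = datetime.strptime(first_draft, FMT) - datetime.strptime(requested, FMT)
    return delta.total_seconds() / 3600

# Illustrative (request, first draft) timestamp pairs.
drafts = [
    ("2026-03-02 09:00", "2026-03-02 13:00"),  # 4 hours
    ("2026-03-03 09:00", "2026-03-03 11:00"),  # 2 hours
]
avg = sum(time_to_first_draft(a, b) for a, b in drafts) / len(drafts)
print(f"avg time-to-first-draft: {avg:.1f}h")  # 3.0h
```

Track the same figure before and after an AI rollout, per team, and the comparison tells you far more than raw prompt counts ever will.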
What this means for individuals (without the doom)
If your role is on a “most exposed” list, it doesn’t mean you’re about to be replaced. It means parts of your work are likely to be redesigned.
The safest career moves tend to be:
build skill in specifying work (briefing, prompting, acceptance criteria)
learn verification (checking sources, testing outputs, evaluating quality)
become the person who knows the process end-to-end (not just one step)
develop judgement in ambiguous situations (the part AI struggles with)
Summary: use exposure data as a planning tool, not a prophecy
Anthropic’s observed exposure approach is useful because it tells us where AI is actually being used at work — and where task redesign might happen first.
The urgent challenge for employers isn’t mass redundancy. It’s role redesign and capability building, especially for entry-level pathways.
If you get that right, you don’t just “protect jobs”. You build a workforce that can use AI responsibly, improve service quality, and move faster without losing accountability.
Next steps
Audit your top 3 exposed roles and map tasks.
Consolidate knowledge into a source of truth with clear ownership.
Pilot 1–2 low-risk workflows with measurable outcomes.
Create an entry-level strategy: what juniors do in an AI-enabled team.
If you want help designing the operating model — from knowledge foundations to governed rollout — Generation Digital can support architecture, enablement, and adoption.
FAQ
Q1. What are the jobs most exposed to AI?
Jobs are most exposed when many day-to-day tasks are language-heavy, repeatable, and feasible for AI — and we can observe AI being used for those tasks in practice.
Q2. Does ‘exposed’ mean my job will be replaced?
Not necessarily. Exposure usually means parts of the role will change first: drafts, summaries, routine analysis, and support tasks. Displacement depends on adoption, regulation, and workflow redesign.
Q3. Why are entry-level roles more vulnerable?
AI often boosts the productivity of experienced workers, so organisations may hire fewer juniors unless roles are redesigned to create value and learning opportunities.
Q4. What should employers measure to understand impact?
Cycle times, quality, service resolution rates, reduction in repeat enquiries, and changes in hiring mix by seniority are more meaningful than ‘AI usage’ metrics.
Q5. How do we reduce risk when rolling out AI?
Use governed tools, define data rules, make knowledge easy to retrieve, and build verification habits (humans remain accountable for decisions and published guidance).
Get weekly AI news and advice delivered to your inbox
By subscribing you consent to Generation Digital storing and processing your details in line with our privacy policy. You can read the full policy at gend.co/privacy.
Generation Digital

UK Office
Generation Digital Ltd
33 Queen St,
London
EC4R 1AP
United Kingdom
Canada Office
Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canada
USA Office
Generation Digital Americas Inc
77 Sands St,
Brooklyn, NY 11201,
United States
EU Office
Generation Digital Software
Elgee Building
Dundalk
A91 X2R3
Ireland
Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia
Company No: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy