Anthropic Measured AI at Work. The Results Are Not What You Think

A March 2026 Claude labor market report from Anthropic offers one of the clearest pictures yet of AI at work. The surprise: the early AI impact on jobs looks more like slower hiring, uneven adoption, and workflow redesign than a sudden wave of layoffs.

On March 5, 2026, Anthropic, the company behind Claude, published a rare kind of AI labor market study. Instead of asking what AI may do in a few years, it used Claude usage data to ask what workers are doing with AI right now. It then matched that data with the U.S. O*NET job database and earlier research on which tasks large language models can handle.

The main result is easy to miss because the headline numbers sound dramatic. AI can touch a lot of white-collar work. But Anthropic found no clear, economy-wide rise in unemployment for the most exposed workers since late 2022. Younger workers may be finding it harder to enter some exposed jobs. The instant robot-takeover headline will have to wait.

Why the Anthropic AI study matters

Anthropic calls its new metric “observed exposure.” In simple terms, it asks which tasks AI could theoretically speed up and which of them are already being done with AI in professional settings. The method gives more weight to work-related and automated use: if Claude completes a task end to end, that counts more than using the tool as a helper.

“AI can do this” and “companies are using AI for this at scale” are very different claims. Anthropic’s earlier Economic Index had already hinted at this split, with 57% of Claude use looking more like augmentation and 43% like automation.
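To make the weighting idea concrete, here is a minimal sketch of how an observed-exposure score could be computed. The weights, field names, and task shares are illustrative assumptions for this post, not Anthropic's actual methodology or values; the only point carried over from the report is that fully automated use counts more than assistive use.

```python
# Hypothetical observed-exposure sketch. Weights and data are
# illustrative only, not Anthropic's actual parameters.

AUTOMATION_WEIGHT = 1.0    # Claude performs the task end to end
AUGMENTATION_WEIGHT = 0.5  # Claude assists a human (assumed weight)

def observed_exposure(tasks):
    """Average weighted AI-usage share across an occupation's tasks.

    tasks: list of dicts with the observed share of each task done
    fully by AI ("automated_share") vs. with AI help ("assisted_share").
    """
    score = 0.0
    for t in tasks:
        score += (AUTOMATION_WEIGHT * t["automated_share"]
                  + AUGMENTATION_WEIGHT * t["assisted_share"])
    return score / len(tasks)

# Toy occupation with three tasks, one of which shows no AI use at all
occupation = [
    {"automated_share": 0.6, "assisted_share": 0.3},
    {"automated_share": 0.1, "assisted_share": 0.5},
    {"automated_share": 0.0, "assisted_share": 0.0},
]
print(round(observed_exposure(occupation), 3))  # → 0.367
```

Under this toy weighting, an occupation where AI merely assists scores well below one where AI completes tasks outright, which is exactly the distinction that separates "can do" from "does do."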

Older exposure models were mostly about possibility, not adoption. OpenAI’s 2023 framework said roughly 80% of U.S. workers could see at least 10% of their tasks affected by LLMs, and about 19% could see half their tasks affected. The ILO’s 2025 global index also found high exposure in clerical jobs, while warning that job transformation is more likely than simple job destruction. Anthropic’s March 2026 report moves the conversation from theory to real workplace behavior.

AI at work: the gap between can and does

Here is the big surprise. Anthropic found that 97% of the tasks it observed in Claude data were in categories that earlier research had already marked as theoretically feasible for LLMs.

But actual workplace coverage is still far below the technical ceiling. In computer and math jobs, for example, theoretical coverage is 94%, while Claude’s observed coverage is only 33%.

Why is the gap still so wide? Anthropic points to the usual frictions of real work: legal limits, software requirements, human review, trust, and simple habit. Yale’s Budget Lab makes a similar point: actual AI use varies a lot even among occupations with similar theoretical exposure, and missing tasks in Anthropic’s data make aggregation tricky.

AI jobs under pressure right now

Right now, the most exposed roles are digital, screen-based, and text-heavy. The top occupations by observed exposure:

  • Computer programmers - 74.5%
  • Customer service representatives - 70.1%
  • Data entry keyers - 67.1%
  • Medical record specialists - 66.7%
  • Market research analysts and marketing specialists - 64.8%

Financial and investment analysts, software QA testers, information security analysts, and computer support specialists also rank high. At the other end, about 30% of workers had zero observed coverage in Anthropic’s sample, including cooks, lifeguards, bartenders, and mechanics.

The pressure is showing up first in repeatable knowledge work: coding, customer service, data handling, analysis, documentation, and record processing. We recently covered a Matplotlib-related incident that showed a darker edge of this shift: an AI-linked coding workflow spilled over into a public attack on a maintainer after a rejected pull request. That is an outlier, not the norm.

Still, it is a useful reminder that AI can reshape office work through trust, review culture, and reputational risk as well as through speed and cost. In a broad global sense, clerical jobs still matter a lot in exposure models. But Anthropic’s real-use data says the present-day AI job market is already hitting high-skill office work in visible ways.

The AI job market: fewer hires before more layoffs?

For workers in the top quartile of AI exposure, unemployment has not risen in a clear or statistically meaningful way since ChatGPT’s release. That does not prove AI is harmless. It does suggest that, so far, the labor market effect is not showing up as mass layoffs.

Hiring looks more fragile. Anthropic found that workers aged 22 to 25 were about half a percentage point less likely per month to start jobs in highly exposed occupations after ChatGPT’s release. That is a 14% drop in the job-finding rate versus 2022, and the result is only marginally statistically significant. Still, it points in the same direction as a Stanford study using ADP payroll data, which found a 16% relative decline in employment for early-career workers in the most AI-exposed occupations.
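The two numbers in that finding pin down a third by simple arithmetic: if half a percentage point per month equals a 14% relative decline, the implied 2022 baseline job-finding rate follows directly. The baseline below is our back-of-the-envelope inference, not a figure from the report.

```python
# Reported: a 0.5 percentage-point monthly drop in job-finding for
# 22-25-year-olds in exposed occupations, framed as a 14% relative
# decline versus 2022. The implied baseline is our inference.

drop_pp = 0.5            # percentage points per month
relative_decline = 0.14  # 14% relative drop

implied_baseline = drop_pp / relative_decline
print(round(implied_baseline, 1))  # → 3.6 (% per month)
```

A monthly job-finding rate of roughly 3.6% is a useful sanity check: the headline "14% drop" sounds large, but the absolute shift behind it is half a point on a small base.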

Other signals point the same way. Federal Reserve Governor Michael Barr said in February that the literature still shows no substantial effect on aggregate employment or unemployment, but it may already be hurting some younger workers at the start of their careers.

A 2025 New York Fed survey found that 40% of service firms and 26% of manufacturers were using AI. Only 1% of service firms reported AI layoffs in the prior six months, but 12% said they had hired fewer workers because of AI, and just over a third said they were retraining workers instead.

Why white-collar workers show up first

Another surprising result: the most exposed workers are not the people many readers picture first. Anthropic found that the high-exposure group is slightly older, more likely to be female, more educated, and much better paid. On average, they earn 47% more than the unexposed group. Graduate-degree holders make up 17.4% of the most exposed group, versus 4.5% of the unexposed group.

Generative AI fits best, for now, into work built around language, information, forms, code, and decisions on a screen. If your daily tasks are digital and repeatable, AI can enter the workflow faster. If your job depends on physical presence, manual skill, or real-world responsibility, adoption moves more slowly.

What the Claude labor market report means for you

This report needs one clear warning label. Anthropic measured Claude, not the whole economy. The company itself says the framework will not capture every way AI could reshape work. So observed exposure works best as an early warning system, not a final verdict.

Even with that limit, the practical takeaway is strong. If you work in coding, customer support, research, finance, or documentation, learning to use AI well is now a career skill, not a side hobby. Watch hiring before layoffs. Watch workflows before job titles.

A reasonable thing to watch next is API-based automation, not only chatbot use. Anthropic’s later March report found that coding tasks kept moving from Claude.ai to API workflows, the top 10 Claude.ai tasks had dropped from 24% to 19% of traffic, and more experienced users had a 10% higher success rate. That suggests a realistic next phase for AI at work: the tools improve, companies learn how to use them, and the gap between “possible” and “normal” slowly closes.

Conclusion

The cleanest reading of the Anthropic AI study is this: AI and employment are changing, but the change is uneven, slower than the hype, and easier to spot in entry-level hiring than in unemployment data. That is less dramatic than the usual future-of-work story, and more useful for readers who want to know what to watch next.
