Is AI really going to take your job?

The latest evidence does not point to a labour market being rapidly dismantled by AI but to something more uneven: a widening gap between technical capability and workplace reality, slower entry into some white-collar roles, and a growing need to redesign jobs for a human-machine workplace rather than simply speculate about who will be replaced.
Image: tall career ladder with AI on the bottom rungs and a young man trying to climb up

What HR leaders should know

  • AI capability is still running ahead of real workplace adoption
  • Current evidence does not show a clear rise in unemployment among highly exposed workers
  • There are early signs that hiring has slowed in AI-exposed roles, especially for workers aged 22 to 25
  • Some employers are now building AI usage into performance expectations and hiring criteria
  • Exposure does not equal vulnerability because workers differ in their adaptive capacity
  • Job redesign is becoming a more useful lens than simple job replacement

The question of whether artificial intelligence will take jobs has become a shorthand for a much broader unease about the future of work. It is emotive, highly clickable and easy to grasp. It is also becoming less helpful as a way of understanding what is actually happening.

This question has sharpened again in recent weeks as companies continue to restructure around AI investment, with Meta confirming fresh layoffs across several teams, while other business leaders and commentators have renewed warnings about white-collar disruption. Yet the strongest new evidence suggests that the real changes are not showing up first through dramatic spikes in unemployment. They are appearing in quieter, more structural ways: in who gets hired, in how work is being redistributed inside roles, in what employers now expect from candidates and in how organisations are trying to extract value from technologies that remain far less embedded in practice than the hype suggests.

Work is rarely transformed in one clean sweep. More often, change begins at the edges of organisations and professions before it becomes visible in headline labour market data. HR and people leaders should pay close attention to that distinction.

A future fit HR function cannot afford to rely on a crude divide between jobs that look safe and jobs that look exposed. It needs to understand where AI is materially changing the economics of work, where it is mainly altering managerial expectations and where it is exposing weaknesses in talent pipelines, job design and organisational capability.

That uncertainty is visible in UK business sentiment too. Software company monday.com’s latest UK research found that 78% of directors do not expect AI to reduce employee headcount next year, while nearly a third expect to hire more because of new AI-related specialist roles. That does not settle the question but it does underline how far we still are from a single, settled story about AI and employment.

What does the latest evidence say about AI and jobs?

In a new paper, Labor market impacts of AI: A new measure and early evidence, AI company Anthropic is more restrained than much of the public commentary, though the report still needs to be read in context. It draws on the company’s own usage data, so it is informative without being fully detached from the perspective of a business with a stake in how AI’s economic effects are understood. Even so, it is a useful contribution because it helps show where the earliest signs of strain may be emerging and where researchers should keep looking.

Rather than focusing only on what large language models might be able to do in theory, the researchers introduce a measure of “observed exposure” that compares theoretical capability with what people are using Claude to do in real workplace settings. Their headline finding is that actual usage remains well below what the technology could theoretically support, meaning AI is still “far from reaching its theoretical capability” in the labour market.

Image: mapping of individual jobs taken by AI versus potential

This is important because much of the public debate still treats capability and deployment as though they were the same thing. Overestimating the speed of technological change can distort both public anxiety and business decision-making.

According to Anthropic, in occupational categories such as computer and maths, business and finance and legal work, the blue area showing theoretical coverage stretches far beyond the red area showing observed usage. The implication is that there is still a considerable distance between what models may be able to do and what organisations have successfully translated into live workflows.

The labour market findings are similarly cautious. Anthropic reports no systematic increase in unemployment for highly exposed workers since late 2022. The top 25% of workers most exposed to AI automation have similar trends in unemployment rates to workers with no exposure at all. In other words, if the question is whether the data currently shows a clean, measurable wave of AI-driven unemployment, the answer is no.

Yet that is only half the story. The same report also identifies suggestive evidence that hiring has slowed for younger workers entering highly exposed occupations. Specifically, it finds a 14% drop in the job finding rate for workers aged 22 to 25 entering these roles compared with the 2022 baseline, while also stressing that the estimate is only just statistically significant and may have alternative explanations. 

This is a more subtle signal but also a more revealing one. It suggests that the earliest effects of AI may be appearing at the point of entry rather than through a wave of people being pushed out.

“The pressure is already showing up, just not where people expected” – Conor Grennan

Conor Grennan, CEO and Founder of AI consulting company AI Mindset and former chief AI architect at NYU Stern School of Business, describes this as a “slow drip” rather than a sudden rupture. This aligns both with the data and with what many employers appear to be doing in practice: holding headcount more tightly, expecting more from smaller teams and reassessing which kinds of junior work still need to be done by humans.

Where is the pressure showing up first?

If the earliest labour market signal is weaker hiring rather than sharper unemployment, the implications are significant. A profession can appear stable from the outside while becoming materially harder to enter. Existing employees remain in place but the number of early-career opportunities narrows. The ladder is still there, though fewer people are being allowed onto it.

There are signs of a similar pattern in the UK. Survey data from Helm, Britain’s largest entrepreneur network, found that 58% of scale-up founders are already delaying or reducing new hires because of increased AI use, while a third expect AI adoption to lead to job cuts within the next 12 months. More strikingly, 93% said they do not believe the UK workforce is adequately prepared for widespread AI adoption. These are not labour market statistics but they are a useful indication of business sentiment in a part of the economy where growth, hiring and experimentation tend to move quickly.

This should concern anyone thinking seriously about long-term workforce health. Entry-level roles have never mattered simply because they fill immediate capacity gaps. They are where people learn judgement, context, professional norms and the awkward realities of organisational life. If employers strip out too much of that work before they have built credible alternative development routes, they risk weakening the future talent base of their own functions and sectors.

The question, then, is whether organisations are reducing hiring because AI is already removing parts of the work or because leaders believe it soon will and are making staffing decisions on that basis. A related concern is whether some of those decisions are being made against business cases built around theoretical AI coverage rather than proven organisational gains. Those questions go to the heart of whether AI is changing the labour market through realised productivity or through expectation, anticipation and financial pressure.

What happens when employers start measuring AI use?

A Wall Street Journal report this year suggested that some major technology companies are moving beyond encouraging AI use and are starting to formalise it, track it and, in some cases, fold it into performance systems. Candidates, too, are increasingly being assessed for AI fluency.

This shows how quickly AI is being absorbed into the logic of employability and performance, even while the broader evidence on labour market effects remains mixed. Employers may not yet be able to prove that AI has transformed work, but many are already behaving as though it has.

Technology consultant and author of Unicorns, Hype and Bubbles Jeffrey Funk has questioned whether this approach confuses visible activity with genuine value. “If AI is so good, why do they have to force employees to use AI?” he asked in response to the story.

His argument is that measuring AI use at the level of the individual may say little about whether the work is better, more reliable or more useful.

For HR leaders the warning is clear. If organisations reward visible tool use without a mature framework for quality, judgement and downstream impact, they may end up encouraging gaming, superficial compliance and a flood of work that looks efficient on the surface but creates more friction elsewhere.

Which workers are most exposed and which are most vulnerable?

Earlier this year nonprofit public policy organisation Brookings published a paper looking not simply at who is exposed to AI but at who is likely to cope if disruption occurs. That is a more useful lens than exposure alone.

Its analysis finds that, of the 37.1 million US workers in the top quartile of occupational AI exposure, 26.5 million also have above-median adaptive capacity. In practice this means many workers in highly exposed roles are still relatively well placed to navigate a job transition if they need to. 

At the same time Brookings identifies 6.1 million workers who face both high exposure and low adaptive capacity. These workers are concentrated mainly in clerical and administrative roles, and 86% are women. That is a far more useful insight than vague warnings about white-collar disruption because it tells us something about who may struggle, why they may struggle and where targeted support may be needed most.

The report shows that exposure alone does not determine outcomes. Savings, age, skill transferability and geography all matter. A worker in a large labour market with portable skills and some financial buffer is not in the same position as someone in a narrower market with fewer alternatives. It is not only that different jobs face different levels of exposure. Workers inside those jobs also have very different capacities to cope with change.

Is the real issue job loss or job redesign?

A useful way to understand this comes from recent research by Luis Garicano at the London School of Economics, with Jin Li and Yanhui Wu at the University of Hong Kong. Their argument is that labour markets price jobs, not tasks.

That sounds obvious but much of the AI and jobs debate still misses the distinction. It often assumes that if AI can perform more tasks in an occupation that occupation will inevitably shrink. Yet jobs are not tidy lists of activities. They are bundles of tasks, relationships, judgement and accountability.

Garicano uses radiology to make the point. Reading scans is a task. Being a radiologist is a job. Radiologists also handle edge cases, speak to clinicians, train junior staff and sign off decisions that other people act on. If AI improves the image-reading part that does not automatically mean it has replaced the profession. The real question is whether that task can be pulled out of the wider role without damaging the value of the role itself.

The researchers describe this distinction as the difference between “weak bundles” and “strong bundles”. In weak bundles, tasks can be split apart more easily, so the human role may narrow and employment pressure can grow. In strong bundles, separating the tasks destroys too much value, which means AI is more likely to support the role than remove it.

This brings the conversation much closer to job reinvention. Which jobs can be unpicked without losing too much of what makes them effective? Some roles will prove more resilient because their value lies in the combination of judgement, communication, context and responsibility. Others may be redesigned so that the human role becomes narrower. Neither outcome fits neatly into slogans about jobs being ‘taken’ but both matter to workforce design.

Checklist: Questions to ask of every AI layoff claim

  1. Attribution: Is AI the primary cause in filings or one factor among others?
  2. Scope & timing: Are these immediate cuts or multi-year projections?
  3. Process change: Have workflows, data and governance been redesigned?
  4. Redeployment: Were new roles created in AI oversight, evaluation or data quality?
  5. Job quality: What’s the impact on contracts, training and protections?

What should future fit HR do with this now?

The AI jobs debate needs to become more precise. The best available evidence does not support a simple story of immediate, large-scale AI-driven unemployment. At the same time it does point to meaningful pressure in exposed roles, especially at the point of entry, and to a widening gap between technical promise and organisational readiness.

This leaves HR with a more demanding task than either the enthusiasts or the alarmists tend to admit. Leaders need to understand where AI is being used, where it is being enforced, where it is genuinely adding value and where it is merely intensifying performance pressure without improving outcomes.

They also need to think much more carefully about junior pipelines, because reduced hiring into exposed roles may create talent shortages and developmental gaps later on. They need to identify groups with lower adaptive capacity rather than assume that everyone in knowledge work has the same ability to absorb transition. And they need to revisit job design with more discipline, asking which roles are strongly bundled, which are weakly bundled and how work can be redesigned without stripping out the conditions in which people develop judgement.

This is why workforce planning can no longer be treated as a periodic exercise. As Thea Fineren, chief people officer at tech company Advania, puts it, HR leaders need to “anticipate transformation before it forces your hand” by mapping roles exposed to automation, investing in retraining and creating credible pathways for career mobility.

This is where a human-machine workplace becomes a serious idea rather than a fashionable phrase. AI cannot simply be dropped into existing workflows with the hope that value will follow. HR leaders need to rethink work so that human strengths, machine capability and organisational value are aligned more intelligently.

FAQ

Is AI causing mass job losses right now?

The latest Anthropic data does not show a systematic rise in unemployment for highly exposed workers since late 2022. That said, the absence of a clear unemployment effect does not mean labour market change is absent.

Where is the earliest sign of labour market impact?

The clearest early signal in the Anthropic research is slower hiring into exposed occupations for workers aged 22 to 25. The report estimates a 14% decline in the job finding rate for this group relative to the 2022 baseline. 

Which workers may be most vulnerable to AI disruption?

The Brookings Institution suggests the highest-risk group is not simply the most exposed group. It is workers who combine high exposure with low adaptive capacity, especially those in clerical and administrative roles with less financial buffer, fewer local opportunities and narrower transferable skills.

What does this mean for HR leaders?

HR leaders need to focus on job reinvention, entry-level pathways, capability development and the quality of human-machine collaboration. The challenge is not only whether AI can do tasks. It is how organisations redesign work around that reality.

About the author

Sian Harrington

Business journalist and editor specialising in HR, leadership and the future of work. Co-founder and editorial director The People Space

