Siân Harrington offers a vital AI reality check for HR leaders, exploring how generative AI is reshaping work, trust and leadership – and why HR must lead the way
The AI wave has hit the workplace like a tidal surge, reshaping how we think, work and lead. For those of us in the HR and people space, the stakes are high. At The People Space we've long championed technology that enables more human-centric work. But with generative AI rapidly moving from pilot to production, now is the moment to pause and ask: is AI really making work better? Or are we repeating the mistakes of the dot-com bubble?
For years I’ve described AI as a tool. Something we can use to enhance productivity, creativity and employee experience. But that framing is starting to feel inadequate. Increasingly, I’m drawn to a more profound interpretation, one that aligns with Stephen Klein’s recent call to action. Klein, CEO of Curiouser.AI, urges CEOs to stop treating AI as a project and instead see it as a new layer of organisational infrastructure. “GenAI is not a tool,” he writes. “It’s a new layer of organisational infrastructure, reshaping how your company thinks and builds trust.”
He’s right. And if that’s true, it changes everything for HR.
Beyond the pilot phase: Where HR needs to lead
The reality is that generative AI is now being embedded across organisations, often without adequate involvement from the people function. Deloitte’s State of Generative AI in the Enterprise survey last year revealed that 79% of respondents expected GenAI to drive substantial organisational transformation within three years. But only 25% of leaders believed their organisations were highly prepared to address governance and risk issues related to GenAI adoption.
Its end-of-year report found that the majority of respondents (55%–70%, depending on the challenge) believe their organisations will need at least 12 months to resolve adoption challenges such as governance, training, talent, building trust and addressing data issues.
HR should be at the centre of this transformation, not as an implementer of AI tools but as a strategic partner shaping how AI is used to build capability, preserve trust and reimagine work. If AI is the new operating layer then HR must define the principles on which that layer is built.
The Workday lawsuit: A warning shot
One only needs to look at the class-action lawsuit against Workday to understand what’s at stake. Derek Mobley, a Black man over 40 who self-identifies with anxiety and depression, alleges that Workday’s AI-powered screening tools led to more than 100 job rejections, violating civil rights and disability laws. On 16 May 2025 the case was greenlit as a nationwide class action in the US. It could set a precedent for how liability is assigned when AI systems result in discriminatory hiring outcomes.
Mobley alleges he was repeatedly rejected for roles, often without being invited to interview, despite holding nearly a decade of experience across finance, IT and customer service. In one instance detailed in court documents he applied for a position at 12:55 am and received a rejection less than an hour later, at 1:50 am.
Whether or not the claims are upheld, the message is clear: AI systems in HR are not neutral. They carry the biases of their training data and the blind spots of their creators. Deploying them without scrutiny risks real harm to people and real consequences for employers.
HR leaders have a responsibility to ensure that any technology used in talent processes is fair, transparent and explainable. That includes demanding audits of AI systems, creating cross-functional ethics panels and putting employee safeguards in place.
Moderna’s merger: A structural signal
Contrast this with Moderna, the biotech company best known for its COVID-19 vaccine. Last year Moderna made headlines by merging its HR and IT functions into a new division led by chief people and digital officer Tracey Franklin. The rationale? AI was blurring the lines between digital infrastructure and people operations.
Moderna is now developing thousands of custom AI agents in partnership with OpenAI, including those supporting learning and development, employee communications and workflow automation.
According to Franklin, Moderna is redesigning teams across the business by asking a fundamental question: what work is best done by people and what can be automated through technology? The company’s partnership with OpenAI is central to this process, enabling both augmentation and automation at scale. Roles are being created, eliminated and reimagined as a result. It’s not just about improving efficiency but about rethinking the very architecture of work. By bringing HR and IT together the company is ensuring that its AI strategy is people-aware from the start.
It’s a model worth watching. Not every company needs to restructure but every HR team needs a seat at the AI table.
Beyond the hype: A more grounded view of AI
We’ve all seen the breathless headlines: “AI will replace 300 million jobs.” “AI can outperform humans in 80% of tasks.” They grab attention but they often distort reality.
To truly understand what AI means for work we need to get clear on what it actually is. What we call “AI” today, including generative tools like ChatGPT, Claude and Gemini, are not intelligent agents in the human sense. They don’t think, reason or decide. They don’t understand goals, context or consequence.
Instead, they are powerful statistical models that generate outputs by estimating the most probable next item in a sequence based on patterns in their training data – basically what they have seen before. In other words, they complete patterns. They don’t predict outcomes in the human sense. And they certainly don’t act with intent.
This distinction matters deeply in the workplace, where context, judgment and accountability can’t be reduced to probability.
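For readers who want a concrete sense of what “completing patterns” means, here is a deliberately tiny sketch in Python. It is not a real large language model, and the sample text and function names are invented for illustration; it simply shows the core idea the paragraph describes, namely choosing the statistically most frequent next word based on what appeared in the training data.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): "predict" the next word purely by
# counting which word most often followed the previous one in training text.
training_text = (
    "the employee review was positive and the employee survey was positive "
    "and the manager review was negative"
).split()

# Count how often each word follows each other word (bigram counts)
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data.
    No understanding, no goals, no intent - just pattern frequency."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("was"))       # "positive": seen twice vs "negative" once
print(predict_next("employee"))  # most frequent follower in the sample text
```

Real generative models do this at vastly greater scale and sophistication, over fragments of words rather than whole words, but the principle is the same: the output is the most plausible continuation of a pattern, not a reasoned judgment.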
According to a new working paper from the International Labour Organization (ILO), published in May 2025, 24% of all employment worldwide sits within occupations now considered exposed to GenAI to some degree. Just 3.3% of jobs fall into the highest exposure category, those where most tasks could feasibly be handled by GenAI without human input.
The ILO also highlights real barriers to adoption, from low digital skills among both workers and managers to organisational cultures still resistant to automation.
A growing body of research reinforces this need for realism. A large-scale Danish study examining the real-world labour effects of generative AI found that while adoption is widespread, often employer-driven and accompanied by training, the actual impact on wages or working hours across 11 occupations highly exposed to automation was minimal. Productivity gains were modest (just 2.8–3%) and in 8.4% of cases AI created new tasks, such as reviewing machine-generated output, rather than removing work.
Meanwhile, much of the buzz around so-called ‘agentic AI’ – systems of multiple AI agents working together – is still theoretical, according to a paper published on arXiv, the preprint server hosted by Cornell University. While these architectures promise collaboration and complexity-handling at scale, they introduce major challenges: coordination errors, security risks and system instability. The projected market growth is exponential but real-world deployment remains rare.
For HR professionals the message is that we cannot adopt AI blindly. It starts with investing in employee literacy, embedding ethical guardrails and ensuring that any experimentation is transparent, inclusive and intentional. That means clearly explaining where AI is used, involving employees in the design of AI-enabled systems and creating safe spaces for feedback and course correction.
The human role in an AI world
There’s no doubt that AI can deliver real value, particularly in automating routine, admin-heavy tasks and surfacing useful patterns in large datasets. But it’s essential we use it to augment, not override, human judgment.
Generative AI doesn’t understand the world. It produces statistically plausible outputs – fluent, fast and sometimes useful – but without context or comprehension. It can simulate empathy in language but it doesn’t feel. It can offer recommendations but it doesn’t reason.
And that distinction matters deeply in HR. When you’re deciding who to hire, promote or redeploy, data can support the process but context is critical. When coaching a leader or supporting a team in conflict the subtleties of tone, trust and history simply can’t be outsourced to an algorithm.
This is where HR’s value will remain uniquely human: in the spaces where understanding matters more than prediction.
From tool to infrastructure: What this means for HR
It’s tempting to frame AI purely in terms of efficiency gains – and yes, when thoughtfully applied, it can reduce admin, speed up processes and create breathing space for deeper work. That’s valuable. But the real opportunity is bigger.
AI isn’t just another tool in the HR tech stack. As Klein argues, it’s becoming a foundational layer in how organisations operate and make decisions. That shift demands a rethinking of roles, responsibilities and relationships, not a spreadsheet exercise to cut headcount.
Using AI to cut costs by removing people may deliver short-term gains. But using it to free people up – to focus on judgment, creativity and culture – is how organisations build resilience and long-term value.
Companies that have framed AI purely as a cost-cutting exercise are learning this the hard way. Klarna, for example, replaced 700 customer service staff with AI to cut costs during a downturn only to backtrack months later when service quality dipped and customer dissatisfaction grew. “It’s critical that customers know there will always be a human if you want,” CEO Sebastian Siemiatkowski later admitted.
IBM experienced a similar backlash. After laying off thousands of HR employees and replacing them with its AI platform AskHR, the company found that tasks requiring human nuance and judgment were falling through the cracks. The result? Rehiring.
These examples show that AI-led transformation without people-led strategy risks unintended consequences and short-lived savings.
As Stephen Klein reminds us, the real question isn’t whether a company is “using AI” but what work we’re hiring this technology to do.
Right now, too many companies are launching AI pilots with the wrong objectives: speed, savings, optics. The result? Failed integrations, burned budgets and disappointed teams. The organisations that thrive will be the ones whose leaders, including HR, ask better questions: Where do we create value? What do we want to stand for? How can AI help us serve that mission better, not just faster?
Ultimately AI exposes more than our digital maturity. It reveals our leadership maturity.
So what should HR do now?
Here’s what we at The People Space believe HR leaders should focus on right now:
- Claim your strategic seat. If AI is being discussed at your board or exec table and HR isn’t there, that’s a red flag. Make the case for your inclusion. This is about workforce impact, culture, capability and compliance – all HR domains.
- Create AI principles. Don’t wait for a policy. Convene a cross-functional group to develop your own organisational AI principles. Focus on transparency, fairness, explainability and inclusion.
- Upskill – now. There are new capabilities HR needs to lead in this era. Invest in your own development and support your team to do the same.
- Be the conscience. If AI deployment risks harming employees, degrading trust or undermining culture, say so. Raise the flag. Offer alternatives. Be brave.
- Start small, learn fast. Pilot AI tools in low-risk areas, with clear success criteria and feedback loops. Share what works and what doesn’t. Normalise experimentation, not perfection.
Our promise: No hype. Just work.
Since we launched in 2017 The People Space has been a space for thoughtful, future-focused perspectives on work and technology. We’ve championed the promise of AI while staying grounded in the lived experience of people leaders.
In the months ahead we’ll continue to cover AI at work but we make this promise to you: no hype, no doom. Just smart and, where possible, evidence-based insight on what this technology means for real organisations and real people.
The future of work isn’t a product but a practice. And we believe HR has a critical role in shaping it for the better.