Workday ordered to reveal AI hiring clients: Why it matters for every HR leader

Workday must disclose which employers used its AI hiring tools as US court case advances. What HR leaders need to know about algorithmic hiring risk
A federal judge in California has ordered Workday to produce a list of customers using its AI features to score, sort or screen job applicants, escalating scrutiny over the company’s role in alleged algorithmic discrimination. The ruling, issued on 29 July 2025, forms part of an ongoing class action lawsuit brought by Derek Mobley, who claims Workday’s technology discriminated against him on the basis of race, age and disability.

This is a significant development. Not only does it challenge assumptions about how vendors deploy artificial intelligence in recruitment platforms but it also opens the door to legal accountability beyond Workday itself. Every HR leader using algorithmic tools, wherever they are in the world, should be paying attention.

This case raises wider questions about transparency, bias and responsibility in the growing market for AI-powered hiring.

The context: from algorithm to accountability

Mobley’s lawsuit, filed in February 2023, alleges that Workday’s AI systems, used by its customers to filter and rank job applications, led to discriminatory outcomes that breached US employment law. Specifically, the complaint targets Workday’s use of automated recommendation systems that score, sort or screen candidates – systems that have become increasingly common as employers seek to speed up hiring and manage high application volumes.

Earlier this year the case moved a step forward when the US District Court granted preliminary class certification. This allowed the case to represent a broader group of applicants who were subject to similar screening processes using Workday’s AI tools.

At the time, The People Space covered the implications of this development in our article ‘AI in the Workplace: A Reality Check for HR’, noting that it marked a rare moment of judicial intervention into a fast-growing but largely opaque industry.

Now, with this new order, the court has instructed Workday to provide a list of customers who have used specific AI features, including those developed by HiredScore, a company Workday acquired after the lawsuit was filed. This list will form the basis for further discovery and, potentially, for identifying which applicants may have been affected.

What the judge said and why it matters

In the court’s July order, Judge Rita F Lin rejected Workday’s attempts to exclude the HiredScore products (Spotlight and Fetch) from the scope of the case, even though they were acquired post-complaint. Workday had argued that HiredScore was not part of its core recruiting platform and operated on a separate technical stack.

However, the judge found these distinctions irrelevant at this stage, noting that:

  • Workday has since integrated HiredScore into its offering
  • The preliminary class includes applicants from 24 September 2020 onward, covering the period after the acquisition
  • The definition of the class is based on the function of the AI (scoring, sorting, ranking, screening), not the specific product brand.

Workday also claimed there were material differences in the algorithms used by HiredScore versus its original Candidate Skills Match feature. But again, the judge stated that this issue is best addressed later, at the decertification stage. For now, what matters is that applicants were subjected to an AI-driven hiring process under Workday’s umbrella.

Crucially, the court ordered Workday to provide a timeline for delivering a customer list, one that includes any organisation that enabled these AI features during the relevant period. Only where Workday can definitively prove that no scoring or screening occurred may a customer be excluded from the list.

A spotlight on the ecosystem

This order could have wide-reaching implications, not only for Workday but for the broader vendor ecosystem.

If Workday is compelled to disclose which organisations used these AI features, it could expose hundreds, potentially thousands, of employers to legal scrutiny. While this is a US-based case, it’s not hard to imagine similar claims arising elsewhere, especially as regulatory frameworks tighten around AI and automated decision-making.

For HR leaders, this raises two immediate questions:

  1. Do you know exactly how your recruitment systems – especially third-party tools – screen applicants?
  2. Can you clearly explain and evidence the fairness and legality of those processes if challenged?

The case is a reminder that procurement is not the same as abdication. Employers remain responsible for the tools they use, even if those tools come embedded in vendor platforms. If a hiring decision is made using algorithmic screening, the organisation, and not just the vendor, may face legal, reputational and ethical consequences.

Why HR leaders should act now

Regardless of geography or vendor, this case signals a shift in expectations around transparency, explainability and due diligence. Here’s why it should be on every HR leader’s radar:

1. Legal pressure on ‘black box’ systems is increasing

Governments and courts are starting to demand greater accountability for automated decision-making. In the EU, the AI Act will impose strict obligations on “high-risk” systems, including many used in recruitment. In New York City, Local Law 144 already requires audits for algorithmic hiring tools. And in the UK, regulators have signalled a growing interest in algorithmic fairness, even within existing equality and data protection laws.

The Workday case shows how individual applicants can leverage class actions to force transparency, something that may become more common as public awareness grows.

2. HR remains responsible, even when vendors provide the tech

There’s often an assumption that liability sits with the tech provider. But as this case illustrates, courts are likely to take a broader view, especially when tools are deeply embedded in employment processes. HR and recruitment teams must understand not only what their systems do, but how they do it. That includes asking vendors hard questions about training data, algorithmic design and audit processes.

3. Procurement practices need to reflect ethical and legal risks

Many organisations lack a systematic approach to assessing the risks of AI-based hiring tools. Vendor assessments tend to focus on features and pricing, not on explainability or fairness. This case highlights the importance of building due diligence into procurement, contract terms and ongoing system monitoring.

4. Reputation damage can spread beyond the courtroom

Even if an organisation is not legally liable, being publicly associated with biased or opaque systems can erode trust among candidates, employees and stakeholders. In the era of AI-enhanced work, trust is a core asset. HR leaders must ensure their use of technology aligns with organisational values and DEI commitments.

What to do next

For those already using or considering AI-powered recruitment tools, now is the time to:

  • Audit existing systems: Identify where algorithmic screening is used and assess how decisions are made.
  • Demand transparency from vendors: Ask for detailed documentation, audit results and evidence of fairness testing.
  • Review consent and communication: Ensure candidates are informed about how their data is used and decisions are made.
  • Engage legal and compliance teams: Map relevant legal obligations across jurisdictions, not just the country you operate in.
  • Invest in capability: Train HR teams to understand the basics of algorithmic systems, bias mitigation and ethical AI use.

Looking ahead

The Workday case is unlikely to be the last of its kind. As AI adoption accelerates, legal and ethical scrutiny will only increase. HR leaders, especially those at the helm of digital transformation, must be prepared to lead on responsible AI use, not wait for the courts or regulators to force their hand.

Transparency is not optional and neither is understanding.

Whether you’re a Workday customer or not, this ruling is a reminder that AI hiring systems are not just plug-and-play features. They are consequential technologies, with real impacts on real people. Now is the time to get ahead of the curve.

At The People Space we help HR leaders navigate these complex issues through our Future-Fit HR workshops, including practical sessions on AI adoption, trust and risk in HR. If you're looking to build capability, assess readiness or shape a responsible AI strategy our workshops offer grounded, evidence-based support tailored to your context. Contact Siân directly if you would like more information. 

About the author

Sian Harrington, editorial director, The People Space

Award-winning business journalist and editor. Co-founder The People Space

