Eightfold lawsuit reveals the second accountability gap in AI hiring

A new class action against Eightfold AI shifts legal scrutiny of AI hiring away from bias alone and towards transparency, consent and access to data. Together with the ongoing case involving Workday, it reveals two emerging accountability gaps that every HR leader needs to understand
[Image: a young man sitting at a laptop with a score displayed above him]

Summary

Recent US class actions show that AI-powered hiring systems are being challenged on two fronts: unfair outcomes and invisible processes. While the Workday lawsuit focuses on alleged discriminatory results, the Eightfold case centres on whether job applicants are entitled to know, access and challenge AI-generated scores used in hiring decisions. Together the cases signal rising legal expectations around transparency, accountability and governance in AI hiring.

AI hiring has two accountability gaps. One concerns unfair outcomes. The other concerns invisible processes. Both are now under legal scrutiny.

A new class action puts AI hiring under the microscope again

A new class action lawsuit filed in California is challenging how AI-powered hiring platforms operate behind the scenes and what job applicants are entitled to know about how they are assessed.

The case targets Eightfold AI, a widely used recruitment technology provider whose software is embedded in the hiring processes of many large employers. The complaint alleges that Eightfold collects, assembles and evaluates extensive personal data about job applicants using artificial intelligence, produces scores and rankings that shape hiring decisions, and does so without meeting long-standing legal requirements designed to protect individuals applying for work.

At the heart of the case is a question: when an AI system profiles and scores candidates for employment, does that constitute a regulated employment report? And if so, what rights do candidates have to see, understand and challenge what is being said about them?

This is not the first time AI hiring tools have found themselves in court. But this case takes the scrutiny in a different direction, and it matters for every HR leader using algorithmic screening or matching tools, regardless of geography or vendor.

What the Eightfold case is actually about

The Eightfold lawsuit is brought by two job applicants on behalf of a proposed nationwide class. They argue that Eightfold’s technology functions as a third-party assessment system that materially influences who progresses in recruitment and who is screened out.

According to the complaint, when candidates apply for roles at organisations using Eightfold, the platform goes beyond the information submitted in the application itself. It gathers data from multiple sources, including public professional profiles, inferred skills, career trajectories and other signals, and processes this information through proprietary AI models. The output is a match score, typically on a scale from zero to five, ranking candidates by their predicted suitability or likelihood of success.

Employers then use those rankings to decide which applications to review and which to discard. In many cases, lower-ranked candidates are filtered out automatically, without a human ever reviewing their application.
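
To make that mechanic concrete, here is a minimal, purely hypothetical sketch of threshold-based auto-filtering of the kind the complaint describes. The Candidate structure, field names and the 3.5 cut-off are illustrative assumptions, not Eightfold's actual logic.

```python
# Hypothetical illustration of automated screening on an AI match score.
# All names and the threshold are assumptions for illustration only;
# this is not Eightfold's implementation.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    match_score: float  # AI-generated score on the 0-5 scale described above


REVIEW_THRESHOLD = 3.5  # assumed employer-configured cut-off


def screen(candidates: list[Candidate]) -> tuple[list[Candidate], list[Candidate]]:
    """Split applicants into those a recruiter will see and those
    discarded automatically, without any human review."""
    reviewed = [c for c in candidates if c.match_score >= REVIEW_THRESHOLD]
    filtered_out = [c for c in candidates if c.match_score < REVIEW_THRESHOLD]
    return reviewed, filtered_out


applicants = [Candidate("Applicant A", 4.2), Candidate("Applicant B", 2.9)]
to_review, never_seen = screen(applicants)  # Applicant B is never seen by a human
```

The point of the sketch is the asymmetry it makes visible: a candidate below the threshold is excluded by a single numeric comparison, with no record of the decision ever reaching a recruiter.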

The plaintiffs say candidates are not clearly informed that this scoring is taking place, are not shown the data used to generate their scores and are not given an opportunity to challenge errors or assumptions before hiring decisions are made.

The legal claim is not that the technology is biased. It is that the process is opaque and that opacity may itself be unlawful.

This is a watershed moment for the hiring industry because it puts a spotlight on a simple principle: if a candidate didn’t knowingly provide the data, it shouldn’t be used to judge them - Barb Hyman, founder of Sapia.ai

What is a consumer report for employment purposes?

Under the US Fair Credit Reporting Act (FCRA), a consumer report is not limited to credit checks. It includes any third-party report that assembles or evaluates information about an individual’s character, reputation, personal characteristics or mode of living and is used to make employment decisions.

When such reports are used, individuals are entitled to clear disclosure, informed consent, access to the information held about them and a meaningful opportunity to correct inaccuracies.

The Eightfold case argues that AI-generated candidate scores and profiles fall within this definition.

Why this is legally different from the Workday case

To understand why this matters, it helps to distinguish this case from an earlier and widely covered lawsuit involving Workday.

The Workday case, brought by a job applicant alleging discrimination, focuses on outcomes. It asks whether algorithmic screening systems used by employers led to discriminatory results based on protected characteristics such as race, age or disability and whether a vendor can be held legally accountable alongside employers for those outcomes.

That case sits squarely within employment and civil rights law. Its central concern is fairness and disparate impact.

The Eightfold case takes a different route. It focuses on process rather than outcome. The question is not whether the AI discriminated but whether it operated in a way that denied candidates basic rights to transparency, consent and access to information.

Put simply, the Workday litigation asks whether AI hiring tools produce unlawful results. The Eightfold litigation asks whether AI hiring tools are being used in ways that applicants are never properly told about. Both matter because they expose different vulnerabilities in the same system.

AI hiring now has two accountability gaps

Taken together, these cases point to a broader shift in how AI-enabled hiring is being examined. There are now two distinct accountability gaps that courts, regulators and candidates are beginning to probe.

The first concerns unfair outcomes.

This is the gap highlighted by the Workday case. It centres on whether algorithmic systems disadvantage certain groups, whether bias is embedded in training data or design choices and who bears responsibility when automated screening leads to discriminatory patterns. This is where most public debate about AI hiring has focused to date.

The second concerns invisible processes.

This is the gap exposed by the Eightfold case. It centres on whether candidates even know they are being assessed by third-party AI systems, what data those systems use, what inferences they draw and how those inferences shape employment decisions. It asks whether people have any meaningful visibility or agency in processes that can shape their careers.

Both gaps challenge assumptions about how AI hiring tools operate and who they serve.

The two accountability gaps in AI hiring

Accountability gap one: unfair outcomes
Concerned with bias, discrimination and disparate impact. Anchored in employment and equality law. Exemplified by the Workday litigation.

Accountability gap two: invisible processes
Concerned with transparency, consent and access to information. Anchored in consumer reporting and data protection law. Exemplified by the Eightfold litigation.

Why invisible processes are now a serious HR risk

For many organisations, algorithmic screening has been treated as a technical layer that sits somewhere between HR systems and recruitment operations. Vendors promise efficiency, scale and objectivity. Procurement focuses on features, integration and cost. The mechanics of how candidates are profiled often remain abstract. It's this abstraction that is becoming harder to defend.

The Eightfold case underscores a growing legal and ethical expectation that individuals should not be subjected to consequential automated assessments without understanding how those assessments work and without having any route to challenge them.

This expectation is not limited to the US. In the EU, recruitment systems are classified as high-risk under the AI Act, a classification that brings obligations around transparency, documentation and human oversight. In the UK, regulators have signalled that existing equality and data protection frameworks already apply to algorithmic decision-making in employment. New York City’s Local Law 144 requires audits of automated employment decision tools and public disclosure of their use.

What this means for HR leaders using AI hiring tools

The Eightfold case should prompt HR leaders to ask a different set of questions from those raised by earlier discrimination-focused lawsuits. It's no longer enough to ask whether a system has been tested for bias. Leaders also need to understand how visible the system is to candidates and how defensible its use would be if challenged.

Key questions include:

  • Do candidates know that third-party AI systems are assessing them?
    Could you clearly explain, in plain language, what those systems do and how they influence decisions?
  • Do you have access to the outputs?
    If a candidate asked to see the data or scores used to assess them, could you provide it or even obtain it from your vendor?
  • What rights do candidates have in practice?
    Is there any mechanism for review or correction when AI-generated assessments are wrong, incomplete or misleading?
  • Where does responsibility sit?
    If a vendor positions itself as providing insights rather than decisions, does that reflect how the tool is actually used in your hiring process?

These are governance questions and sit squarely within HR leadership’s remit. As Barb Hyman, founder of Sapia.ai, says: "Frankly, we saw it coming. There are too many people building a ‘mosh pit’ of data and weaponising it. This means that all investment has gone into efficiencies, focussed on streamlining the recruitment process rather than improving the candidate experience. We are now seeing the pitfalls of this strategy."

Procurement is not the same as abdication

One of the lessons running through both the Workday and Eightfold cases is that buying a system does not transfer responsibility for how it is used.

Courts are increasingly willing to look beyond marketing language to examine function. If an AI tool scores, ranks or filters candidates in ways that shape employment outcomes, it becomes part of the decision-making infrastructure, regardless of whether a human formally approves the final hire.

That has implications for procurement, contracts and oversight. HR leaders need visibility into how tools operate, not just what they promise to deliver. They need clarity on data sources, model logic, retention practices and candidate communication. They also need to be able to demonstrate that they have exercised judgement, not simply accepted default settings.

From experimentation to accountability

AI-powered hiring has moved rapidly from experimentation to mainstream adoption. What these lawsuits show is that legal and ethical accountability is now catching up.

The Workday case challenges assumptions about who is responsible when AI produces unfair outcomes. The Eightfold case challenges assumptions about how invisible AI processes can remain. Together they signal that AI tools are being examined as consequential systems that shape access to work, opportunity and livelihood.

For HR leaders the implication is that understanding how these systems work and how they affect candidates is no longer optional. Transparency, explainability and governance are now leadership requirements.

AI will continue to play a role in hiring, but are HR leaders prepared to account for it?

At The People Space we see these cases as part of a broader reckoning about trust, technology and responsibility in people decisions. The organisations that navigate this well will be those that treat AI as a capability to be governed rather than a 'black box' to be deployed.

About the author

Sian Harrington

Business journalist and editor specialising in HR, leadership and the future of work. Co-founder and editorial director of The People Space

