AI in the UK workplace: key employment and privacy considerations

5 minute read
Regulation of artificial intelligence, intended to make it more human-centric and to mitigate the harm caused by flawed algorithms, is emerging around the world, and the UK is expected to follow suit. But why is legislation needed – and what is already in place? Louise Skinner, Pulina Whitaker and Jessica Rogers from global law firm Morgan Lewis answer these questions for The People Space.

Artificial intelligence and regulation

While there is currently no specific regulatory framework in the United Kingdom governing the use of artificial intelligence (AI), the technology is partially regulated under existing law. Given its employment law and data privacy implications, the use of AI is a growing area of focus for regulators and the UK government, and a government white paper on AI regulation is expected in late 2022. In the meantime, there are several considerations for employers implementing AI technology throughout the employment lifecycle.

There is no single, recognised definition for AI. Broadly, AI is understood as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking.

In the employment context, and accelerated by the COVID-19 pandemic, AI is increasingly being used at all stages of the employment relationship. In recruitment in particular, there are clear benefits in using AI to reduce resource time and costs. AI technology can be used to review and filter job applications and even to assess interview performance, using natural language processing and interview analytics to determine a candidate’s suitability in light of their soft skills and personality traits. In turn, this reduces the time that talent sourcing specialists and human resources teams need to spend on these tasks, allowing them to focus on other valuable work.

Despite these benefits, there are some key risks and associated safeguards that employers in the United Kingdom should consider before implementing AI technology in their employment lifecycle.

Key legal risks: discrimination

Under the Equality Act 2010, it is unlawful for an employer to discriminate against candidates or employees on the basis of “protected characteristics” (namely, age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation). The use of AI can result in indirect discrimination claims where someone with a protected characteristic suffers a disadvantage as a result of an algorithm’s output.

An algorithm that demonstrates a preference for particular traits, and which in practice results in men being favoured over women in recruitment selection, is an example of indirect discrimination, because the algorithm puts women at a particular disadvantage because of their sex. To defend such a claim, the employer would need to show that the use of AI was a proportionate means of achieving a legitimate aim. While using technology to streamline the recruitment process may be a legitimate aim, it is difficult to see how such a tool, which can have significant implications for a candidate, can be a proportionate means of achieving that aim without any human oversight.

The use of AI may also create other legal risks for employers, such as:

  • Disabled people may face particular disadvantages in undertaking automated processes or interviews. For example, some systems read or assess a candidate’s facial expressions or responses, level of eye contact, voice tone and language, which could disadvantage candidates with visual or hearing impairments, those on the autism spectrum or those with a facial disfigurement. Given the obligation under UK law to make reasonable adjustments to remove disadvantages for disabled people, an employer that applies AI software as a blanket approach could find itself in breach of discrimination law.
  • Language and tone-of-voice analysis can also disadvantage candidates whose first language is not English, increasing the risk of racial bias and of unlawful discrimination claims on the basis of race.

In the United Kingdom, compensation for unlawful discrimination is uncapped (although tribunals take into account the ‘Vento bands’ when assessing awards for injury to feelings, the current upper band of which is approximately £50,000 ($59,416) for the most serious cases of unlawful discrimination).

Key legal risks: data protection

It is likely that the use of AI during the employment lifecycle will involve the processing of candidate and/or employee personal data. Employers should therefore be mindful of their obligations under data privacy regulation, with particular regard to three key principles: (1) lawfulness, (2) fairness and (3) transparency.

The use of AI technology to make employment decisions without human scrutiny will fall within the scope of a “solely automated decision.” The UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 prohibit an employer from making solely automated decisions that have a legal or similarly significant effect on data subjects unless the decision is:

  • authorised by law;
  • necessary for entering into or performing a contract; or
  • based on the individual’s explicit consent.

Even then, except where the processing is authorised by law, specific safeguards must be in place, such as a mechanism for the individual to challenge the decision and to obtain human intervention (essentially, a human appeal process).

The processing of special category personal data, such as health or biometric data, is further restricted unless on specific lawful grounds.

Any use of AI is likely to require a data protection impact assessment. If high risks to the rights of individuals cannot be mitigated, prior consultation with a relevant supervisory authority (such as the Information Commissioner’s Office (ICO) in the United Kingdom) is required and the AI technology cannot be deployed without the consent of the supervisory authority.

The ICO has issued guidance and an AI toolkit to assist organisations in identifying and mitigating risks arising from the use of AI technology.

Mitigating the risks

Notwithstanding the risks outlined above, the use of AI technology is developing rapidly and there are a number of steps employers can take to introduce innovative technology while minimising legal risk, including:

  • Ensuring they have fully trained, experienced individuals responsible for the development and use of AI, to minimise the risk of bias and discrimination. The provider of the technology should be able to demonstrate that the data and algorithms have been stress-tested for bias and discrimination against candidates because of, for example, their gender, race or age, and disparate impact assessments should be conducted on a regular and ongoing basis (a simple illustration of such a check appears after this list).
  • Establishing clear and transparent policies and practices around the use of AI in recruitment decisions.
  • Identifying appropriate personnel to actively weigh up and interpret recommendations and decisions made by AI in the recruitment process before applying them to any individual. It is important that meaningful human review is carried out; data privacy restrictions cannot be avoided by simply “rubber-stamping” automated decisions.
  • Not relying solely on AI, and ensuring that AI is used only as one element to assist in recruitment decisions.
  • Ensuring that the process allows for human intervention: if a candidate needs adjustments because of a disability, make it clear with whom and how they should make contact to discuss what might be required.
  • Implementing ongoing equality impact assessments to identify early any issues or negative impact on diversity and inclusion as a result of the introduction of AI technology.
  • Prior to implementing AI, considering whether a data protection impact assessment is required. Additionally, employers can utilise the ICO’s AI toolkit to assess risk and implement mitigating measures.
  • From a data privacy perspective, considering and identifying the lawful basis for processing personal data in this way before proceeding with any automated profiling or decision making.
  • Updating candidate and employee privacy notices to make clear the use of AI technology in the processing of personal data.
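For employers that want to monitor the outputs of an AI screening tool in this way, the sketch below shows one very simple form of disparate impact check: comparing selection rates across candidate groups and flagging any group whose rate falls well below that of the best-performing group. This is a minimal illustration only, assuming anonymised outcome data can be exported from the tool; the field names and the 0.8 flagging threshold (a benchmark borrowed from US practice) are assumptions for the purposes of the example, not a legal test under the Equality Act 2010, and any findings would still require expert and legal review.

```python
# Illustrative sketch of a simple disparate impact check on recruitment outcomes.
# Assumes anonymised records exported from an AI screening tool; the field names
# ("group", "selected") and the 0.8 threshold are illustrative assumptions only
# and do not reflect any legal test under the Equality Act 2010.

from collections import defaultdict


def selection_rates(records):
    """Return the proportion of candidates selected within each group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["selected"]:
            selected[record["group"]] += 1
    return {group: selected[group] / totals[group] for group in totals}


def impact_ratios(rates):
    """Compare each group's selection rate against the highest-rate group."""
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical, anonymised screening outcomes.
    records = [
        {"group": "A", "selected": True},
        {"group": "A", "selected": True},
        {"group": "A", "selected": False},
        {"group": "B", "selected": True},
        {"group": "B", "selected": False},
        {"group": "B", "selected": False},
    ]
    rates = selection_rates(records)
    for group, ratio in impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: selection rate {rates[group]:.2f}, "
              f"ratio {ratio:.2f} ({flag})")
```

A flagged group does not of itself establish discrimination; it is simply a prompt for the kind of human investigation and ongoing equality impact assessment described above.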

Future developments

AI is likely to remain a hot topic in 2023 and beyond, and a number of jurisdictions have recently introduced AI-focused legislation. In New York City, legislation taking effect on 1 January 2023 will prohibit the use of AI tools in employment decisions unless the tool has been subject to a “bias audit” and its use is disclosed to candidates, who must have the opportunity to request an alternative process. In 2021 the European Commission proposed a new legal framework to address the risks of AI use, which would set out requirements and obligations for the use of AI, particularly for high-risk applications of the technology, together with an enforcement and governance structure. In Germany, employers already have an obligation to consult the works council (a consultative body representing workers) when introducing AI in the workplace.

In the United Kingdom, the Trades Union Congress (TUC) has published a report recommending measures to protect against algorithmic discrimination, including:

  • A reversal of the burden of proof where AI is used, so that the employer must disprove that discrimination occurred rather than the claimant having to prove it;
  • The creation of statutory guidance on steps that may be taken to avoid discrimination where AI is used; and
  • Mandatory AI registers to be regularly updated by employers and available to candidates.

The UK government published its AI Regulation Policy Paper on 18 July 2022, which forms part of its National AI Strategy and AI Action Plan. A white paper on the regulation of AI is expected by the end of the year, and employers can expect further guidance to emerge in the coming months.

Louise Skinner and Pulina Whitaker are partners, and Jessica Rogers is an associate, at global law firm Morgan Lewis.


Published 6 December 2022