AI in the UK workplace: key employment and privacy considerations
Although there is currently no AI-specific regulatory framework in the United Kingdom, the use of artificial intelligence (AI) is partially regulated through existing law. Given the employment law and data privacy implications, the use of AI is a growing area of focus for regulators and the UK government, and a UK government white paper on AI regulation is expected in late 2022. In the meantime, there are several considerations for employers implementing AI technology throughout the employment lifecycle.
There is no single, recognised definition for AI. Broadly, AI is understood as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking.
In the employment context, accelerated by the COVID-19 pandemic, AI is increasingly being used at all stages of the employment relationship. In recruitment, in particular, there are clear benefits in using AI to reduce time and costs. AI technology can be used to review and filter job applications and even to assess interview performance, using natural language processing and interview analytics to determine a candidate’s suitability in light of their soft skills and personality traits. In turn, this reduces the amount of time that talent sourcing specialists and human resources teams need to spend on these tasks, allowing them to focus on other valuable work.
Despite these benefits, there are some key risks and associated safeguards that employers in the United Kingdom should consider before implementing AI technology in their employment lifecycle.
Key legal risks: discrimination
Under the Equality Act 2010, it is unlawful for an employer to discriminate against candidates or employees on the basis of “protected characteristics” (namely, age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation). The use of AI can result in indirect discrimination claims where someone with a protected characteristic suffers a disadvantage as a result of an algorithm’s output.
An algorithm that demonstrates a preference for particular traits, which in practice results in men being favoured over women in recruitment selection, is an example of indirect discrimination on the basis that the algorithm places women at a particular disadvantage because of their sex. To defend such a claim, the employer would need to show that the use of AI was a proportionate means of achieving a legitimate aim. While the use of technology to streamline the recruitment process may be a legitimate aim, it is difficult to see how such a tool, which can have significant implications for a candidate, could be a proportionate means of achieving that aim without any human oversight.
The use of AI may also create other legal risks for employers beyond discrimination claims. Notably, compensation for unlawful discrimination in the United Kingdom is uncapped (although, when assessing injury-to-feelings awards, tribunals apply the ‘Vento bands’, the current upper band of which is approximately £50,000 ($59,416) for the most serious cases of unlawful discrimination).
Key legal risks: data protection
It is likely that the use of AI during the employment lifecycle will involve the processing of candidates’ and/or employees’ personal data. Employers should therefore be mindful of their obligations under data protection law, with particular regard to three key principles: (1) lawfulness, (2) fairness and (3) transparency.
The use of AI technology to make employment decisions, without human scrutiny, will fall within the scope of a “solely automated decision.” The UK General Data Protection Regulation (GDPR) and Data Protection Act 2018 restrict an employer from making solely automated decisions that have a significant impact on data subjects unless the decision is:
- necessary for entering into, or performing, a contract with the individual;
- authorised by law; or
- based on the individual’s explicit consent.
Even then, except where the decision is authorised by law, specific safeguards must be in place, such as a mechanism for the individual to challenge the decision and to obtain human intervention with respect to it — essentially, a human appeal process.
The processing of special category personal data, such as health or biometric data, is further restricted unless specific lawful grounds apply.
Any use of AI is likely to require a data protection impact assessment. If high risks to the rights of individuals cannot be mitigated, prior consultation with a relevant supervisory authority (such as the Information Commissioner’s Office (ICO) in the United Kingdom) is required and the AI technology cannot be deployed without the consent of the supervisory authority.
The ICO has issued guidance and an AI toolkit to assist organisations in identifying and mitigating risks arising from the use of AI technology.
Mitigating the risks
Notwithstanding the risks outlined above, the use of AI technology is developing rapidly and there are a number of steps employers can take to introduce innovative technology while minimising legal risk, including maintaining meaningful human oversight of AI-assisted decisions, carrying out data protection impact assessments before deployment, testing tools for bias, and being transparent with candidates and employees about how AI is used.
AI is likely to continue to be a hot topic in 2023 and beyond, and a number of jurisdictions have recently introduced AI-focused legislation. In New York City, legislation effective 1 January 2023 will prohibit the use of AI tools in employment decisions unless the tool has been subject to a “bias audit” and its use is disclosed, with candidates given the opportunity to request an alternative process. In 2021, the European Commission proposed a new legal framework to address the risks of AI use, which would set out requirements and obligations regarding the use of AI, identify high-risk applications of the technology, and establish an enforcement and governance structure. In Germany, employers already have an obligation to consult the works council (a consultative body representing workers) when introducing AI in the workplace.
In the United Kingdom, the Trades Union Congress has published a report calling for measures to protect against algorithmic discrimination in the workplace.
The UK government published its AI Regulation Policy Paper on 18 July 2022, which forms part of the UK government’s National AI Strategy and its AI Action Plan. A white paper on the use of AI is expected by the end of 2022, and employers can expect further guidance to emerge in the coming months.