It's the big HR ethics question: can algorithms ever be less biased than humans?


Humans are biased, so is it OK for algorithms to be biased too? Are we expecting too much from artificial intelligence in this area? After all, it is programmed by humans. Whether you agree or not, it is ultimately up to HR directors to prove that their decision making is non-discriminatory.


Bias in algorithms

“Put your hands up if you hire referrals, ex-colleagues or other people who know people you work with? This is how I like to tackle criticisms that artificial intelligence (AI) is biased,” announces Mathias Linnemann, co-founder and CCO at Worksome, a contractor platform that uses AI to match freelancers to roles. “The reality is, humans are biased. They always have been. There is a tonne of bias in recruitment.”

You would expect an AI-based recruitment business to say this. But Linnemann then adds: “The reality is, AI is biased too – because humans programme algorithms. So, shouldn’t the real question be – is AI any less biased, and even if it’s only a little bit, is this OK?”

As an opening salvo about the state of AI and bias in HR, Linnemann is not messing around. Even though strict anti-discrimination laws exist (covering protected characteristics including age, sex, ethnicity, disability and gender reassignment), to accuse recruiters of institutional bias is bold. But, with stories of AI supposedly ‘going wrong’ (such as Amazon ditching a hiring algorithm because it modelled what previous good hires looked like – mostly men – and so simply learned a bias for recruiting more men), he suggests this might simply be par for the course, and certainly no worse than how humans hire anyway. In essence, what’s the big problem?

The issue, of course, is the promise AI implies. Whether it’s analysing hundreds of microscopic facial movements to unemotionally deduce traits such as trust (as video hiring technology claims to do), or using machine learning to build a picture of candidates from what they write on social media, the promise is tantalising: repeatable decision making that’s unimpeded by prejudice or background, colour or class. Talent, it argues, will out.

AI has raised expectations about removing bias

“The reality is that AI has raised expectations about removing bias, when really it isn’t possible,” argues Dean Sadler, CEO of recruitment company TribePad. “An innate problem with AI is that the more complex an algorithm becomes, the less you know how a decision has even been made, so anyone adopting AI really needs to be comfortable with what they’re unleashing – for instance, whether there’s sufficient reporting to at least guarantee that legal protections haven’t been contravened.”

The problem HR directors face is that they need a process that protects diversity, but beyond that almost any selection criterion they set could be deemed biased. “We’ve just launched a ‘background blind’ graduate recruitment portal that asks people questions and requires them to go through simulations to analyse their responses,” says Dan Richards, recruiting director (UK & Ireland) at multinational professional services firm EY.

“It’s a scoring matrix, looking at capability, resilience and future potential based on psychometrics rather than background. This gets people to the next hurdle.”
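As an illustration of what such a background-blind scoring matrix could look like, here is a minimal Python sketch. The trait names, weights, blocked fields and pass mark are all assumptions for the example, not EY’s actual model.

```python
# Hypothetical sketch of a "background-blind" scoring matrix.
# Trait names, weights, blocked fields and the pass mark are
# illustrative assumptions, not EY's actual model.

WEIGHTS = {"capability": 0.4, "resilience": 0.3, "future_potential": 0.3}

# Fields deliberately kept away from the scorer: who you are, not what you can do.
BLIND_FIELDS = {"name", "school", "university", "postcode", "age", "gender"}

def score_candidate(psychometrics: dict) -> float:
    """Score a candidate on psychometric results only (0-100 per trait)."""
    assert BLIND_FIELDS.isdisjoint(psychometrics), "background data must not reach the scorer"
    return sum(WEIGHTS[trait] * psychometrics[trait] for trait in WEIGHTS)

candidate = {"capability": 78, "resilience": 64, "future_potential": 81}
if score_candidate(candidate) >= 70:  # assumed threshold for "the next hurdle"
    print("Proceed to next stage")
```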

Richards, though, is wary of going full-on to automatic decision making. “It’s not quite AI yet. We don’t think the technology’s ready. I’m not saying AI is any more biased, but we’re sticking to human decision making after this.”

This caution is starkly revealed by recent research by SAS and Accenture Applied Intelligence, which found that 60% of those questioned believed AI has the potential to provide more accurate decision making – but maybe not today. A significant 61% said they could not trust their organisation to use AI ethically, and 92% thought ethics training was needed for successful AI deployment.

Part of the problem is that, as well as hiring people, recruiters need to solve problems such as why some hires leave within a year while others stay. In trying to solve this, they could start modelling for certain people to the exclusion of others. EY’s Richards admits the business still has benchmarks around what the organisation thinks ‘good’ looks like and recruits on this basis. “That’s a reasonable recruitment objective,” he says. But would that be reasonable to specifically programme into an AI algorithm?
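The danger lurking in modelling ‘what good looks like’ is proxy bias: an algorithm can learn a protected characteristic from innocuous-seeming CV features without ever being shown it. One basic sanity check is to correlate each feature with the protected attribute before modelling; the sketch below uses invented column names and toy data purely for illustration.

```python
# Illustrative proxy-bias check (not any vendor's method): before modelling
# "what good looks like", test whether candidate features act as stand-ins
# for a protected characteristic. Column names and data are assumptions.
import pandas as pd

df = pd.DataFrame({
    "attended_rugby_club": [1, 1, 0, 0, 1, 0],   # hobby listed on CV
    "gap_in_employment":   [0, 0, 1, 1, 0, 1],   # e.g. parental leave
    "gender":              ["M", "M", "F", "F", "M", "F"],
})

# Correlate each feature with the protected attribute; a strong correlation
# means a model could learn gender without ever seeing the gender column.
encoded = (df["gender"] == "F").astype(int)
for col in ["attended_rugby_club", "gap_in_employment"]:
    print(col, round(df[col].corr(encoded), 2))
```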

Humans are biased too, but AI shouldn't be the sole determinant

“We probably do need to accept that all recruitment involves some level of bias when based on human decision making, even though it should not. AI doesn’t necessarily avoid this. When algorithms start selecting out against certain criteria, the decision trail becomes more visible and could leave a business open to discrimination complaints if those criteria disadvantage a particular protected group,” claims Michael Briggs, partner at national UK law firm Shoosmiths.

Adds colleague Antonia Blackwell: “HR directors want a tool that doesn’t introduce bias, but at the moment the technology doesn’t seem sophisticated enough to achieve this. So, at least for the foreseeable future, AI shouldn’t be the sole determinant; there should still be a level of human decision making – even if that itself carries bias! As far as possible, any algorithm that is used should look to ensure no groups are excluded.”

One organisation that believes it is removing bias using AI is Unilever. It is opening project-based opportunities to internal talent by matching those opportunities to people who have completed a personal skills profile, which includes information about the skills they’d like to develop.

Yanpi Oliveros-Pascual, Unilever global HR partner, explains: “We definitely feel this democratises how people are matched to projects. It’s division-, country- and function-agnostic – it simply aims to find the needle in the haystack; it makes judgements based purely on skills, rather than on who someone knows.”
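A skills-only matcher of this kind is simple to sketch. The toy example below (an illustration under assumed data, not Unilever’s actual system) ranks people against a project purely on skill overlap, so division, country and function never enter the calculation.

```python
# Minimal sketch of skills-only project matching, in the spirit of what
# Unilever describes. An illustrative toy, not Unilever's actual system.

def skill_match(project_skills: set, person_skills: set) -> float:
    """Jaccard overlap between required and offered skills (0..1)."""
    if not project_skills:
        return 0.0
    return len(project_skills & person_skills) / len(project_skills | person_skills)

project = {"python", "forecasting", "supply-chain"}
people = {
    "A": {"python", "forecasting", "german"},
    "B": {"branding", "copywriting"},
}

# Rank candidates purely on skill overlap - "who someone knows" plays no part.
for name, skills in sorted(people.items(), key=lambda kv: -skill_match(project, kv[1])):
    print(name, round(skill_match(project, skills), 2))
```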

While the system is only being used for short projects where contractors might otherwise be hired, it does at least build a culture of finding the best talent – and if AI can do this, it definitely has a future.

It's all about algorithmic management now

“Algorithmic management is where organisations will need to go,” says Peter Cappelli, director of the Center for Human Resources at The Wharton School. “Where firms are looking for applicant characteristics, for instance those associated with better job performance, algorithms should only make recommendations, not decide fully. This is what the likes of IBM do.”

But he adds: “In certain cases, research is now finding machine learning algorithms can do a better job than humans, such as in hiring white-collar workers, where inconsistent human decision making introduces too much variation.

“In a study by Columbia Business School assistant professor Bo Cowgill, candidates selected by a machine for face-to-face interview were 14% more likely to pass and receive a job offer, 18% more likely to accept an extended job offer and 0.2-0.4 standard deviations more productive once hired.”
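Cappelli’s ‘recommend, not decide’ principle can be made concrete. Below is a minimal Python sketch of the pattern (an illustrative assumption, not IBM’s or any vendor’s actual implementation): the model only ranks applicants and surfaces a shortlist for a human to act on.

```python
# Sketch of the "recommend, not decide" pattern: the algorithm ranks,
# the human decides. Illustrative only - not any vendor's implementation.

def recommend_shortlist(scores: dict, k: int = 5) -> list:
    """Return the top-k applicant IDs by model score - a recommendation.
    Hiring and rejection decisions stay with the human reviewer."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

model_scores = {"app_101": 0.91, "app_102": 0.55, "app_103": 0.78, "app_104": 0.62}
print("For human review:", recommend_shortlist(model_scores, k=3))
# Crucially, applicants outside the shortlist are not auto-rejected;
# the full ranked list remains available to the recruiter.
```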

Certainly, AI can – if programmed correctly – reduce bias against jobseekers with protected characteristics. The British Transport Police applies blind recruiting principles to all applicants to reduce bias in candidate selection, including monitoring the number of applications coming in across race (ethnicity), sexuality, faith, gender, disability and background. This enables it to react wherever it has talent scarcity issues. Since it began using the Oleeo system in 2014, 9% of its police officers, 14% of its special officers, 17% of its PCSOs and 23% of its police staff are now from ethnic minorities, while 53% of staff are now women.

Of course, some might simply say this is bias as positive discrimination, which shows how, even in attempting to remove bias, new bias can be added. “We’re in very interesting times, but it’s ultimately up to HR directors to prove that their decision making is non-discriminatory,” says Shoosmiths’ Blackwell. “At the very least,” adds colleague Briggs, “organisations need to carry out an impact assessment of any AI technology they use, and continually test and refine it, so that, in trying to solve one problem, no additional biases are introduced.”
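One widely used form of such an impact assessment is the US EEOC ‘four-fifths rule’: compare each group’s selection rate against the best-performing group’s and flag any ratio below 0.8. A minimal sketch follows, offered as an illustration rather than as Shoosmiths’ prescribed method, with assumed figures.

```python
# Disparate-impact ("four-fifths rule") check on selection rates.
# Group names and figures are assumptions for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def disparate_impact(rates: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(selected=50, applicants=200),   # 25%
    "group_b": selection_rate(selected=18, applicants=120),   # 15%
}

for group, ratio in disparate_impact(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # 0.8 is the four-fifths threshold
    print(f"{group}: ratio={ratio:.2f} {flag}")
```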

Will AI ever be fully better than humans? “Unlikely,” argues Linnemann. But, he says, perhaps recruitment itself needs to adapt: “HR directors should arguably be moving away from selecting people to fit a job role, towards selecting people who can solve problems. This moves the lens away from subjective and biased interpretations, like ‘will this person get on with team members?’ – a question that might be less relevant when search criteria are reframed in this way.”

Published 27 November 2019

