Artificial intelligence will soon become an intrinsic part of what makes a successful organisation. According to Rowan Curran, analyst at research and advisory company Forrester, AI is turning into an indispensable, trusted enterprise co-worker.
Meanwhile, one in four organisations plans to start using or to increase its use of automation or artificial intelligence (AI) in recruitment, and one in five plans to use these tools in performance management within the next five years, according to research by SHRM (the Society for Human Resource Management). Nearly one in four organisations already reports using automation or AI to support HR-related activities, including recruitment and hiring, the research finds.
However, companies have suffered reputational and legal damage when their algorithms have led to discriminatory and privacy-invading outcomes. In human resources, biased and discriminatory recruitment decisions are one such outcome. And often the companies that deployed the AI cannot explain why the black box made the decision it did.
This, says Reid Blackman, is why companies should be focusing on “AI for not bad” rather than “AI for good”. One of the world’s foremost experts on ethical AI and the author of Ethical Machines, Blackman works with companies to integrate ethical risk mitigation into the development, procurement and deployment of emerging technology products. And he says that, while AI for good is a worthy aspiration, what most people should focus on is “not ethically screwing up, frankly”.
As he explains: “There are lots of ethical, reputational and regulatory pitfalls when you're designing, building, procuring and ultimately deploying AI. And ethical rule number one is do no harm. And business rule number one is do no harm to your brand and your bottom line. It makes sense to me to say, look, when you talk about AI ethics, the first useful distinction to make is between ‘AI for good’ on the one hand and ‘AI for not bad’ on the other. The goal is to not do ethically bad things on the road to achieving our ordinary business objectives.”
The challenge is that this is not a case of data scientists or engineers behaving badly by purposely building bias and the like into their algorithms. It’s what Blackman calls the “little decisions made in the design, build, deployment and maintenance throughout the AI lifecycle.”
Most data scientists or engineers don't know about those issues, he says. And if they do, they might not be empowered by their employer to investigate them. Or they may not have the resources to mitigate the risks even if they spot them.
He points to the now infamous case of Amazon and hiring bias. Because of biased data fed into the system, the AI-based tool that read resumes green-lighted men’s applications for interview and red-lighted women’s. “They tried hard to debias it. They couldn't do it for various reasons. And then they scrapped the project, which is an ethical win in my book. They were intending to do not bad but they couldn't pull it off because they lacked either the technical know-how or the institutional support or both,” he says.
So how can businesses, and HR in particular, focus on “AI for not bad”? Blackman says there are three considerations when it comes to ethical risks: bias, explainability and privacy.
- Bias
AI works by recognising complex patterns in data. As the Amazon case demonstrated, those patterns may themselves be discriminatory – and then you get a biased or discriminatory outcome. One of the reasons it is hard to get rid of bias is what are called proxy variables – variables that stand in for protected characteristics such as race or gender. For example, the word “executed” appears more often in men’s CVs, so a model trained on past hiring data can learn to favour CVs that contain it even if gender is never an explicit input. To tackle this, Blackman says, the first thing to do is identify whether your data is biased or discriminatory in some way you find unacceptable. You will never get a completely unbiased system, so it is about mitigating the bias as well as possible. He suggests setting what you think is an appropriate benchmark for the data to be sufficiently debiased to use. “It’s a qualitative decision. It's an ethical decision. It's a reputational decision. It's a regulatory or legal decision. It's a business decision. And so that's not a question that should be left in the hands of data scientists alone.”
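To make the proxy-variable point concrete, here is a minimal sketch of one way a team might screen a candidate dataset: checking how strongly each feature correlates with a protected attribute, and comparing selection rates across groups. The column names and figures are invented for illustration, and the “four-fifths” ratio is only a common rule of thumb, not a legal threshold or Blackman’s own method.

```python
# Hypothetical screening of recruitment data for proxy variables and
# disparate selection rates. All values here are made up for illustration.
import pandas as pd

candidates = pd.DataFrame({
    "uses_word_executed": [1, 1, 0, 1, 0, 0, 1, 0],
    "years_experience":   [5, 7, 6, 4, 5, 8, 3, 6],
    "gender":             ["M", "M", "F", "M", "F", "F", "M", "F"],
    "selected":           [1, 1, 0, 1, 0, 1, 1, 0],
})

# 1. Flag features that correlate with the protected attribute (possible proxies).
protected = (candidates["gender"] == "M").astype(int)
for col in ["uses_word_executed", "years_experience"]:
    corr = candidates[col].corr(protected)
    print(f"{col}: correlation with gender = {corr:.2f}")

# 2. Compare selection rates across groups. A ratio well below ~0.8
#    (the "four-fifths" rule of thumb) is usually worth investigating.
rates = candidates.groupby("gender")["selected"].mean()
print("Selection rate by group:")
print(rates)
print("Disparate impact ratio:", round(rates.min() / rates.max(), 2))
```

Where the flagged ratio or correlations sit against the benchmark the organisation has chosen is exactly the kind of qualitative, ethical and legal judgement Blackman argues should not be left to data scientists alone.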
- Explainability
Can you explain the decision the algorithm has made? As noted above, organisations use AI because it can gain insights from vast troves of data by recognising patterns that are too complex for us to spot ourselves. But, says Blackman, the problem is that those patterns may also be too complex for us to understand what is happening between the inputs and the outputs. Here HR needs to think about its use case. If it is a decision about who is worthy of an interview, for example – what Blackman calls a “high stakes” decision – then it is important to be able to explain it. But, he says, the way organisations address explainability is insufficient. “It's often not data scientists but other stakeholders who need explanation. So the job candidate who was denied an interview – you need to explain how your AI is working to those people, not to mention to the regulators. There still needs to be a translation from that technical talk to layman's talk.”
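One way to attempt that translation into layman’s talk for a high-stakes decision is to use an interpretable model and report each feature’s contribution in plain language. The sketch below assumes a hypothetical interview-screening model with made-up features; it illustrates the general idea rather than Blackman’s approach or any specific vendor’s tool.

```python
# A toy interview-screening model whose individual decisions can be
# explained feature by feature. All data and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "relevant_certifications", "internal_referral"]

# Hypothetical training data: 1 = invited to interview, 0 = not invited.
X = np.array([[5, 1, 0], [2, 0, 0], [8, 2, 1], [1, 0, 1], [6, 1, 1], [3, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Explain one decision: each feature's contribution to the log-odds
# (coefficient * feature value), phrased for a non-technical reader.
candidate = np.array([4, 1, 1])
probability = model.predict_proba(candidate.reshape(1, -1))[0, 1]
contributions = model.coef_[0] * candidate

print(f"Estimated chance of an interview: {probability:.0%}")
for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    direction = "raised" if value > 0 else "lowered"
    print(f"- {name} {direction} the score (contribution {value:+.2f})")
```

The output reads as a short list a rejected candidate or a regulator could follow, which is the kind of explanation the stakeholders Blackman mentions actually need.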
- Privacy
Data is the fuel of AI, and that data is often about people; broadly speaking, the more data a system has, the better it works. So, first, organisations are strongly incentivised to gather as much data about people as they can, which creates a privacy problem. Secondly, the nature of AI or machine learning is to make predictions, and these predictions generate new knowledge about people – in other words, new data – which arguably constitutes a violation of privacy because you are inferring things about a person they never disclosed. Finally, there are applications such as facial recognition software where, in a public setting, it is likely no one has given informed consent for you to scan their faces.
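The “new knowledge” point can be made concrete with a toy example: a model trained on ordinary behavioural signals ends up inferring an attribute the individuals never disclosed. Everything below – the feature names, the figures and the inferred label – is invented purely for illustration.

```python
# A toy illustration of inference as a privacy risk: ordinary, seemingly
# innocuous inputs are used to predict information people never provided.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

features = ["late_night_app_use", "daily_step_count", "pharmacy_purchases"]
X = np.array([[1, 2000, 4], [0, 9000, 0], [1, 3000, 5],
              [0, 11000, 1], [1, 2500, 6], [0, 8500, 0]])

# The training label is something the individuals never disclosed directly.
has_health_condition = np.array([1, 0, 1, 0, 1, 0])

model = DecisionTreeClassifier(max_depth=2).fit(X, has_health_condition)

new_person = np.array([[1, 2700, 3]])
print("Inferred, never-disclosed attribute:", int(model.predict(new_person)[0]))
```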
To embed ethical AI principles into your organisation, Blackman says, the first step is to write an AI ethics statement that “is not totally useless, which the vast majority of people or organisations that have an ethics statement have not done.”
Ethics statements are too high level and too abstract, he says: “We're for fairness, we're for transparency.” Instead, these statements need to have teeth. If you say “fair”, for example, you need to explain what fairness means in that situation.
“It needs to be tied to some action guardrails,” explains Blackman. “For instance, if you say we respect people's privacy, what does this mean you will always do? And what will you never do? For example: we respect people's privacy, and so we will never sell your data to a third party. That's a guardrail.”
It is also important to have a human in the loop. “Many organisations want automated tools and ask whether they can just automate this process. There are various kinds of automation tools you can use for ethical risk mitigation but, as we tell our clients, the more you automate, the more the risk goes up. You really need a well-trained, experienced set of people involved in this process.”
AI is still in its infancy in terms of deployment at scale and yet we have already seen ethical risks being realised in organisations. Regulation will play an important role in digital transformation in general as the impacts of AI become more obvious in practice, but organisations need to act now to ensure that the AI they are using – or plan to use – advances their objectives instead of undermining them.
Reid Blackman is CEO of ethical risk consultancy Virtue and author of Ethical Machines, published by Harvard Business Review Publishing