HR's new role is ethical guardian of data
Anyone who still thinks the primary role of HR professionals is looking after people may want to stop reading now, because in all likelihood this will only be true in a roundabout sort of way. What’s far more likely is that HR’s role will increasingly be about looking after people’s data first – acting as ethical guardian of that data.
Thanks to ever-growing connectivity and smart devices, it’s predicted that by this time next year every living person will be generating 1.7 MB of data every second – all of which will add to the 2.5 quintillion bytes (that’s 2.5 followed by 18 zeros) already being created every single day. With some of this data coming from the likes of wellness apps, artificial intelligence (AI), sentiment analysis and natural language processing technology for recruitment and candidate selection, chatbot technology and dashboards for measuring productivity, HR’s contribution to the data mountain will be considerable.
But creating data is only half the story. Employees will only be looked after if how their data is used is looked after too. “As more data is collected, so starts an almost impossible process for people to actually oversee it and do quality control,” argues Dr Benjamin Bader, senior lecturer in international HRM at Newcastle University Business School. The oft-used phrase is ‘garbage in, garbage out’. Bader puts it more delicately – “If flawed data is fed into a system, computers make flawed decisions based on this data” – but the message is the same: data could soon be responsible for some very dodgy decision-making.
The big HR data issues
Ethically questionable outcomes resulting from bad data management are already starting to be seen. Tech giant Amazon was last year forced to announce it was scrapping its AI recruitment tool, after finding its automated decision-making technology favoured men. Why? Because it had been trained on hiring data from the previous 10 years – data dominated by male hires – which meant it was looking for men, not women. Not only that, it was actively downgrading female applicants, despite their having all the relevant skills.
“The three biggest issues I see are around the potential for biased decision-making due to flawed or biased datasets, concerns around privacy and data access, and integration with existing systems and data sources to enable more intelligent systems,” says Ben Eubanks, author of Artificial Intelligence for HR (Make Work More Human, Not Less). “On a more fundamental level, employers have to be careful they are not creating a robotic process and eliminating the human element, because nobody wants to work for a sanitised, automated company with no soul.” So serious is concern about bias in algorithm-based technology that in March this year the independent watchdog the Centre for Data Ethics and Innovation announced it would investigate exactly this – specifically, the way companies use it for shortlisting candidates.
Selection technology is just one area of concern. Another revolves around the use of apps designed to monitor employee activity. Although often designed with the best of intentions (some apps will alert physically inactive staff to suggest they stretch their legs and get some non-screen time), they unearth a multitude of ethical dilemmas, as Sarah Sandbrook, head of talent consulting and initiatives at Deutsche Telekom, points out: “While it’s very easy to measure inputs, it’s not so easy to measure output from a productivity point of view,” she explains, “particularly in sectors like ours, where you’re dealing with knowledge workers.” She adds: “Until we crack this I’m not sure what value some of this technology adds, particularly when placed against the ethical issues of whether it’s right to be tracking someone’s every move. Just because we can do it doesn’t mean we should be doing it.”
Firms developing tracking technology often don’t see these real-life concerns, and neither, it seems, do investors. Yet. US employee monitoring firm ActivTrak recently raised $20 million as part of its growth plans, with its CEO claiming the company is “at the tipping point of a tremendous market opportunity”.
HR needs to show leadership
So the task, according to Ursula Huws, professor of labour and globalisation at Hertfordshire Business School, is for HR to show real leadership here. She says: “When you see algorithms being developed that claim to be able to detect emotional states using facial recognition software, it raises rather scary ethical questions about social control, both inside and outside the workplace. If you have a chip in you, which is very handy for getting you through security, there’s nothing to stop the employer tracking your whereabouts outside working hours as well as during them.”
Adele Hayfield, partner at legal firm Shoosmiths, agrees: “HR has a role to ensure that the data isn’t just collected but is acted upon appropriately. For this reason, it is very important that HR is involved at the design stage of any new technologies, so it can put forward these important considerations.”
Luckily, there are gradual signs that leaders are taking these concerns more seriously. Some organisations, such as Google and Amazon, have started to explore the concept of creating ethics boards. And technology is also being developed that promises to actively correct human bias. For instance, augmented language analysis firm Textio analyses whether the way firms’ recruitment ads are written could unwittingly exclude certain groups from applying. So impressed has fast-food giant McDonald’s been that it has just announced it is partnering with the company to ensure hiring managers craft gender-neutral job postings.
At Deutsche Telekom, HR now works closely with the firm’s data privacy team and Workers’ Council to specifically define what is and isn’t ethically acceptable. Sandbrook also confirms she houses her own digital and innovation function within the HR team. “At one point the leader of this group allowed himself to be chipped,” she says. “At the time people weren’t sure about it, but the more we’re able to collect and analyse data, the more we can make sensible decisions about what we do [with it]. But,” she adds, “it’s not just down to HR to play policeman. HR should ask the questions but it’s a business issue too.”
Establishing algorithmic accountability
It seems like establishing “algorithmic accountability” – as termed by Dr Thomas Calvard, lecturer in human resource management at the University of Edinburgh Business School – will increasingly be an inescapable part of HR’s future role. He says: “Businesses need a way to lift up the bonnet, and ask questions about how algorithms work and what they’re based on. For instance, health insurance. Will algorithms soon start to decide who doesn’t get it, because certain employees are deemed uninsurable? And is this right?” He adds: “In principle having a data governance structure – where HR has some form of committee or board with some oversight on key trends and issues – must be a positive step. But HR must also develop relationships with other functions, like IT and marketing, which have already gained some experience in the use of data. Only this way will it ensure data ethics stays on the boardroom agenda.”
As with all new technology, there are new – and often unanticipated – challenges. But anticipate them HR professionals must. “Every new technology we see in the consumer world has impacted the world of HR,” says Josh Bersin, founder of Bersin by Deloitte. “From chatbots that are regularly used to help job candidates screen for positions, to tools that measure social influence, location and travel schedules, they’re all producing data that managers and HR people need to understand to make work better.”
In the longer term, Huws argues it’s likely we’ll see regulation that specifically establishes a base level of data-usage standards. She adds: “Historically we tend to see pendulums swing, and if the situation becomes intolerable, people react against it.” She concludes: “We probably need some regulatory floor below which standards can’t fall, so a level playing field of basic workers’ rights is set. That’s when everyone knows what data is being used for and whom it’s being shared with.”