Why ethical HR and artificial intelligence are boardroom priorities today


The integration of AI in HR raises significant concerns around data privacy, misuse of technology and security vulnerabilities, making ethical HR and AI top priorities in boardroom discussions, says chief HR officer Kate Bishop

[Illustration: a man in a suit with an AI head and judicial scales]

The topic of ethical HR remains one of the most contentious and hotly debated around boardroom tables today. And when you add the ever-growing power and influence of artificial intelligence (AI) and the latest AI-driven large language models like Bard and ChatGPT, the discussion becomes more intricate. There are potential issues around data privacy, employee misuse of technology and security vulnerabilities. Taken together, these factors elevate the concerns well beyond the confines of HR and make it a pressing topic for the C-suite as a whole.

The reality gap

For all the potential business applications of AI, chief human resources officers (CHROs) and their fellow board members need to ask themselves: do we need to jump on every new technology for fear of being left behind, or should we instead carefully evaluate the business impact of each new application we put in place?

Humanisation is a crucial element in this context, and it calls for a moment of intervention. Those in positions of authority need to pause and critically evaluate: "What exactly have we initiated here, and how fast is it progressing?"

One concern is the growing gap between the acknowledged need for trustworthy AI in businesses and the actual implementation on the ground. A recent global study sponsored by Progress and conducted by Insight Avenue found that while 78% of respondents believe data bias will become a bigger problem as AI/ML use increases, just 13% are currently addressing data bias and have an ongoing evaluation process to weed it out.

As the latest adaptive and generative AI tools emerge and mature, the need to put proper guardrails in place, especially to ensure data is trustworthy and unbiased, will become ever more urgent.

The advent of AI-driven anomaly detection

AI can make a big difference here. According to Grand View Research, the machine learning and artificial intelligence segment of the anomaly detection market is expected to grow at a compound annual growth rate (CAGR) of 18.7% from 2023 to 2030.

AI-driven anomaly detection brings huge benefits to businesses. Algorithms can analyse vast datasets to gain deep insights into customer preferences and behaviour, and this knowledge allows organisations to tailor their services with precision, creating unparalleled customer experiences. But in the context of ethical HR, it also brings challenges that businesses need to address.

Anomaly detection, for instance, often involves analysing detailed employee data. Ethical HR practice requires clear communication about what data is collected and how it is used, along with stringent measures to protect employee privacy. In addition, AI systems can perpetuate and amplify biases present in their training data, so ethical HR must ensure that AI-driven systems are transparent and regularly audited for bias.
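To make the auditing point more concrete, here is a minimal sketch in Python of the kind of check such an audit might include, assuming a hypothetical dataset in which "gender" is a protected attribute and "flagged" records whether an anomaly detection tool singled out an employee. It simply compares flag rates across groups against the "four-fifths" ratio often used as a rough benchmark in HR analytics; the column names, data and threshold are illustrative assumptions, not a prescribed method.

import pandas as pd

def flag_rate_audit(df, group_col, outcome_col, threshold=0.8):
    # Mean of a 0/1 outcome per group gives the rate at which each
    # group is flagged by the AI tool.
    rates = df.groupby(group_col)[outcome_col].mean()
    # Ratio of the lowest to the highest group rate; a value below
    # the threshold suggests a disparity worth investigating.
    ratio = rates.min() / rates.max() if rates.max() > 0 else 1.0
    return rates, ratio, ratio < threshold

# Hypothetical example data (columns and values are illustrative only)
df = pd.DataFrame({
    "gender":  ["F", "F", "F", "F", "M", "M", "M", "M"],
    "flagged": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates, ratio, concern = flag_rate_audit(df, "gender", "flagged")
print(rates)
print(f"lowest/highest flag-rate ratio: {ratio:.2f}; potential disparity: {concern}")

In practice, a check like this would be run regularly and read alongside qualitative review; a single ratio is a prompt for investigation, not proof of bias or of fairness.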


The human touch 

Ultimately, with any current AI technology, there must always be a human overseeing its use. AI should never operate entirely on its own; a person must be able to intervene at some stage of the process and decide on the final course of action.

Even here, however, the truth is not clear-cut. We often hear that the data that fuels AI can be biased, but people have their own biases too. That is why the oversight role may be best suited to CHROs, who can work with CEOs and other business leaders to ensure that, while benefiting from AI, their organisation always uses it with due concern for fairness and integrity.

Implementing ethical HR

There is a range of measures CHROs could take to help promote ethical HR across the department and the wider business. These could include emphasising the importance of creating a set of ethical guidelines specifically for AI use in HR. Such guidelines should cover the risks outlined above around data privacy, bias prevention and transparency in AI decision-making processes.

Additionally, there is a need for ongoing training for HR professionals and employees in the ethical use of AI. This includes understanding AI capabilities and limitations, and the importance of maintaining human oversight. It is also crucial to highlight the need for transparent AI systems, where employees understand how and why AI is used in HR processes. Finally, CHROs could propose the establishment of feedback mechanisms through which employees can report concerns or suggestions regarding AI in the workplace.

Understanding the way forward

The integration of AI in HR and broader business processes represents a strategic imperative that demands ethical consideration and human oversight. As AI continues to evolve, its role in business is becoming increasingly significant.

The challenge for CHROs and business leaders is to harness the power of AI responsibly, ensuring it aligns with ethical standards and contributes positively to organisational goals. In doing so, they can leverage AI not just as a tool for innovation and efficiency, but as a cornerstone of ethical and sustainable business practices.

Kate Bishop is chief human resources officer at IFS


Published 14 February 2024