Top 7 insights from Stanford’s AI Index 2024 every HR and people leader needs to know

6 minute read

Discover the top 7 insights from Stanford’s AI Index 2024 that every HR and people leader needs to know. Stay ahead in the future of work with key takeaways on AI adoption, performance and impact on business processes.

Sian Harrington


Today AI systems routinely exceed human performance on standard benchmarks, say Ray Perrault and Jack Clark, co-directors of the AI Index. But, they warn: “AI technology still has significant problems. It cannot reliably deal with facts, perform complex reasoning or explain its conclusions.”

Each year Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) publishes the AI Index Report. The study provides invaluable insights into the progress and implications of artificial intelligence across various sectors, from the latest developments in technology and software, and investment trends, to regulatory environments and consumer perceptions.

At 500 pages it’s a comprehensive document, but we have done the reading for you and pulled out the seven most important takeaways for HR and people leaders.

1. Uptake of AI in organisations today

According to McKinsey’s The State of AI in 2023: Generative AI’s Breakout Year, 55% of organisations are now using AI in at least one business unit or function, up from 50% the previous year and 20% in 2017, indicating rapid adoption of AI technologies in business processes.

The most commonly adopted AI use case by function in 2023 was contact-centre automation (26%), followed by personalisation (23%), customer acquisition (22%) and AI-based enhancements of products (22%). 

Across all industries natural language text understanding, robotic process automation and virtual agents are the most embedded AI technologies. 

When it comes to generative AI (large language models like ChatGPT), marketing and sales lead the way. The most frequent application of generative AI overall is generating initial drafts of text documents (9%), followed by personalised marketing (8%), summarising text documents (8%) and creating images and/or videos (8%).

Graphic of Most commonly adopted AI use cases by function, 2023

2. AI performance vs human performance

AI outperforms humans in certain benchmarks like image classification, visual reasoning and understanding English but continues to trail humans when it comes to more complex tasks. These include competition-level mathematics, visual commonsense reasoning and planning. The report specifically highlights a study assessing GPT-4's abstract reasoning capabilities. It shows that while AI has made impressive strides, it still falls short of human performance in general abstract reasoning skills. While humans score 95% on the benchmark, the best GPT-4 system only scores 69%.

The capacity of AI for moral reasoning was tested using a new Stanford dataset of stories of human actions, with model judgements measured against human moral judgements. While no model matched human moral systems, newer, larger models such as GPT-4 and Claude show greater alignment with human moral sentiments than smaller models, suggesting, say the authors, that as AI models scale they are gradually becoming more morally aligned with humans.

Integrating AI technologies with robotic systems has enhanced the capabilities of these systems to interact more effectively with humans and the real world. According to the report authors: “The fusion of language modeling with robotics has given rise to more flexible robotic systems. Beyond their improved robotic capabilities these models can ask questions, which marks a significant step toward robots that can interact more effectively with the real world."

Graphic of Select AI Index Technical Performance Benchmarks vs Human Performance
3. Productivity

AI tools have been shown to help workers complete tasks more quickly and with better outcomes, contributing positively to overall productivity. “Over the last five years the growing integration of AI into the economy has sparked hopes of boosted productivity. However, finding reliable data confirming AI’s impact on productivity has been difficult because AI integration has historically been low. In 2023, numerous studies rigorously examined AI’s productivity impacts, offering more conclusive evidence on the topic,” say the researchers.

Completing tasks more quickly and producing higher-quality work: A meta-review by Microsoft found that users of Copilot, its large language model-based productivity tool, completed tasks in 26% to 73% less time than counterparts without AI access. A Harvard Business School study revealed that consultants with access to GPT-4 increased their productivity on a selection of consulting tasks by 12.2%, their speed by 25.1% and their quality by 40% compared with a control group without AI access. Research by the National Bureau of Economic Research showed that call-centre agents using AI handled 14.2% more calls per hour than those not using AI.

Bridging the performance gap: AI not only boosts productivity but also helps narrow the performance gap between low- and high-skilled workers, democratising the benefits of technology across skill levels. In the Harvard Business School study, both groups of consultants saw performance gains after adopting AI, with notably larger gains for lower-skilled consultants: participants in the bottom half improved by 43%, while those in the top half improved by 16.5%.

But while the positive impacts on productivity are clear, there is also a caution against overreliance on AI, which can diminish performance if not managed properly. For example, a study of recruitment professionals reviewing CVs found that receiving any AI assistance improved task accuracy by 0.6 points compared with receiving none. Yet recruiters using ‘good AI’ – believed to be high-performing – actually performed worse than those who received ‘bad AI’. “The study theorizes that recruiters using ‘good AI’ became complacent, overly trusting the AI’s results, unlike those using ‘bad AI,’ who were more vigilant in scrutinizing AI output,” say Stanford’s researchers.

Graphic of Comparison of AI work performance effect by worker skill category

4. Privacy and governance

There is a significant lack of standardisation in responsible AI reporting among leading developers, which complicates systematic risk assessment and comparison of AI models. Developers including OpenAI, Google and Anthropic primarily test their models against different responsible AI benchmarks, note the authors. Small wonder, then, that concerns about the privacy, security and reliability of AI are taking precedence. Companies are starting to address these concerns, although global mitigation efforts vary in effectiveness.

Indeed, the report underscores the challenges in meeting these concerns. In the Global State of Responsible AI Survey, conducted by researchers from Stanford University and Accenture, 51% of organisations said that privacy and data governance-related risks are relevant to their AI adoption strategy, yet less than 0.6% of companies reported they had fully operationalised the six data governance mitigations presented, which include steps such as securing consent for data use and conducting regular audits and updates to maintain data relevance. Nine in 10 companies self-reported that they had operationalised at least one measure, meaning 10% had yet to fully operationalise any of them.

Meanwhile, although 44% of surveyed organisations indicated that transparency and explainability are relevant concerns, when it came to adopting four possible measures just 8% of companies across all regions and industries had fully implemented more than half of them, while 12% had not fully operationalised any.

Efforts to mitigate AI hallucinations (output errors) and cybersecurity risks don’t fare much better.

Graphic of Agreement with security statements

5. Diversity and inclusion

AI has the potential to either mitigate or exacerbate biases in hiring and other HR processes, depending on how it's deployed and controlled. 

The survey asked respondents about their efforts to mitigate bias and enhance fairness and diversity in AI model development, deployment and use, providing them with five possible measures to implement. Fairness is defined as creating algorithms that are equitable, avoiding bias or discrimination, and considering the diverse needs and circumstances of all stakeholders, thereby aligning with broader societal standards of equity.

While most companies have implemented at least one measure to ensure fairness, comprehensive integration of these measures remains a challenge, the report says. This indicates the ongoing need for improvements in how AI tools are designed and utilised to support diversity and inclusion. The global average for adopted fairness measures stands at 1.97 out of five measures.

There are notable differences in how regions address AI-related fairness risks, with European organisations reporting higher relevance of these risks (34%) compared to North American organisations (20%). 

By standardising procedures and reducing human biases, AI can play a crucial role in promoting inclusion, provided that the AI systems themselves are adequately audited and adjusted for biases.

Graphic of Adoption of AI-related fairness measures by industry

6. Impact on HR 

McKinsey finds that 42% of surveyed organisations report cost reductions from implementing AI and 59% report revenue increases. There was a notable 10 percentage point year-on-year rise in respondents reporting decreased costs, suggesting that AI is driving significant business efficiency gains.

So what about in HR? According to McKinsey, 9% of organisations have adopted AI in HR, while 3% are using generative AI in HR. The tech, media and telecoms industry has the highest adoption of AI in HR, at 14%, while healthcare/pharma and medical products trails at 5%. However, across HR as a whole adoption declined by 2% between 2022 and 2023, though it jumped 8% in tech/media/telecoms and financial services. The decline was most marked in healthcare (-10%) and consumer goods/retail (-7%).

This is perhaps surprising given the cost and revenue benefits of AI adoption in HR. Four in 10 respondents have seen cost savings, while 60% have seen revenue increases. 

However, research suggests this may soon change. According to McKinsey research, HR is especially likely to face decreasing employment in the next three years as a result of generative AI. When asked about the anticipated effect of generative AI on employee numbers by business function, 41% of respondents predicted a decrease for HR – third only to supply chain management (45%) and service operations (54%). On the other side, 17% are more hopeful, anticipating a rise in HR employee numbers.

Graphic of Cost decrease and revenue increase from AI adoption by function, 2022

7. Employee fears

Over the last year the proportion of people who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. There is growing nervousness towards AI, with a 13 percentage point increase from 2022 in people expressing concern about AI products and services. Employees are particularly worried about job security, with a notable proportion fearing displacement by AI technologies. According to Ipsos, only 37% believe AI will improve their job, 34% that it will improve the economy and 32% that it will enhance the job market.

In the US, Pew Research finds 52% of Americans are more concerned than excited about AI’s implications – a big rise from the 38% who expressed this in 2022.

The perception of AI and its impact varies significantly across different countries, with some populations more optimistic than others. Japanese, Swedes and Americans are generally pessimistic about AI’s potential to improve livelihoods, whereas Brazilians, Indonesians and Mexicans are more optimistic.

Younger generations such as Gen Z (66%) and millennials are more inclined to agree that AI will change how they do their jobs than older generations such as Gen X and baby boomers (46%). Those with higher incomes, more education and decision-making roles are more likely to foresee AI affecting their current employment. The good news is that 54% of global respondents agree that AI will improve the efficiency of their tasks.

Graphic of Global opinions on the impact of AI on current jobs by demographic group, 2023

The report authors conclude that AI faces two interrelated futures. “First, technology continues to improve and is increasingly used, having major consequences for productivity and employment. It can be put to both good and bad uses. In the second future, the adoption of AI is constrained by the limitations of the technology.”

Whichever future unfolds, HR professionals and business leaders will need to integrate AI into the workplace with care. Understanding and addressing employee fears is crucial in fostering an environment where AI is viewed as a supportive tool rather than a threat. Effective communication, transparent policies regarding AI use and initiatives aimed at reskilling and reassuring employees can help mitigate these concerns and enhance the acceptance and effectiveness of AI technologies in organisational settings.

The AI Index 2024 Annual Report, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024. Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark. 

Published 15 May 2024