AI governance: latest update from around the world

The rapid advancement of artificial intelligence (AI) in 2023 has spurred efforts by national governments and global organisations to establish frameworks and regulations to ensure the safe, ethical and responsible development and use of AI technologies. The People Space outlines the most significant developments.
Sian Harrington


“AI could be either a blessing or a curse. The future of AI remains unwritten, and it will depend on the choices we make in the current generation.” So says Daron Acemoglu, Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology.

It will also depend on the strength of the regulation and frameworks implemented by national governments and international organisations. And while legislation has long lagged behind the pace of AI development, 2023 has seen some major strides forward. Here are the most significant:

  • United Nations: UN Secretary-General António Guterres launched a High-level Advisory Body on AI on 26 October 2023. The body brings together experts with deep experience across government, the private sector, technology, civil society and academia to support the UN’s efforts to ensure AI is used for the greater good of humanity. Members will examine the risks, opportunities and international governance of these technologies.
     
  • World Health Organization: The WHO released a publication in October 2023 listing key regulatory considerations on AI for health. The publication emphasises the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders including developers, regulators, manufacturers, health workers and patients.
     
  • European Union: The EU AI Act, the world’s first comprehensive AI law, proposed in 2021, was expected to be finalised by the end of 2023. However, France, Germany and Italy disagree on the proposed approach to foundation models, threatening to derail the act. They argue that a code of conduct binding initially only on major AI providers, primarily from the United States, could reduce trust in, and therefore customer numbers for, smaller European providers, even though those providers would initially enjoy a competitive advantage. Many hope the act will receive formal agreement on 6 December, but if a deal isn’t reached it will have to wait until a new EU parliament and commission are elected in 2024. The act aims to establish a risk-based approach to AI regulation, sorting AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk. High-risk AI systems, such as those used in healthcare or autonomous vehicles, would be subject to strict requirements, including human oversight and explainability.
     
  • United States: In October President Biden signed an Executive Order requiring more transparency from AI companies about how their models work. The White House order will establish a raft of new standards, most notably for labelling AI-generated content: AI companies will use the guidance to develop labelling and watermarking tools that the White House hopes federal agencies will adopt. The guidance builds on the National Institute of Standards and Technology’s AI standards and guidelines. The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership around the world. However, it is not specific about how the rules will be enforced, and because it is not legislation it is vulnerable to being overturned.
     
  • United Kingdom: The UK government released its white paper, A Pro-innovation Approach to AI Regulation, in March 2023, outlining five key principles for AI: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles aim to guide the UK’s approach, fostering a pro-innovation environment for AI development while addressing potential risks. Regulators will issue practical guidance to organisations, as well as other tools and resources such as risk assessment templates, setting out how to implement the principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently. Prime Minister Rishi Sunak hosted the first AI Safety Summit in November, culminating in 28 countries, including the US and China, plus the EU signing the Bletchley Declaration, an agreement to work together to ensure AI is designed and deployed responsibly. Despite this, Jonathan Camrose, the UK’s first minister for AI and intellectual property, said at a Financial Times conference that there would be no UK law on AI “in the short term” because the government was concerned that heavy-handed regulation could curb industry growth.

The UK’s trade union body, the TUC, launched an AI taskforce in September 2023, calling for “urgent” new legislation to safeguard workers’ rights and ensure AI “benefits all”. Members of the taskforce include Tech UK, the Chartered Institute of Personnel and Development (CIPD), the University of Oxford, the British Computer Society, the CWU, GMB, USDAW, Community, Prospect and the Ada Lovelace Institute. The taskforce aims to publish an expert-drafted AI and Employment Bill early in 2024 and will lobby to have it incorporated into UK law.

  • Canada: The Canadian government’s proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in 2022, takes a risk-based approach to AI regulation. It would set the foundation for the responsible design, development and deployment of AI systems that impact the lives of Canadians, ensure that AI systems deployed in Canada are safe and non-discriminatory, and hold businesses accountable for how they develop and use these technologies. The act is still under review but is expected to be finalised in 2024.
    In September 2023, Minister of Innovation, Science and Industry François-Philippe Champagne announced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The code provides Canadian companies with common standards and enables them to demonstrate, voluntarily, that they are developing and using generative AI systems responsibly until formal regulation takes effect. It aims to strengthen Canadians’ confidence in these systems.
     
  • China: China is at the forefront of AI regulation, having implemented some of the world’s earliest and most detailed rules governing the technology. These cover a wide range of AI applications, including recommendation algorithms, synthetically generated images and chatbots, and China’s governance framework is expected to have a significant impact on the development and deployment of AI both within China and internationally. In April 2023 the Chinese government released the Trial Measures for Ethical Review of Science and Technology Activities (Draft Ethical Review Measure), focusing on the ethical review of science and technology activities that carry ethical risks, such as the research and development of AI technologies. On 15 August the Interim Measures for the Administration of Generative Artificial Intelligence Services took effect, stressing that development and security be given equal attention and that the promotion of innovation be combined with lawful regulation to encourage the innovative development of generative AI. China already has an Algorithm Recommendation Regulation and a Deep Synthesis Regulation in force; these, together with the Draft Ethical Review Measure (when enacted) and the generative AI measures, combine to govern AI-related services and products in China while work continues on a comprehensive AI law. Notably, China requires that AI reflect core socialist values.
     
  • Japan: The Japanese government released a draft of its Basic AI Strategy 2023 in October. The draft outlines 10 basic rules for AI-related businesses, such as ensuring fairness and transparency, protecting human rights and preventing personal information from being passed to third parties without an individual’s permission. To date Japan has taken a soft approach to AI regulation (as has Israel), choosing to wait and see how the technology develops rather than implementing strict rules that could stifle innovation. Initially, AI developers in Japan had to rely on existing laws, such as those on data protection, as guidelines. In 2018, for instance, Japanese lawmakers revised the country’s Copyright Act to allow copyrighted content to be used for data analysis; lawmakers have since clarified that the revision also applies to AI training data, paving the way for AI companies to train their algorithms on other companies’ intellectual property. This approach reflects Japan’s desire to foster a vibrant AI ecosystem while ensuring that AI is used responsibly and ethically.
     

This is just a snapshot of recent regulatory developments around AI. Countries around the world, from Brazil to Australia, are reviewing their approaches to law in this area, and The People Space will be keeping a close eye on this in its FutureofWork4HR platform. But as Professor Acemoglu notes, the current development path is centred on automation and the control of information by a few large players. “I do not believe that we can escape this path unless voices from workers, civil society and the developing world are heard.” We agree, and encourage HR leaders and organisations to get involved in helping to shape frameworks, both internally and externally.

Published 22 November 2023