Artificial Intelligence (AI)

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases. While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose specific legislative text for broadly regulating AI testing and use across industries.

Continue Reading Bipartisan group of Senators introduce new AI transparency legislation

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing it by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Continue Reading From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organisations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

The Principles build on the existing OECD AI Principles published in May 2019 (see our blog post here), updating them in response to recent developments in advanced AI systems.  They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.

Continue Reading G7 Countries Publish Draft Guiding Principles for Advanced AI Development

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”), thus creating the first AI regulatory body in the EU. AESIA will begin operating in December 2023, in anticipation of the upcoming EU AI Act (for a summary of the AI Act, see our EMEA Tech Regulation Toolkit). In line with its National Artificial Intelligence Strategy, Spain has been playing an active role in the development of AI initiatives, including a pilot for the EU’s first AI Regulatory Sandbox and guidelines on AI transparency.

Continue Reading Spain Creates AI Regulator to Enforce the AI Act

On 9 October 2023, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) and Committee on Legal Affairs (JURI) agreed revised wording to amend the European Commission’s (the “EC”) proposed new Product Liability Directive (the “Directive”). The amendments passed by 33 votes in favour to 2 against. If adopted, the Directive will replace the existing (almost 40-year-old) Directive 85/374/EEC on Liability for Defective Products, which imposes a form of strict liability on product manufacturers for harm caused by their defective products.

Continue Reading EU Legislative Update on the New Product Liability Directive

On 31 August 2023, the UK House of Commons Science, Innovation and Technology Committee (“Committee”) published an interim report (“Report”) evaluating the UK Government’s AI governance proposals and examining different approaches to the regulation of AI systems. As readers of this blog will be aware, in March 2023, the UK Government published a White Paper setting out its “pro-innovation approach to AI regulation”, which would require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, see our blog post here).

The Report recommends that the UK Government introduce a “tightly-focused AI Bill” in the next parliamentary session to “position the UK as an AI governance leader”.

Continue Reading UK Parliament Publishes Interim Report on the UK’s AI Governance Proposals

On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code and translations.  According to the Committee, the inquiry was launched to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models”.

This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here).  UK regulators have already begun implementing this approach.  For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection, as well as on generative AI tools that process personal data (see our blogs here and here for more details).

Continue Reading UK House of Lords Announces Inquiry into Large Language Models

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager, the European Commission’s Executive Vice-President responsible for competition and digital strategy, announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency, and other requirements for companies developing AI systems. Once finalized, the AI Code of Conduct would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to sign up voluntarily.

Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct