Technology

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing it by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Continue Reading From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organisations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

The Principles respond to recent developments in advanced AI systems and build on the existing OECD AI principles published in May 2019 (see our blog post here).  They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.

Continue Reading G7 Countries Publish Draft Guiding Principles for Advanced AI Development

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

On 9 October 2023, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) and Committee on Legal Affairs (JURI) agreed revised wording to amend the European Commission’s (the “EC”) proposed new Product Liability Directive (the “Directive”). The vote passed with 33 votes in favour and 2 against. If adopted, the Directive will replace the existing (almost 40-year-old) Directive 85/374/EEC on Liability for Defective Products, which imposes a form of strict liability on product manufacturers for harm caused by their defective products.

Continue Reading EU Legislative Update on the New Product Liability Directive

This quarterly update summarizes key legislative and regulatory developments in the third quarter of 2023 concerning key technologies and related topics, including Artificial Intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity.

Continue Reading U.S. Tech Legislative & Regulatory Update – Third Quarter 2023

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

We start this series with a look at how the European Union is approaching the governance of AI.

Continue Reading Spotlight Series on Global AI Policy — Part I: European Union

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced a new bipartisan framework for artificial intelligence (“AI”) legislation.  Senator Blumenthal said, “This bipartisan framework is a milestone – the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise and peril AI portends.” He also told CTInsider that he hopes to have a “detailed legislative proposal” ready for Congress by the end of this year.

Continue Reading Senators Release Bipartisan Framework for AI Legislation

On September 6, 2023, U.S. Senator Bill Cassidy, Ranking Member of the Senate Health, Education, Labor and Pensions (HELP) Committee, published a white paper addressing artificial intelligence (AI) and its potential benefits and risks in the workplace, as well as in the health care context, which we discuss here.

The whitepaper notes that employers

Continue Reading Senate Whitepaper Addresses AI in the Workplace

On August 25, 2023, China’s National Information Security Standardization Technical Committee (“TC260”) released the final version of the Practical Guidelines for Cybersecurity Standards – Method for Tagging Content in Generative Artificial Intelligence Services (《网络安全标准实践指南——生成式人工智能服务内容标识方法》) (“Tagging Standard”) (Chinese version available here), following a draft version circulated earlier this month.

Continue Reading Labeling of AI Generated Content: New Guidelines Released in China

Updated August 8, 2023.  Originally posted May 1, 2023.

Last week, comment deadlines were announced for a Federal Communications Commission (“FCC”) Order and Notice of Proposed Rulemaking (“NPRM”) that could have significant compliance implications for all holders of international Section 214 authority (i.e., authorization to provide telecommunications services from points in the U.S. to points abroad).  The rule changes on which the FCC seeks comment are far-reaching and, if adopted as written, could result in significant future compliance burdens, both for entities holding international Section 214 authority and for the parties holding ownership interests in these entities.  Comments on these rule changes are due Thursday, August 31, with reply comments due October 2.

Continue Reading Comments Due August 31 on FCC’s Proposal to Step Up Review of Foreign Ownership in Telecom Carriers and Establish Cybersecurity Requirements