Artificial Intelligence (AI)

On November 16, 2023, the Federal Trade Commission (the “FTC”) announced a competition seeking solutions to protect consumers from harms caused by voice cloning technology. Voice cloning technology can create a nearly identical clone of someone’s voice based on a short audio clip and is becoming more sophisticated as text-to-speech AI advances.

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases. While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose specific legislative text for broadly regulating AI testing and use across industries.

Continue Reading Bipartisan group of Senators introduce new AI transparency legislation

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Continue Reading From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

The District Court for the Northern District of California recently granted, in substantial part, separate motions to dismiss a complaint challenging three defendants’ creation or use of Stable Diffusion, a generative artificial intelligence (“AI”) application used to generate images based on user-supplied instructions.

Continue Reading Motion to Dismiss Granted in Case About The Intersection of Copyright Law and Generative AI

Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

Continue Reading Biden Administration Announces Artificial Intelligence Executive Order

On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organisations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

The Principles build on the existing OECD AI principles published in May 2019 (see our blog post here) in response to recent developments in advanced AI systems.  They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.

Continue Reading G7 Countries Publish Draft Guiding Principles for Advanced AI Development

Yesterday, FCC Chairwoman Jessica Rosenworcel announced that she will circulate a proposed Notice of Inquiry (“NOI”) to her fellow commissioners for consideration.  The NOI would seek to develop a public record on how artificial intelligence capabilities may be affecting the proliferation of illegal robocalls and texts, and what tools may be available to address this challenge.  The FCC’s commissioners are expected to consider whether to adopt the NOI at the agency’s next open meeting on November 15, 2023.

Continue Reading FCC to Consider Inquiry into AI’s Effects on Illegal Robocalls and Texts

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”), thus creating the first AI regulatory body in the EU. AESIA will begin operating in December 2023, in anticipation of the upcoming EU AI Act (for a summary of the AI Act, see our EMEA Tech Regulation Toolkit). In line with its National Artificial Intelligence Strategy, Spain has been playing an active role in the development of AI initiatives, including a pilot for the EU’s first AI Regulatory Sandbox and guidelines on AI transparency.

Continue Reading Spain Creates AI Regulator to Enforce the AI Act

On 9 October 2023, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) and Committee on Legal Affairs (JURI) agreed revised wording to amend the European Commission’s (the “EC”) proposed new Product Liability Directive (the “Directive”). The text was adopted by 33 votes in favour to 2 against. If adopted, the Directive will replace the existing (almost 40-year-old) Directive 85/374/EEC on Liability for Defective Products, which imposes a form of strict liability on product manufacturers for harm caused by their defective products.

Continue Reading EU Legislative Update on the New Product Liability Directive