Artificial Intelligence (AI)

On June 22, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law.  The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado AI Act.  Unlike the…

Continue Reading Texas Enacts AI Consumer Protection Law

On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March.  The report describes “frontier models” as the “most capable” subset of foundation models, or a class of general-purpose technologies…

Continue Reading California Frontier AI Working Group Issues Final Report on Frontier Model Regulation

On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul (D), the RAISE Act would…

Continue Reading New York Legislature Passes Sweeping AI Safety Legislation

The European Commission has opened a consultation to gather feedback on forthcoming guidelines “on implementing the AI Act’s rules on high-risk AI systems”.  (For more on the definition of a high-risk AI system, see our blog post here.)  The consultation is open until July 18, 2025, following which the Commission will publish a summary of the consultation results through the AI Office.

For context, the AI Act contemplates two categories of “high-risk” AI systems:

  1. Products—or safety components of products—covered by the EU product safety legislation identified in Annex I, where the product or safety component is subject to a third-party conformity assessment (Art. 6(1)); and
  2. Certain systems that fall within eight categories of use cases identified in Annex III, namely, (1) biometrics; (2) critical infrastructure; (3) education and vocational training; (4) employment, workers’ management and access to self-employment; (5) access to and enjoyment of essential private services and essential public services and benefits; (6) law enforcement; (7) migration, asylum and border control management; and (8) administration of justice and democratic processes (Art. 6(2)). Only certain use cases within each category are considered high-risk—not the entire category itself. In addition, with one exception, the AI systems must be “intended to be used” for the particular use case, e.g., “AI systems intended to be used for emotion recognition”—a use case within biometrics (category one) (id., emphasis added).

Continue Reading The European Commission opens public consultation on high-risk AI systems

In a surprise move, Senate Parliamentarian Elizabeth MacDonough ruled that a proposed moratorium on state and local AI laws satisfies the Byrd Rule, the requirement that reconciliation bills contain only budgetary provisions and omit “extraneous” policy language.  While MacDonough’s determination allows the Senate Commerce Committee’s version of the moratorium to remain in the bill, its…

Continue Reading Senate Parliamentarian Clears Revised State AI Enforcement Moratorium for Reconciliation Bill, But Passage Remains in Doubt

This year, state lawmakers have introduced over a dozen bills to regulate “surveillance,” “personalized,” or “dynamic” pricing.  Although many of these proposals have failed as 2025 state legislative sessions come to a close, lawmakers in New York, California, and a handful of other states are moving forward with a range of different approaches.  These proposals…

Continue Reading State Legislatures Advance Surveillance Pricing Regulations

On June 3, 2025, the OECD introduced a new framework called AI Capability Indicators that compares AI capabilities to human abilities. The framework is intended to help policymakers, including non-technical audiences, assess the degree of advancement of different AI capabilities and craft informed policy responses to new AI developments. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta framework.

Continue Reading OECD Introduces AI Capability Indicators for Policymakers

Last month, a Georgia state court granted OpenAI’s motion for summary judgment, dismissing a defamation suit brought by a nationally syndicated radio show host.

In the suit, Mark Walters v. OpenAI LLC, 23-A-04860-2 (Sup. Ct. Gwinnett Cty, GA), the plaintiff alleged that the ChatGPT tool, developed by OpenAI, defamed him when it presented…

Continue Reading Georgia Court Dismisses Defamation Suit Against AI Developer OpenAI

EU lawmakers are reportedly considering a delay in the enforcement of certain provisions of the EU Artificial Intelligence Act (AI Act). While the AI Act formally entered into force on 1 August 2024, its obligations apply on a rolling basis. Requirements related to AI literacy and the prohibition of specific AI practices have been applicable since 2 February 2025. Additional obligations are scheduled to come into effect on 2 August 2025 (general-purpose AI (GPAI) model obligations), 2 August 2026 (transparency obligations and obligations on Annex III high-risk AI systems), and 2 August 2027 (obligations on Annex I high-risk AI systems). Whether and when these future obligations will be enforced now appears uncertain.

Continue Reading European Commission hints at delaying the AI Act

In a new post on the Inside Privacy blog, our colleagues discuss key consumer protection considerations for companies deploying AI chatbots in the EU market.

Continue Reading Digital Fairness Act Series: Topic 2 – Transparency and Disclosure Obligations for AI Chatbots in Consumer Interactions