On January 12, 2024, California State Assembly Member Marc Berman introduced a bill that would impose criminal penalties for the creation, distribution, and possession of child sexual abuse material (CSAM) created using artificial intelligence (AI).  The bill would expand California’s definition of “obscene matter” to include “representations of real or fictitious persons generated through the use of artificially intelligent software or computer-generated means, who are, or who a reasonable person would regard as being, real persons under 18 years of age,” including those “engaging in or simulating sexual conduct.”

In a press release, Assembly Member Berman stated he expects the bill to reach the Assembly floor this March.

You can find a summary of key themes in AI bills introduced by state legislatures in the past year in our blog post here.

On January 16, the attorneys general of 25 states – including California, Illinois, and Washington – and the District of Columbia filed reply comments in response to the Federal Communications Commission’s (FCC) November Notice of Inquiry on the implications of artificial intelligence (AI) technology for efforts to mitigate robocalls and robotexts.

The Telephone Consumer Protection Act (TCPA) limits the conditions under which a person may lawfully make a telephone call using “an artificial or prerecorded voice.”  The reply comments call on the FCC to take the position that “any type of AI technology that generates a human voice should be considered an ‘artificial voice’ for purposes of the [TCPA].”  They further state that a more permissive approach would “act as a ‘stamp of approval’ for unscrupulous businesses seeking to employ AI technologies to inundate consumers with unwanted robocalls for which they did not provide consent[], all based on the argument that the business’s advanced AI technology acts as a functional equivalent of a live agent.”

On January 31, FCC Chairwoman Jessica Rosenworcel announced a proposal to “recognize calls made with AI-generated voices [as] ‘artificial’ voices under the [TCPA].”  The Chairwoman explained that the proposed approach would offer “State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers.”

On January 30, 2024, the U.S. Office of Management and Budget (OMB) published a request for information (RFI) soliciting public input on how agencies can be more effective in their use of privacy impact assessments (PIAs) to mitigate privacy risks, including those “exacerbated by artificial intelligence (AI).”  The RFI notes that federal agencies may develop or procure AI-enabled systems from the private sector that are developed or tested using personally identifiable information (PII), or systems that process or use PII in their operation.  Among other things, the RFI seeks comment on the risks “specific to the training, evaluation, or use of AI and AI-enabled systems” that agencies should consider in conducting PIAs of those systems.

Comments will be accepted through April 1, 2024.  The opportunity to comment may be of particular interest to companies that provide software or data processing services to the government, as revisions to PIA procedures could have implications for how federal agencies evaluate private-sector products and services for governmental use.

OMB published the RFI in accordance with its responsibilities under the Biden administration’s October 2023 Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.  For more information on the Executive Order, see our summary of the Order and overview of the implementation deadlines we expect in the first quarter of 2024.

On January 29, 2024, the Department of Commerce (“Department”) published a proposed rule (“Proposed Rule”) to require providers and foreign resellers of U.S. Infrastructure-as-a-Service (“IaaS”) products to (i) verify the identity of their foreign customers and (ii) notify the Department when a foreign person transacts with that provider or reseller to train a large artificial intelligence (“AI”) model with potential capabilities that could be used in malicious cyber-enabled activity.  The Proposed Rule also contemplates that the Department may impose special measures to be undertaken by U.S. IaaS providers to deter foreign malicious cyber actors’ use of U.S. IaaS products.  The accompanying request for comments has a deadline of April 29, 2024.

Continue Reading Department of Commerce Issues Proposed Rule to Regulate Infrastructure-as-a-Service Providers and Resellers

U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level.  Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI.  This blog post summarizes key themes in state AI bills introduced in the past year.  Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.

Continue Reading Trends in AI:  U.S. State Legislative Developments

From February 17, 2024, the Digital Services Act (“DSA”) will apply to providers of intermediary services (e.g., cloud services, file-sharing services, search engines, social networks, and online marketplaces). These entities will be required to comply with a number of obligations, including implementing notice-and-action mechanisms, complying with detailed rules on terms and conditions, and publishing transparency reports on content moderation practices. For more information on the DSA, see our previous blog posts here and here.

As part of its powers conferred under the DSA, the European Commission is empowered to adopt delegated and implementing acts on certain aspects of the implementation and enforcement of the DSA. In 2023, the Commission adopted one delegated act on supervisory fees to be paid by very large online platforms and very large online search engines (“VLOPs” and “VLOSEs,” respectively), and one implementing act on procedural matters relating to the Commission’s enforcement powers. The Commission has proposed several other delegated and implementing acts, which we set out below. The consultation periods for these draft acts have now passed, and we anticipate that the acts will be adopted in the coming months.

Continue Reading Draft Delegated and Implementing Acts Pursuant to the Digital Services Act

On January 24, 2024, the U.S. National Science Foundation (“NSF”) announced the launch of the National Artificial Intelligence Research Resource (“NAIRR”) pilot, a two-year initiative to develop a shared national research infrastructure for responsible AI discovery and innovation. The launch fulfills a directive in President Biden’s recent Executive Order on AI safety and security, which required the NSF to launch a NAIRR pilot within 90 days.

The NAIRR pilot will broadly support AI-related research, with an initial focus on applying AI to societal challenges, including human health and environmental and infrastructure sustainability.  To support researchers and educators, the NAIRR pilot also will compile AI resources aligned with the pilot’s goals, such as pre-trained models, responsible AI toolkits, and industry-specific training data sets. The NSF will partner with 10 other federal agencies as well as 25 private sector, nonprofit, and philanthropic organizations to implement the NAIRR pilot and improve its ecosystem over time.

The NSF has stated that it welcomes additional partners and will release a broader call for proposals from the research community in spring 2024.

Opt-out collective actions (i.e., US-style class actions) can currently be brought in the UK only as competition law claims.  Periodic proposals to legislate to expand this regime to consumer law claims have so far faltered.  However, the issue is now back on the Parliamentary agenda.  Several members of the House of Lords have indicated their support for expanding the regime to allow consumers and small businesses to bring opt-out collective actions for breaches of consumer law, and potentially on other bases.

If implemented, this expansion would be very significant and would open the door to many new types of class actions in the UK.  Tech companies are already prime targets as defendants in competition-related opt-out class actions.  Expanding the regime to cover breaches of consumer law, as well as competition law, would further increase their exposure.

With limited time remaining for such legislation to pass before the UK Parliament is dissolved ahead of the upcoming general election, this may well be an issue for the next Parliament.  It will therefore be important to watch what the UK’s main parties say on this issue – including any manifesto commitments – in the run-up to the election.

Continue Reading UK Opt-Out Class Actions for Non-Competition Claims Back on Parliamentary Agenda

On 15 January 2024, the UK’s Information Commissioner’s Office (“ICO”) announced the launch of a consultation series (“Consultation”) on how elements of data protection law apply to the development and use of generative AI (“GenAI”). For the purposes of the Consultation, GenAI refers to “AI models that can create new content e.g., text, computer code, audio, music, images, and videos”.

As part of the Consultation, the ICO will publish a series of chapters over the coming months outlining its thinking on how the UK GDPR and Part 2 of the Data Protection Act 2018 apply to the development and use of GenAI. The first chapter, published in tandem with the Consultation’s announcement, covers the lawful basis, under UK data protection law, for web scraping of personal data to train GenAI models. Interested stakeholders are invited to provide feedback to the ICO by 1 March 2024.

Continue Reading ICO Launches Consultation Series on Generative AI

On January 9, the FTC published a blog post discussing privacy and confidentiality obligations for companies that provide artificial intelligence (“AI”) services.  The FTC described “model-as-a-service” companies as those that develop, host, and provide pre-trained AI models to users and businesses through end-user interfaces or application programming interfaces (“APIs”).  According to the FTC, when model-as-a-service companies misrepresent how customer data is used, omit material facts related to the use and collection of customer data, or adopt data practices that harm competition, they may be exposed to enforcement action. 

  • Misrepresentation of Data Practices.  The FTC stated that model-as-a-service companies have an obligation to abide by their privacy commitments to users and customers, including promises not to use customer data for training or updating AI models, regardless of how or where those commitments are made.
  • Failure to Disclose Data Practices.  The FTC added that model-as-a-service companies may not omit material facts that would affect a customer’s decision to purchase their services, including facts about the company’s collection and use of customer data.
  • Harm to Competition from Data Practices.  Finally, the FTC stated that misrepresentations, material omissions, and misuse related to a model-as-a-service company’s data practices could undermine competition.
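
For readers less familiar with the model-as-a-service arrangement the FTC describes, the minimal Python sketch below illustrates how a customer might call a hosted, pre-trained model through a provider’s API. Everything in it is hypothetical: the endpoint, the parameter names, and in particular the “allow_training_use” flag are invented for this illustration, and stand in for whatever data-use controls a real provider documents in its own API and terms of service.

```python
import requests

# Hypothetical model-as-a-service endpoint; not a real provider's API.
API_URL = "https://api.example-model-host.com/v1/generate"
API_KEY = "YOUR_API_KEY"  # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Summarize the key terms of this services agreement.",
        # An invented customer-facing control of the kind at issue in the
        # FTC's post: whether the provider may retain this input to train
        # or update its models. A provider that offers this control in its
        # terms but ignores it in practice would be making the sort of
        # misrepresentation the FTC warns about.
        "allow_training_use": False,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["output"])  # hypothetical response field
```

Whether and how such a control exists varies by provider; the FTC’s point is that once a commitment of this kind is made – in an API parameter, a privacy policy, or marketing copy – the company must honor it.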

The FTC’s blog post is not legally binding, but it is another example of the FTC’s efforts to influence the AI industry and reiterates that AI is an enforcement priority for the agency.