Last month, DeepSeek, an AI start-up based in China, grabbed headlines with claims that its latest large language model, DeepSeek-R1, could perform on par with more expensive, market-leading AI models despite allegedly requiring less than $6 million worth of computing power from older, less powerful chips.  Although some industry observers have raised doubts about the validity of DeepSeek’s claims, its AI model and AI-powered application piqued the curiosity of many, leading the DeepSeek application to become the most downloaded in the United States in late January.  DeepSeek was founded in July 2023 and is owned by High-Flyer, a hedge fund based in Hangzhou, Zhejiang.

The explosive popularity of DeepSeek, coupled with its Chinese ownership, has unsurprisingly raised data security concerns among U.S. federal and state officials.  These concerns echo many of the same considerations that led to a FAR rule prohibiting telecommunications equipment and services from Huawei and certain other Chinese manufacturers.  What is remarkable here is the pace at which officials at different levels of government, including the White House, Congress, federal agencies, and state governments, have taken action in response to DeepSeek and its perceived risks to national security.  Continue Reading U.S. Federal and State Governments Moving Quickly to Restrict Use of DeepSeek

On November 6, 2024, the UK Information Commissioner’s Office (ICO) released its AI tools in recruitment audit outcomes report (“Report”). The Report documents the ICO’s findings from a series of consensual audit engagements conducted with AI tool developers and providers. The goal of these audits was to assess compliance with data protection law, identify risks and areas for improvement, and provide recommendations for AI providers and recruiters. The audits covered sourcing, screening, and selection processes in recruitment, but did not extend to AI tools used to process biometric data or to generative AI. This work follows the publication of the Responsible AI in Recruitment guide by the Department for Science, Innovation and Technology (DSIT) in March 2024. Continue Reading ICO Audit on AI Recruitment Tools

On July 29, 2024, the American Bar Association (“ABA”) Standing Committee on Ethics and Professional Responsibility released its first opinion regarding attorneys’ use of generative artificial intelligence (“GenAI”).  The opinion, Formal Opinion 512 on Generative Artificial Intelligence Tools (the “Opinion”), generally confirms what many have assumed: GenAI can be a valuable tool to enhance efficiency in the practice of law, but attorneys utilizing GenAI must be cognizant of the effect that the tool has on their ethical obligations, including their duties to provide competent legal representation and to protect client information.  Continue Reading ABA Publishes First Opinion on the Use of Generative AI in the Legal Profession

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer includes the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign providing information on the benefits and risks of AI in individuals’ daily lives.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others about how AI and social media may affect student mental health.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

On May 20, 2024, a proposal for a law on artificial intelligence (“AI”) was laid before the Italian Senate.

The proposed law sets out (1) general principles for the development and use of AI systems and models; (2) sectorial provisions, particularly in the healthcare sector and for scientific research for healthcare; (3) rules on the national strategy on AI and governance, including designating the national competent authorities in accordance with the EU AI Act; and (4) amendments to copyright law. 

We provide below an overview of the proposal’s key provisions.  Continue Reading Italy Proposes New Artificial Intelligence Law

With the rapid evolution of artificial intelligence (AI) technology, regulatory frameworks for AI in the Asia–Pacific (APAC) region continue to develop quickly. Policymakers and regulators have been prompted either to review existing regulatory frameworks to ensure they remain effective in addressing emerging risks posed by AI, or to propose new, AI-specific rules or regulations. Overall, there appears to be a trend across the region to promote AI use and development, with most jurisdictions focusing on high-level, principle-based guidance. A few jurisdictions are considering AI-specific regulations, but these efforts are still at an early stage. Further, privacy regulators and some sectoral regulators, such as financial regulators, are starting to play a role in AI governance.

This blog post provides an overview of the various approaches to regulating AI and managing AI-related risks in the APAC region.  Continue Reading Overview of AI Regulatory Landscape in APAC

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity.  As noted below, some of these developments provide industry with the opportunity for participation and comment.  Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – First Quarter 2024

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.  Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence

Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.  Continue Reading Biden Administration Announces Artificial Intelligence Executive Order