Artificial Intelligence (AI)

On July 18, 2024, the President of the European Commission, Ursula von der Leyen, was reconfirmed by the European Parliament for a second five-year term. As part of the process, she delivered a speech before the Parliament, complemented by a 30-page program, which outlines the Commission’s political guidelines and priorities for the next five years. The guidelines introduce a series of forthcoming legislative proposals across many policy areas, including on defence and technology security.

Continue Reading The Future of EU Defence Policy and a Renewed Focus on Technology Security

On July 30, 2024, the European Commission announced the launch of a consultation on trustworthy general-purpose artificial intelligence (“GPAI”) models and an invitation to stakeholders to express their interest in participating in the drawing up of the first GPAI Code of Practice (the “Code”) under the newly passed EU AI Act (see our previous blog here). Once the Code is finalized, GPAI model providers will be able to rely on it voluntarily to demonstrate their compliance with certain obligations in the AI Act.

Continue Reading European Commission Launches Consultation and Call for Expression of Interest on GPAI Code of Practice

With Congress in summer recess and state legislative sessions waning, the Biden Administration continues to implement its October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”).  On July 26, the White House announced a series of federal agency actions under the EO for managing AI safety and security risks, hiring AI talent in the government workforce, promoting AI innovation, and advancing US global AI leadership.  On the same day, the Department of Commerce released new guidance on AI red-team testing, secure AI software development, generative AI risk management, and a plan for promoting and developing global AI standards.  These announcements—which the White House emphasized were on time within the 270-day deadline set by the EO—mark the latest in a series of federal agency activities to implement the EO.

Continue Reading Federal Agencies Continue Implementation of AI Executive Order

On Wednesday, August 7, the Federal Communications Commission (FCC) approved a Notice of Proposed Rulemaking (NPRM) that would amend its rules under the Telephone Consumer Protection Act (TCPA) to incorporate new consent and disclosure requirements for the transmission of AI-generated calls and texts. The NPRM builds on the FCC’s recent Notice of Inquiry (NOI) on the effect of AI on illegal robocalls and texts, which we previously discussed here.

The NPRM seeks comment on new rules that would require a sender to clearly and conspicuously specify in its consent form that the consent extends to AI-generated calls and texts and secure the consumer’s consent for such calls and texts before they could be transmitted. The proposal also would require a sender of AI-generated content to, at the beginning of the call or text, clearly disclose to the called party that AI-generated technology is being used.

Continue Reading FCC Proposes New Consent and Disclosure Rules for AI-Generated Calls and Texts

This is part of an ongoing series of Covington blogs on implementation of Executive Order 14028, “Improving the Nation’s Cybersecurity,” issued by President Biden on May 12, 2021 (the “Cyber EO”).  The first blog summarized the Cyber EO’s key provisions and timelines, and subsequent blogs described the actions taken by various government agencies to implement the Cyber EO from June 2021 through June 2024.  This blog describes key actions taken to implement the Cyber EO, as well as the U.S. National Cybersecurity Strategy, during July 2024.  It also describes key actions taken during July 2024 to implement President Biden’s Executive Order on Artificial Intelligence (the “AI EO”), particularly its provisions that impact cybersecurity, national security, and software supply chain security.

Continue Reading July 2024 Developments Under President Biden’s Cybersecurity Executive Order, National Cybersecurity Strategy, and AI Executive Order

On July 29, 2024, the American Bar Association (“ABA”) Standing Committee on Ethics and Professional Responsibility released its first opinion regarding attorneys’ use of generative artificial intelligence (“GenAI”).  The opinion, Formal Opinion 512 on Generative Artificial Intelligence Tools (the “Opinion”), generally confirms what many have assumed: GenAI can be a valuable tool to enhance efficiency in the practice of law, but attorneys utilizing GenAI must be cognizant of the effect that the tool has on their ethical obligations, including their duties to provide competent legal representation and to protect client information.

Continue Reading ABA Publishes First Opinion on the Use of Generative AI in the Legal Profession

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I.       Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

On Thursday, July 25, the Federal Communications Commission (FCC) released a Notice of Proposed Rulemaking (NPRM) proposing new requirements for radio and television broadcasters and certain other licensees that air political ads containing content created using artificial intelligence (AI).  The NPRM was approved on a 3-2 party-line vote and comes in the wake of an announcement made by FCC Chairwoman Jessica Rosenworcel earlier this summer about the need for such requirements, which we discussed here.

At the core of the NPRM are two proposed requirements.  First, parties subject to the rules would have to announce on-air that a political ad (whether a candidate-sponsored ad or an “issue ad” purchased by a political action committee) was created using AI.  Second, those parties would have to include a note in their online political files for political ads containing AI-generated content disclosing the use of such content.  Additional key features of the NPRM are described below.

Continue Reading FCC Proposes Labeling and Disclosure Rules for AI-Generated Content in Political Ads

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices, and sets out regulations on “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (“GPAI”) models.

The AI Act’s regulations will take effect in different stages.  Rules regarding prohibited practices will apply as of 2 February 2025; obligations on GPAI models will apply as of 2 August 2025; and both transparency obligations and obligations on high-risk AI systems will apply as of 2 August 2026.  That said, there are exceptions for high-risk AI systems and GPAI models already placed on the market.

Continue Reading EU Artificial Intelligence Act Published

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

Continue Reading Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane