This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I.     Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan interest in passing federal legislation related to AI.  While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.

  • Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks. 
    • In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV).  The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations. 
    • In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July.  Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.  
    • In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ).  The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.  
    • In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended.  Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements.  The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
  • Senate Homeland Security and Governmental Affairs Committee:  In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495).  Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Boards to ensure that federal agencies benefit from advancements in AI.
  • National Defense Authorization Act for Fiscal Year 2025:  In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”).  The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA.  The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems.  The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI.  The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.   

Federal Executive and Regulatory Developments

The White House and federal regulators continued to pursue their AI objectives, relying on existing legal authority to support their activities.  With the upcoming change in administration, new executive branch leadership will have the opportunity to revisit and, if they choose, alter the trajectory of the federal government’s regulation of AI.

  • The White House:  The White House announced, among other AI-related developments, the launch of a new Task Force on AI Datacenter Infrastructure to coordinate policy across the government. The interagency Task Force will be led by the National Economic Council, National Security Council, and White House Deputy Chief of Staff to provide streamlined coordination on policies to advance datacenter development and operations in line with economic, national security, and environmental goals.
  • Federal Communications Commission (“FCC”):  FCC Chairwoman Jessica Rosenworcel announced that she had sent letters to nine telecommunications companies seeking answers about the steps they are taking to prevent future fraudulent robocalls that use AI for political purposes. In addition, the FCC published a Notice of Proposed Rulemaking (“NPRM”) that would amend its rules under the Telephone Consumer Protection Act (“TCPA”) to incorporate new consent and disclosure requirements for the transmission of AI-generated calls and texts. The public comment period ended on October 25, 2024.
  • Federal Trade Commission (“FTC”):  The FTC announced that it has issued orders to eight companies that offer surveillance pricing products and services that incorporate data about consumers’ characteristics and behavior. The orders are aimed at helping the FTC better understand the opaque market for products by third-party intermediaries that claim to use advanced algorithms, AI, and other technologies, along with personal information about consumers. In addition, the FTC announced “Operation AI Comply,” an enforcement sweep involving actions against five companies that rely on AI “as a way to supercharge deceptive or unfair conduct that harms consumers.” 
  • U.S. Patent and Trademark Office (“USPTO”):  The USPTO issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI. The guidance provides background on the USPTO’s efforts related to AI and subject matter eligibility, an overview of the USPTO’s patent subject matter eligibility guidance, and additional discussion on certain areas of the guidance that are particularly relevant to AI inventions, including discussions of Federal Circuit decisions on subject matter eligibility. The guidance took effect on July 17, 2024.
  • U.S. Copyright Office:  The U.S. Copyright Office released Part 1 of its report, Copyright and Artificial Intelligence, on legal and policy issues related to copyright and AI. Part 1 focuses on the topic of digital replicas, which it defines as “video[s], image[s], or audio recording[s] that ha[ve] been digitally created or manipulated to realistically but falsely depict an individual.”  The report recommends that Congress enact a federal digital replica law to protect individuals from the knowing distribution of unauthorized digital replicas.
  • Department of Homeland Security (“DHS”):  DHS Secretary Alejandro N. Mayorkas and Chief AI Officer Eric Hysen announced the first ten members of the “AI Corps,” DHS’s first-ever sprint to recruit 50 AI technology experts.  The new hires are intended to play pivotal roles in DHS efforts to responsibly leverage AI across strategic mission areas.  The ten inaugural AI Corps hires are technology experts with backgrounds in AI and machine learning (“ML”), data science, data engineering, program and product management, software engineering, cybersecurity, and the safe and responsible use of these technologies.

State Legislative Developments

States continued to pursue and enact new laws affecting the development, distribution and/or use of AI, expanding the legal patchwork of AI laws across the United States.

  • Algorithmic Discrimination & Consumer Protection:  Illinois enacted HB 3773, which amends the Illinois Human Rights Act to require employers to notify employees if they are using AI for employment-related decisions.  HB 3773 also prohibits the use of AI systems for employment decisions if the use results in discriminatory effects on the basis of protected classes or if the AI system uses zip codes as a proxy for protected classes. 

Following the enactment of the Colorado AI Act (SB 205) in May, Colorado Attorney General Phil Weiser issued a request for public input on a list of pre-rulemaking considerations to inform future rulemaking and the ongoing effort, announced by state officials in June, to revise the law.  The Attorney General is specifically seeking comment on SB 205’s developer, deployer, and “high-risk AI” definitions; documentation and impact assessment requirements; and consistency with laws in other jurisdictions, among other topics.  Informal input on rulemaking and revisions must be submitted through an online comment portal by December 30, 2024, and will be posted on the Attorney General’s comment website after receipt.

  • Election-Related Synthetic Content Laws:  Hawaii enacted SB 2687, prohibiting the distribution of materially deceptive AI-generated political advertisements during election years.  California enacted AB 2839, prohibiting the distribution of AI-generated election communications that depict election candidates, officials, or voting equipment within six months of an election, and New Hampshire enacted HB 1596, prohibiting the distribution of deepfakes of election candidates, officials, or parties within three months of an election.  California also enacted AB 2355, which requires AI disclaimers on political advertisements with content generated or substantially altered by AI.  Finally, California enacted the Defending Democracy from Deepfake Deception Act (AB 2655), which requires online platforms to block deceptive AI-generated election content within six months of an election, label deceptive AI-generated election content within one year of an election, and provide users with mechanisms to report deceptive AI-generated election content.
  • AI-Generated CSAM & Intimate Imagery Laws:  North Carolina enacted HB 591, prohibiting the disclosure or threatened disclosure of AI-generated intimate imagery with intent to harm the person depicted and the creation or distribution of AI-generated CSAM.  New Hampshire enacted HB 1432, which prohibits the creation or distribution of deepfakes with intent to cause financial or reputational harm. California enacted three laws regulating AI-generated CSAM or intimate imagery: SB 926, which prohibits the creation and distribution of digital or computer-generated intimate imagery that causes severe emotional distress, AB 1831, which prohibits the possession, distribution, or creation of AI-generated CSAM, and SB 981, which requires online platforms to remove, and provide mechanisms for users to report, AI-generated sexually explicit deepfakes on the platform.
  • Laws Regulating AI-Generated Impersonations & Digital Replicas:  Illinois and California each enacted laws regulating the creation or use of AI-generated digital replicas.  Illinois HB 4875 amends the Illinois Right of Publicity Act to prohibit the distribution of unauthorized digital replicas, and California AB 1836 prohibits the production or distribution of digital replicas of deceased persons for commercial purposes without consent.  Illinois and California also enacted laws regulating personal or professional services contracts that allow for the creation or use of digital replicas.  The Illinois Digital Voice & Likeness Protection Act (HB 4762) and California AB 2602 both require such contracts to include reasonably specific descriptions of the intended uses of digital replicas and require adequate representation for performers.
  • Generative AI Transparency & Disclosure Laws:  California enacted two laws that impose transparency and disclosure requirements for generative AI systems or services.  The California AI Transparency Act (SB 942) requires providers of generative AI systems with over 1 million monthly users to provide AI content detection tools and optional visible watermarks on AI-generated content.  Providers must also automatically add metadata disclosures to any content created using the provider’s generative AI system.  California AB 2013 requires developers of publicly available generative AI systems or services to post “high-level summaries” of datasets used to develop generative AI on their public websites, including information about the sources or owners of datasets and whether the datasets include personal information or data protected by copyright, trademark, or patent. 

AI Litigation Developments

  • New Complaints with New Theories:  
    • Right of Publicity Complaint:  On August 29, two professional voice actors, along with the authors and publishers who own copyrights in the audiobooks they voiced, sued AI-powered text-to-speech company Eleven Labs for alleged misappropriation of the voice actors’ voices and likenesses.  The complaint brought claims for (1) invasion of privacy via misappropriation of likeness and right of publicity under Texas common law, (2) unjust enrichment under Texas law, (3) misappropriation of likeness and publicity under New York Civil Rights Law Section 51, and (4) violation of the DMCA anticircumvention provisions, 17 U.S.C. §§ 1201 and 1203.  Vacker v. Eleven Labs Inc., 1:24-cv-00987 (D. Del.).
    • Patent and Antitrust Complaint:  On September 5, Xockets filed suit against Nvidia, Microsoft, and RPX for allegedly appropriating its patented data processing unit (DPU) technology and committing antitrust violations, including forming a buyers’ cartel and seeking to monopolize the AI industry.  Xockets seeks to enjoin the release of Nvidia’s new Blackwell GPU-enabled AI servers as well as Microsoft’s use of DPU technology in its generative AI platforms.  Xockets, Inc. v. Nvidia Corp., 6:24-cv-453 (W.D. Tex.). 
    • Criminal Indictment:  On September 4, the U.S. Department of Justice announced the unsealing of a three-count criminal indictment against Michael Smith in connection with a purported scheme to use GenAI to create hundreds of thousands of songs and use bots to stream them billions of times, allegedly generating more than $10 million in fraudulent royalty payments.  United States v. Smith, 24-cr-504 (S.D.N.Y.).
  • Notable Case Developments:  
    • On August 12, the court in Andersen v. Stability AI Ltd., 3:23-cv-00201 (N.D. Cal.), granted in part and denied in part defendants’ motion to dismiss the first amended complaint.  This case involves claims against Stability AI, Runway AI, Midjourney, and DeviantArt regarding alleged infringement of copyrighted images in connection with development and deployment of Stable Diffusion.  For Stability AI, the court found sufficient allegations of “induced” infringement, but dismissed the Digital Millennium Copyright Act (“DMCA”) claims with prejudice.  For Runway AI, the court found that direct infringement and “induced” infringement had been sufficiently pled, based on allegations of Runway’s role in developing and inducing downloads of Stable Diffusion and allegations that “training images remain in and are used by Stable Diffusion.”  For Midjourney, the court found that copyright, false endorsement, and trade dress claims had been sufficiently pled, but dismissed the DMCA claims.  For DeviantArt, the court found that copyright claims had been sufficiently pled, but dismissed the breach of contract and breach of implied covenant claims with prejudice.  For all defendants, the court dismissed the unjust enrichment claims with leave to amend.
    • On August 8, in the consolidated case of In re OpenAI ChatGPT Litigation, 3:23-cv-3223 (N.D. Cal.), the court partially overturned a discovery order requiring plaintiffs to share all methods and data used to test ChatGPT in preparation for litigation.  Instead, the court ordered plaintiffs to disclose only the prompts, outputs, and account settings that produced the results on which the complaint was based, but not the prompts, outputs, or settings that produced results not relied on by the complaint.  On September 24, OpenAI agreed to a “Training Data Inspection Protocol” for disclosure of “data used to train relevant OpenAI LLMs.”   
    • On September 13, the court in The New York Times Company v. OpenAI Inc., 1:23-cv-11195 (S.D.N.Y.), denied the defendants’ motion to compel production of “plaintiff’s regurgitation efforts,” as well as its motion to compel discovery of originality and registration of the works at issue, which reached more than ten million works after an amendment to the complaint in August.  This case involves claims against Microsoft and OpenAI regarding alleged infringement of copyrighted articles in connection with the training and deployment of LLMs.

II.   Connected & Automated Vehicles

  • Federal Interest in Accelerating V2X Deployment:  As we reported, on August 16, 2024, the U.S. Department of Transportation (“USDOT”) announced Saving Lives with Connectivity: A Plan to Accelerate V2X Deployment (the “plan”).  The plan is intended to “accelerate the deployment” of vehicle-to-everything technology (“V2X”) and support USDOT’s goal of establishing a comprehensive approach to roadway fatality reduction.  The plan describes V2X as technology that “enables vehicles to communicate with each other, with road users such as pedestrians, cyclists, individuals with disabilities, and other vulnerable road users, and with roadside infrastructure, through wirelessly exchanged messages,” and lays out short-, medium-, and long-term V2X goals for the next twelve years.  These include increasing the deployment of V2X technology across the National Highway System and top metro areas’ signalized intersections, developing interoperability standards, and working with the FCC on spectrum use.  USDOT also intends to coordinate resources across federal agencies to support government deployment of V2X technologies and develop V2X technical assistance and supporting documentation for deployers, including original equipment manufacturers and infrastructure owner-operators.
  • Continued Attention on Connected Vehicle Supply Chain:  As we reported, on September 26, 2024, the Department of Commerce published a notice of proposed rulemaking (“NPRM”) in the Federal Register on Securing the Information and Communications Technology and Services Supply Chain.  This NPRM follows an advance notice of proposed rulemaking (“ANPRM”) from March 1, 2024.  The proposed rule focuses on hardware and software integrated into the Vehicle Connectivity System (“VCS”) and software integrated into the Automated Driving System (“ADS”).  The proposed rule would ban transactions involving such hardware and software designed, developed, manufactured, or supplied by persons owned by, controlled by, or subject to the jurisdiction of the People’s Republic of China and Russia.  The NPRM cites concerns about malicious access to these systems, which adversaries could use to collect sensitive data or remotely manipulate cars.  The proposed rule would apply to all wheeled on-road vehicles, but would exclude vehicles not used on public roads, like agricultural or mining vehicles.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill experience to provide regulatory and legislative advice to clients in a range of industries, including technology. He has particular expertise in matters involving the Judiciary Committees, such as intellectual property, antitrust, national security, immigration, and criminal justice.

Nick joined the firm’s Public Policy practice after serving most recently as Chief Counsel for Senator Dianne Feinstein (D-CA) and Staff Director of the Senate Judiciary Committee’s Human Rights and the Law Subcommittee, where he was responsible for managing the subcommittee and Senator Feinstein’s Judiciary staff. He also advised the Senator on all nominations, legislation, and oversight matters before the committee.

Previously, Nick was the General Counsel for the Senate Judiciary Committee, where he managed committee staff and directed legislative and policy efforts on all issues in the Committee’s jurisdiction. He also participated in key judicial and Cabinet confirmations, including of an Attorney General and two Supreme Court Justices. Nick was also responsible for managing a broad range of committee equities in larger legislation, including appropriations, COVID-relief packages, and the National Defense Authorization Act.

Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia. There he represented indigent clients charged with misdemeanor, felony, and capital offenses in federal court throughout all stages of litigation, including trial and appeal. He also coordinated district-wide habeas litigation following the Supreme Court’s decision in Johnson v. United States (invalidating the residual clause of the Armed Career Criminal Act).

Phillip Hill

Phillip Hill focuses on complex copyright matters with an emphasis on music, film/TV, video games, sports, theatre, and technology.

Phillip’s global practice includes all aspects of copyright and the DMCA, as well as trademark and right of publicity law, and encompasses the full spectrum of litigation, transactions, counseling, legislation, and regulation. He regularly represents clients in federal and state court, as well as before the U.S. Copyright Royalty Board, Copyright Office, Patent & Trademark Office, and Trademark Trial & Appeal Board.

Through his work at the firm and prior industry and in-house experience, Phillip has developed a deep understanding of his clients’ industries and regularly advises on cutting-edge topics like generative artificial intelligence, the metaverse, and NFTs. Phillip has been recognized by Billboard as one of its Top Music Lawyers.

In addition to his full-time legal practice, Phillip serves as Chair of the ABA Music and Performing Arts Committee, frequently speaks on emerging trends, is active in educational efforts, and publishes regularly.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jemie Fofanah

Jemie Fofanah is an associate in the firm’s Washington, DC office. She is a member of the Privacy and Cybersecurity Practice Group and the Technology and Communication Regulatory Practice Group. She also maintains an active pro bono practice with a focus on criminal defense and family law.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Andrew Longhi

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew’s practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large-language machine-learning models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws, as well as FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulation and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.

Zoe Kaiser

Zoe Kaiser is an associate in the firm’s San Francisco office, where she is a member of the Litigation and Investigations, Copyright and Trademark Litigation, and Class Actions Practice Groups. She advises on cutting-edge topics such as generative artificial intelligence.

Zoe maintains an active pro bono practice, focusing on media freedom.

Conor Kane

Conor Kane advises clients on a broad range of privacy, artificial intelligence, telecommunications, and emerging technology matters. He assists clients with complying with state privacy laws, developing AI governance structures, and engaging with the Federal Communications Commission.

Before joining Covington, Conor worked in digital advertising helping teams develop large consumer data collection and analytics platforms. He uses this experience to advise clients on matters related to digital advertising and advertising technology.

Madeleine Dolan

Madeleine (Maddie) Dolan is a litigation associate in the Washington, DC office. She is a member of the Product Liability and Mass Torts litigation group and has an active pro bono criminal defense practice.

Maddie is a stand-up litigator, having first-chaired a pro bono trial as a member of a team that secured their client’s acquittal on first-degree murder charges; she has also taken a deposition in a commercial litigation matter. She has extensive experience drafting dispositive motions, leading document reviews, developing expert reports, and preparing for depositions and trial.

Prior to joining Covington, Maddie served as a law clerk to U.S. District Judge Mark R. Hornak of the Western District of Pennsylvania in Pittsburgh, PA. She also previously worked as a consultant and strategic communications director, managing marketing campaigns for federal government agencies.

Jess Gonzalez Valenzuela

Jess Gonzalez Valenzuela (they/them and she/her) is an associate in the firm’s San Francisco office and is a member of the Data Privacy and Cybersecurity and Corporate Practice Groups.

Jess helps clients address complex, cutting-edge challenges to manage data privacy and cybersecurity risk, including by providing regulatory compliance advice in connection with specific business practices and assisting in responding to cybersecurity incidents. Jess also maintains an active pro bono practice.

Jess is committed to DEI efforts in the legal profession, is a member of Covington’s LGBTQ+ and Latino Firm Resource Groups, and is working to develop a first generation professionals network and a disability advocacy network at Covington.

Javier Andujar

Javier is an associate in the firm’s New York office. His practice focuses on litigation, internal investigations, and regulatory and public policy issues. Javier maintains an active pro bono practice.