Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity.  Additionally, she helps clients advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

I. Artificial Intelligence

A. Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes.  Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud.  Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.

  • CREATE AI Act:  In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year.  The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies.  The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of California State Senator Scott Wiener (D-San Francisco)’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

Continue Reading State Legislatures Consider New Wave of 2025 AI Legislation

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

On October 22, the National Institute of Standards and Technology (“NIST”) Internet of Things (“IoT”) Advisory Board released the Internet of Things Advisory Board Report, which concludes that IoT development has progressed more slowly than anticipated and identifies 26 findings that explain the slower pace of development and growth.  The Report offers 104 recommendations on how the government can help foster IoT development.  The Advisory Board provided this report to the IoT Federal Working Group, emphasizing that an IoT transformation will boost U.S. economic growth, increase public safety and national resilience, create a more sustainable planet, individualize healthcare, foster equitable quality of life and well-being, and facilitate autonomous operations of our national infrastructure.  For background, the IoT Federal Working Group was established by Congress in 2020 and charged with identifying policies and statutes inhibiting IoT development and with considering recommendations of the Advisory Board.

Continue Reading NIST Report and Recommendations on Fostering Development of the Internet of Things

On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, TRAIGA, if passed, would make Texas the second state to enact industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well, as the California Privacy Protection Agency considers rules that would apply to certain automated decision and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.

Continue Reading Texas Legislature to Consider Sweeping AI Legislation in 2025

Earlier this month, lawmakers released a discussion draft of a proposed federal privacy bill, the American Privacy Rights Act of 2024 (the “APRA”).  While the draft aims to introduce a comprehensive federal privacy statute for the U.S., it contains some notable provisions that could potentially affect the development and use of artificial intelligence systems.

Continue Reading Certain Provisions in the American Privacy Rights Act of 2024 Could Potentially Affect AI

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity.  As noted below, some of these developments provide industry with the opportunity for participation and comment.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – First Quarter 2024

On March 28, the White House Office of Management and Budget (“OMB”) released guidance on governance and risk management for federal agency use of artificial intelligence (“AI”).  The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy use and development of AI systems.

Continue Reading OMB Issues First Governmentwide AI Policy for Federal Agencies

State lawmakers are pursuing a variety of legislative proposals aimed at regulating the development and use of artificial intelligence (“AI”).  In the past two months, legislators in Florida, New Mexico, Utah, and Washington passed legislation regulating AI-generated content, and Utah’s legislature passed legislation regulating generative AI and establishing a state test bed for evaluating future AI regulations.  These are just a sampling of the wave of legislative proposals advancing in states across the country.

Continue Reading State Lawmakers Pass Flurry of AI Legislation