
Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that implicate laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, the Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. She also helps clients advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I.     Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

On October 22, the National Institute of Standards and Technology (“NIST”) Internet of Things (“IoT”) Advisory Board released the Internet of Things Advisory Board Report, which concludes that IoT development has progressed more slowly than anticipated and identifies 26 findings that explain the slower pace of development and growth.  The Report offers 104 recommendations on how the government can help foster IoT development.  The Advisory Board provided the Report to the IoT Federal Working Group, emphasizing that an IoT transformation will boost U.S. economic growth, increase public safety and national resilience, create a more sustainable planet, individualize healthcare, foster equitable quality of life and well-being, and facilitate autonomous operations of our national infrastructure.  For background, the IoT Federal Working Group was established by Congress in 2020 and was charged with identifying policies and statutes inhibiting IoT development and with considering the recommendations of the Advisory Board.

Continue Reading NIST Report and Recommendations on Fostering Development of the Internet of Things

On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become the second state to enact industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well, as the California Privacy Protection Agency considers rules that would apply to certain automated decisionmaking and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI systems and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.

Continue Reading Texas Legislature to Consider Sweeping AI Legislation in 2025

Earlier this month, lawmakers released a discussion draft of a proposed federal privacy bill, the American Privacy Rights Act of 2024 (the “APRA”).  While the draft aims to introduce a comprehensive federal privacy statute for the U.S., it contains some notable provisions that could potentially affect the development and use of artificial intelligence systems.  These provisions include the following:

Continue Reading Certain Provisions in the American Privacy Rights Act of 2024 Could Potentially Affect AI

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity.  As noted below, some of these developments provide industry with the opportunity for participation and comment.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – First Quarter 2024

On March 28, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI).  The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy use and development of AI systems.

Continue Reading OMB Issues First Governmentwide AI Policy for Federal Agencies

State lawmakers are pursuing a variety of legislative proposals aimed at regulating the development and use of artificial intelligence (“AI”).  In the past two months, legislators in Florida, New Mexico, Utah, and Washington passed legislation regulating AI-generated content, and Utah’s legislature passed legislation regulating generative AI and establishing a state test bed for evaluating future AI regulations.  These bills are just a sampling of the wave of legislative proposals advancing in states across the country.

Continue Reading State Lawmakers Pass Flurry of AI Legislation

On January 30, 2024, the U.S. Office of Management and Budget (OMB) published a request for information (RFI) soliciting public input on how agencies can be more effective in their use of privacy impact assessments (PIAs) to mitigate privacy risks, including those “exacerbated by artificial intelligence (AI).”  The RFI notes that federal agencies may develop

Continue Reading OMB Publishes Request for Information on Agency Privacy Impact Assessments

On January 29, 2024, the Department of Commerce (“Department”) published a proposed rule (“Proposed Rule”) to require providers and foreign resellers of U.S. Infrastructure-as-a-Service (“IaaS”) products to (i) verify the identity of their foreign customers and (ii) notify the Department when a foreign person transacts with that provider or reseller to train a large artificial intelligence (“AI”) model with potential capabilities that could be used in malicious cyber-enabled activity.  The Proposed Rule also contemplates that the Department may impose special measures to be undertaken by U.S. IaaS providers to deter foreign malicious cyber actors’ use of U.S. IaaS products.  The accompanying request for comments has a deadline of April 29, 2024.

Continue Reading Department of Commerce Issues Proposed Rule to Regulate Infrastructure-as-a-Service Providers and Resellers

U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level.  Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI.  This blog post summarizes key themes in state AI bills introduced in the past year.  Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.

Continue Reading Trends in AI:  U.S. State Legislative Developments