This update highlights key legislative and regulatory developments in the first quarter of 2026 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and Internet of Things (“IoT”).

I. Federal AI Legislative Developments

In the first quarter, members of Congress introduced several AI bills related to nonconsensual images, chatbots, support for small businesses, and preemption in response to President Trump’s December 2025 AI Preemption Executive Order.  For example:

  • Nonconsensual AI-Generated Imagery: Following the enactment of the federal TAKE IT DOWN Act, the Senate passed the DEFIANCE Act (S.1837) in January, which would provide victims of nonconsensual, AI-generated intimate imagery with a private right of action.  The bill has been “held at the desk” in the House since passing the Senate, meaning it has not yet been referred to specific committees for consideration.  Because the bill remains unreferred, the full House could vote on it directly if the relevant committees agree to waive jurisdiction and there is sufficient support.
  • Chatbots: Several legislative proposals have focused on chatbot safeguards, including for minor users. For instance, Sen. Ed Markey (D-MA) introduced the Youth AI Privacy Act (S.4199), which would require entities that make AI chatbots available to minors to implement certain safe design features.  In the House of Representatives, Rep. Brett Guthrie (R-KY) introduced the SAFE BOTs Act as part of the KIDS Act (H.R.7757), an omnibus online child safety bill.  The SAFE BOTs Act would require chatbot providers to provide disclosures and implement safety guardrails for minor users, among other requirements.
  • Preemption: Members of Congress continue to debate AI preemption. In March, Rep. Don Beyer (D-VA) and other Democratic lawmakers introduced the GUARDRAILS Act (H.R.8031), which would state that the White House’s AI Preemption Executive Order “shall have no force or effect” and prohibit federal funds from being used for its implementation. In contrast, Sen. Marsha Blackburn (R-TN)’s discussion draft of the TRUMP AMERICA AI Act, discussed in detail below, prohibits preemption of any “generally applicable law,” and in some cases would expressly prohibit preemption of state laws that are more stringent than, or do not conflict with, the bill’s provisions.
  • Omnibus Bills: Legislators also have proposed comprehensive legislative packages covering a broad range of AI-related topics. For instance, Sen. Marsha Blackburn’s proposed TRUMP AMERICA AI Act contains a number of AI legislative proposals beyond preemption, including the Kids Online Safety Act (online platform minor safeguards), NO FAKES Act (prohibiting unauthorized digital replicas), GUARD Act (companion chatbot minor safeguards), TRAIN Act (copyright and AI model training data), AI LEAD Act (AI product liability standards), AI Risk Evaluation Act (frontier model evaluations), Future of AI Innovation Act (voluntary AI standards), CREATE AI Act (codifying the National AI Research Resource), and COPIED Act (synthetic content provenance).  Notably, large policy-focused legislative packages typically face challenges passing Congress as a whole, though individual components may survive.  The package also includes KOSA, which is not primarily focused on AI and has faced its own obstacles to passage due to its scope.

II. Federal AI Regulatory Developments

In the first quarter of 2026, the White House and federal agencies took several steps related to AI regulation and AI adoption by federal agencies.  For example:

  • White House:  In March, the Trump Administration released its National Policy Framework for AI, encompassing numerous AI-related recommendations to Congress that it framed as promoting a “light touch” approach to AI regulation, protections for minors, IP protection, free speech, innovation, and protection of workers.  The framework also calls for preempting state AI laws that “impose undue burdens” on AI development and use.
  • Department of Justice:  In January, the Department of Justice established its AI Litigation Task Force, which has the “sole responsibility” to challenge state AI laws that unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or “are otherwise unlawful in the Attorney General’s judgment.”
  • NIST:  The National Institute of Standards and Technology (“NIST”) launched several initiatives focused on establishing standards for agentic AI systems.  In January, NIST’s Center for AI Standards and Innovation (“CAISI”) issued a Request for Information related to practices and methodologies for measuring and improving the secure development and deployment of agentic systems.  NIST also launched the AI Agent Standards Initiative to support the development of industry standards for AI agents and released a concept paper on agentic identity standards.

III. State AI Legislative Developments

State lawmakers have introduced over 600 AI bills with requirements for private entities in the 2026 legislative sessions so far. Laws that have been enacted, or passed but not yet enacted, show a continued focus on companion chatbots; AI transparency; digital replicas and other synthetic content; and the use of AI by mental health providers and health insurers. For example:

  • Chatbot Safety: AI companions and chatbot safety continued to be a focus of state lawmakers this quarter, with new laws enacted in Washington (HB 2225), Oregon (SB 1546), and Idaho (Conversational AI Safety Act (SB 1297)).  Oregon SB 1546 establishes disclosure and mental health protocol requirements for “operators,” i.e., entities that make publicly available or control access to “AI companion chatbots.”  Washington HB 2225 includes these requirements and also will require operators to implement reasonable measures to prevent AI companion chatbots from claiming to be human or engaging in manipulative engagement techniques, while Idaho SB 1297 will require operators to provide tools for managing “privacy and account settings” to users and parents of users under 13.
  • Transparency & Content Provenance:  Multiple states have adopted or may soon adopt transparency requirements similar to those in the 2025 California AI Transparency Act. New laws in Utah (HB 276) and Washington (HB 1170) will require certain providers of genAI systems to include “latent disclosures,” with Washington also requiring covered providers to provide free “provenance detection tools” and optional “manifest disclosures.”  Additionally, New York lawmakers passed A3411, which awaits the Governor’s signature and would require certain entities to display a notice that the outputs of generative AI systems “may be inaccurate.”  
  • Harmful AI-Generated Content Regulation: State lawmakers enacted laws regulating the creation or distribution of harmful AI-generated content. Wyoming (HB 102) and Utah (HB 276) focus on restricting creation or distribution of nonconsensual AI-generated sexual material, with Wyoming’s law also prohibiting the development or distribution of AI systems designed, intended, or known to be used to (1) create, promote, or distribute AI-generated sexual material or child pornography, or (2) promote self-harm. Utah also enacted SB 256, establishing an individual right to consent to the use of one’s “personal identity” created through generative AI, and prohibiting the use of generative AI as a defense to a slander or libel claim.
  • Health Insurance & Healthcare: Lawmakers in multiple states passed laws to regulate the use of AI in healthcare settings. Indiana (HB 1271), Utah (SB 319), and Washington (SB 5395) enacted new laws regulating the use of AI by health insurers to evaluate claims and prohibiting health insurers from using AI as a sole basis for denying or modifying claims.  Legislation passed in Tennessee (SB 1580) and Delaware (HB 191), if signed by their respective governors, would prohibit AI systems from being represented or marketed as qualified mental health professionals or licensed professional healthcare workers, respectively.

Additionally, Colorado Governor Jared Polis released a draft bill that would replace the 2024 Colorado AI Act with requirements for developers and deployers of covered automated decision-making technology (“ADMT”), i.e., ADMT that is used to “materially influence a consequential decision.”  We will continue to closely monitor changes to the Colorado law.

IV. Connected & Automated Vehicles

The first quarter of 2026 brought activity related to CAV legislation, enforcement, and regulation. For example:

  • Federal Legislative Activity: Federal legislators considered a number of CAV-related bills this quarter. On January 13, the House Energy and Commerce Committee’s Subcommittee on Commerce, Manufacturing, and Trade held a hearing covering a number of CAV-related bills, including the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution Act of 2026 (the “SELF DRIVE Act of 2026”) (H.R.7390) (Rep. Latta (R-OH)), which would create a federal framework for AV deployment.  (The America Drives Act (H.R.4661), introduced in 2025, proposes a similar framework for deployment of large, commercial AVs.)  The Senate Commerce, Science, and Transportation Committee also held a hearing on CAV development, safety, and regulation on February 4.
  • NHTSA:  NHTSA continues to focus on updating its regulatory approach to accommodate advances in CAV technology.  On January 23, NHTSA requested input on how the U.S. should proceed with respect to a proposed UN draft Global Technical Regulation for Automated Driving Systems; it received more than fifty comments in response.  On March 10, NHTSA held an AV Safety Forum, which covered advances in CAV technology and steps the agency is taking to support CAV innovation and safety.  Speakers included Transportation Secretary Sean Duffy, NHTSA Administrator Jonathan Morrison, White House Office of Science and Technology Policy Director Michael Kratsios, and representatives from Zoox, Waymo, Uber, and other industry members.  The agency announced plans to update a number of safety rules that do not account for AVs and to roll out new voluntary technical guidance for the industry.
  • FTC Settlement: On January 14, the FTC announced a settlement with GM and OnStar resolving the FTC’s January 2025 complaint, which alleged that GM used a misleading enrollment process to sign up consumers for its OnStar connected vehicle service in violation of Section 5 of the FTC Act. The complaint also alleged that GM failed to clearly disclose that it collected consumers’ precise geolocation and driving behavior via an OnStar feature and sold that data to third parties without consumers’ consent.

V. Internet of Things

In the first quarter of 2026, the FCC reopened applications for Lead Administrator and Label Administrators as part of its Cyber Trust Mark program, following the FCC’s original selection of 11 Label Administrators and a Lead Administrator in 2024.  The new application period comes after the company formerly serving as Lead Administrator withdrew from the role in December 2025.  No new selections have been announced to date.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.  Please also stay tuned for our upcoming quarterly video briefings on AI developments!

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill and legal experience to provide public policy and crisis management counsel to clients in a range of industries.

Nick assists clients in developing and implementing policy solutions to litigation and regulatory matters, including on issues involving antitrust, artificial intelligence, bankruptcy, criminal justice, financial services, immigration, intellectual property, life sciences, national security, and technology. He also represents companies and individuals in investigations before U.S. Senate and House Committees.

Nick previously served as General Counsel for the U.S. Senate Judiciary Committee, where he managed committee staff and directed legislative efforts. He also participated in key judicial and Cabinet confirmations, including of Attorneys General and Supreme Court Justices. Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.

Conor Kane

Conor Kane advises clients on a broad range of privacy, artificial intelligence, telecommunications, and emerging technology matters. He assists clients with complying with state privacy laws, developing AI governance structures, and engaging with the Federal Communications Commission.

Before joining Covington, Conor worked in digital advertising helping teams develop large consumer data collection and analytics platforms. He uses this experience to advise clients on matters related to digital advertising and advertising technology.

Grace Howard

Grace Howard is an associate in the firm’s Washington, DC office. She represents and advises clients on a range of cybersecurity, data privacy, and government contracts issues including cyber and data security incident response and preparedness, regulatory compliance, and internal investigations including matters involving allegations of noncompliance with U.S. government cybersecurity regulations and fraud under the False Claims Act.

Prior to joining the firm, Grace served in the United States Navy as a Surface Warfare Officer and currently serves in the U.S. Navy Reserve.

Irene Kim

Irene Kim is an associate in the firm’s Washington, DC office, where she is a member of the Privacy and Cybersecurity and Advertising and Consumer Protection Investigations practice groups. She advises clients on a broad range of issues, including U.S. state and federal AI legislation, comprehensive state privacy laws, and regulatory compliance matters.

Evan Chiacchiaro

Evan Chiacchiaro is an associate in the firm’s Washington, DC office and member of the Technology and Communications Regulation Practice Group.

Evan advises clients on a range of technology regulatory issues, including emerging artificial intelligence compliance matters and compliance with Federal Communications Commission (FCC) regulations. Evan also maintains an active pro bono practice focused on civil rights.

Rosie Moss

Rosie Moss is an associate in the firm’s Washington, DC office. She is a member of the Data Privacy and Cybersecurity Practice Group and the Technology and Communications Regulation Practice Group.

Rosie advises clients on a wide range of data privacy and technology regulatory issues, including emerging artificial intelligence compliance matters. She assists clients in complying with federal and state privacy laws and Federal Communications Commission (FCC) regulations. Rosie also maintains an active pro bono practice.

Micah Telegen

Micah Telegen represents clients in complex investigations and enforcement actions, including in the automotive and consumer product industries. He frequently advises clients on compliance with the Motor Vehicle Safety Act, the Consumer Product Safety Act, and other federal safety laws and regulations. Micah also regularly advises clients facing regulatory challenges related to emerging technologies in the automotive industry, including connected and autonomous vehicles. He also maintains an active pro bono practice and has experience litigating on behalf of criminal defendants and tenants facing eviction.