This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain.
I. Artificial Intelligence
A. Federal Legislative Developments
In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes. Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud. Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.
- Deepfake Regulation: In February, the Senate passed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (S. 146), which the Senate had also passed unanimously in 2024. The Act would prohibit the nonconsensual disclosure of AI-generated intimate imagery and require platforms to remove such content. The House version of the TAKE IT DOWN Act (H.R. 633) has been referred to the House Energy & Commerce Committee.
- CREATE AI Act: In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year. The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies. The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.
B. Federal Regulatory Developments
Following President Trump’s return to the White House, the Executive Branch reversed many of the Biden Administration’s AI policies and charted a new course for U.S. AI policy focused on bolstering national security and innovation.
- The White House: The Trump Administration has made significant changes to the White House’s approach to AI. On January 20, President Trump issued Executive Order 14148, revoking President Biden’s 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” On January 23, President Trump signed Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence.” Among other things, EO 14179 requires the development of an “AI Action Plan” to implement its policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.” On February 6, the White House Office of Science & Technology Policy (“OSTP”) issued a Request for Information (“RFI”) seeking public input on the AI Action Plan required by EO 14179. The RFI sought comment on 20 AI policy topics, including energy consumption, technical and safety standards, and intellectual property. The comment period closed on March 15.
- Department of Commerce: On March 25, the Department of Commerce’s Bureau of Industry and Security (“BIS”) added 80 entities to the Entity List, including entities from China, the United Arab Emirates, South Africa, Iran, and Taiwan, with the goal of restricting the use of U.S. AI and other technologies for military applications. The announcement follows BIS’s January 13 interim final rule (the “AI Diffusion Rule”), which would expand the controls of the Export Administration Regulations (“EAR”) on the export and transfer of advanced integrated circuits and closed-weight dual-use AI models, and would impose global licensing requirements on AI model weights. The AI Diffusion Rule is scheduled to come into effect on May 15.
- National Institute of Standards and Technology: On February 14, the National Institute of Standards and Technology (“NIST”) announced the creation of a new “Community Profile” to provide risk management guidance related to “Cybersecurity of AI Systems, AI-enabled Cyber Attacks, and AI-enabled Cyber Defense” (the “Cyber AI Profile”), and published a concept paper on the Cyber AI Profile’s risk management approaches. On March 24, NIST released its final report on “Adversarial Machine Learning,” with voluntary guidance on methods for securing AI systems against adversarial manipulations and attacks.
- Federal Trade Commission: On February 11, the FTC issued a final decision and order against DoNotPay, a provider of a “robot lawyer” online subscription service, over allegations that DoNotPay made unsubstantiated claims that its service was an adequate substitute for human lawyers. On March 11, the FTC announced a settlement with Evolv Technologies over allegations that the company made false claims about its AI-powered security screening system. On April 10, the Senate voted, 50-46, to confirm Mark Meador as the newest FTC Commissioner, following the Senate Commerce, Science, and Transportation Committee’s vote to advance his nomination on March 12. During his Committee hearing, Meador stated that the FTC should use its existing consumer protection authorities to address AI-related harms like deepfake pornography, and should “fully enforce the competition laws” to ensure that consumers have choices other than AI platforms with “political bias.” Meador’s confirmation comes after President Trump fired FTC Commissioners Alvaro Bedoya and Rebecca Kelly Slaughter on March 18.
- Copyright Office: On January 29, the U.S. Copyright Office released Part 2 of its report on copyright and AI, focusing on the copyrightability of AI-generated outputs. Among other things, the report found that “questions of copyrightability and AI can be resolved pursuant to existing law” and that copyright protections do “not extend to purely AI-generated material or material where there is insufficient human control over the expressive elements.” The report noted that “whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.” The Copyright Office released Part 1 of the report, which focused on digital replicas, in July 2024.
- Securities & Exchange Commission: On February 20, the Securities and Exchange Commission (“SEC”) announced the creation of the Cyber and Emerging Technologies Unit (“CETU”) to combat cyber-related misconduct and protect retail investors. According to the SEC, CETU will prioritize enforcement related to “fraud committed using emerging technologies, such as artificial intelligence and machine learning,” among other priority areas.
- Department of Defense: On February 5, BigBear.ai announced a contract with the Department of Defense (“DOD”)’s Chief Digital and AI Office to prototype a system for analyzing geopolitical risks posed by near-peer adversaries, with the goal of enhancing DOD’s assessment of and response to global threats using advanced AI analytics.
C. State Legislative Developments
States began their 2025 legislative sessions by introducing hundreds of new AI bills in the first quarter, including over a dozen bills that have been passed and would address algorithmic discrimination; AI-generated CSAM, intimate imagery, and election-related content; generative AI chatbots; and digital replicas. Additionally, state lawmakers continued to assess new regulations for frontier models, including legislation introduced in New York, Illinois, Maryland, and California.
- Algorithmic Discrimination & Consumer Protection: In Virginia, the legislature passed, and Governor Glenn Youngkin vetoed, the High-Risk AI Developer & Deployer Act (HB 2094), an AI consumer protection bill. In his veto message, Governor Youngkin noted that “there are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more,” while warning that HB 2094 would “put[] an especially onerous burden on smaller firms and startups.”
- Synthetic Content Laws: Montana enacted HB 82, prohibiting the possession of AI-generated CSAM or intimate imagery. South Dakota enacted SB 164, prohibiting the dissemination of deepfakes within 90 days of an election. Kentucky enacted SB 4, requiring AI disclosures for political ads that contain AI-generated content.
- Generative AI and Chatbot Laws: In Utah, the Governor signed HB 452, requiring businesses that use “mental health chatbots” to interact with individuals to disclose chatbot interactions and advertisements presented through the chatbot to users. HB 452 also prohibits the sale or sharing of user chatbot inputs or their health information and the use of user chatbot inputs to determine whether to display an advertisement. Utah also enacted SB 226, requiring businesses to disclose interactions with generative AI if prompted or asked by the user, and requiring providers of “regulated services” to prominently and affirmatively disclose generative AI interactions if the interaction involves the collection of sensitive personal information or the provision of personalized recommendations, advice, or information that could reasonably be relied upon to make significant personal decisions.
- Laws Regulating AI-Generated Impersonations & Digital Replicas: Utah enacted SB 271, prohibiting the non-consensual use of “personal identities,” including reproductions of a person’s likeness, voice, or image created using AI, for commercial purposes if the use expresses or implies the depicted individual’s endorsement, creates a likelihood of confusion about the individual’s association, or creates a false impression that the individual approved of the use. Like Tennessee’s ELVIS Act (enacted March 2024), SB 271 also prohibits the distribution, sale, or licensing of technologies, software, or tools that have the “primary purpose” of creating or modifying unauthorized reproductions of personal identities. Similarly, Arkansas enacted HB 1071, prohibiting the unauthorized commercial use of another person’s image, video, three-dimensional generation, or voice generated through means of AI.
- Laws Imposing Criminal Penalties for Synthetic Content: Virginia enacted HB 2124, which prohibits the use of synthetic digital content for committing crimes involving fraud, slander, or libel. New Jersey enacted A 3540, imposing criminal penalties for the creation of AI-generated audio or visual media with intent to be used as part of any crime or offense.
- Frontier Model Public Safety Legislation: Montana enacted the Right to Compute Act (SB 212), requiring deployers of AI systems that control “critical infrastructure facilities” to develop risk management policies, while also establishing an individual “right to compute.” California Sen. Scott Wiener introduced SB 53, which would establish whistleblower protections for employees of foundation model developers. In March, the Joint California Policy Working Group on AI Frontier Models issued a draft report with several recommendations for frontier model regulation, including transparency requirements, third-party risk assessments, whistleblower protections, and adverse event reporting.
II. Connected & Automated Vehicles
In the first quarter, the outgoing Biden Administration took action to regulate connected vehicles (“CVs”). In light of the new Trump Administration and its January 20 regulatory freeze, however, the future of these actions is uncertain. President Trump has begun appointing regulators with jurisdiction over CVs, including Sean Duffy, who was confirmed as Secretary of Transportation on January 28, and Jonathan Morrison, who was nominated as Administrator of the National Highway Traffic Safety Administration on February 11.
- BIS Connected Vehicle Supply Chain Final Rule: On January 14, BIS released its Final Rule on securing the connected vehicle supply chain. The Final Rule restricts the import or sale of vehicle connectivity hardware and connected vehicles with software made in, owned, or controlled by China or Russia. On January 20, President Trump issued a Presidential Memorandum on the “America First Trade Policy,” directing the Secretary of Commerce to “review and recommend appropriate action” with respect to the Final Rule and consider whether controls on technology transactions should be expanded to additional connected products. On April 3, the White House released an executive summary of these recommendations, which does not mention connected vehicles.
- NHTSA Proposes AV STEP: On January 15, the National Highway Traffic Safety Administration (“NHTSA”) issued a Notice of Proposed Rulemaking for the automated driving system (“ADS”)-equipped Vehicle Safety, Transparency, and Evaluation Program (“AV STEP”), a voluntary program for vehicle manufacturers, ADS developers, fleet operators, and system integrators. AV STEP participants would be required to submit detailed AV-related information to NHTSA and may request exemptions from applicable federal motor vehicle safety standards through a new streamlined process. NHTSA received comments on AV STEP from 33 entities, including trade groups, AV-related companies, and vehicle manufacturers. The comment period closed on March 17.
- FTC Proposed Order Against GM and OnStar: On January 16, the FTC released a proposed order against General Motors (“GM”) and OnStar based on allegations that they collected, used, and sold drivers’ precise geolocation and driving behavior data without adequately notifying consumers and obtaining their affirmative consent. The order would ban GM, OnStar, and affiliated companies from disclosing consumers’ sensitive geolocation and driving behavior data to consumer reporting agencies, and require GM and OnStar to provide greater transparency and choice to consumers regarding the collection, use, and disclosure of connected vehicle data.
- The Safe Vehicle Access for Survivors Act: On March 17, Representatives Debbie Dingell (D-MI) and Dan Crenshaw (R-TX) introduced the Safe Vehicle Access for Survivors Act (H.R. 2110), which would establish a process for domestic abuse survivors to request the termination or disabling of connected vehicle services that could be misused by an abuser. The bill is under consideration in the House Energy and Commerce Committee and has 22 cosponsors, including two Republicans.
III. Cryptocurrency & Blockchain
A. Federal Legislative Developments
Members of Congress introduced significant legislation related to cryptocurrencies and blockchain technologies in the first quarter, including bills to regulate stablecoins and digital assets.
- Stablecoins: Members of Congress introduced two significant pieces of legislation concerning stablecoins. On February 4, Senators Bill Hagerty (R-TN), Tim Scott (R-SC), Kirsten Gillibrand (D-NY), and Cynthia Lummis (R-WY) introduced the Guiding and Establishing National Innovation for U.S. Stablecoins (“GENIUS”) Act (S. 394), which would establish a comprehensive federal regulatory framework for stablecoins and allow states to regulate stablecoin issuers with a certain market capitalization if the state regulation is “substantially similar” to the regulatory regime under the bill. The GENIUS Act is currently under consideration in the Senate Banking Committee. On March 26, a bipartisan group of representatives introduced the Stablecoin Transparency and Accountability for a Better Ledger Economy (“STABLE”) Act (H.R. 2392), which would also establish a regulatory framework for dollar-backed stablecoins. In contrast to the GENIUS Act, the STABLE Act would require state regulatory regimes for stablecoin issuers to match the federal standard created under the bill. In their press release, the authors of the STABLE Act expressed their intention to work with Senate colleagues to pass unified legislation. The bill was voted out of the House Financial Services Committee in early April.
B. Federal Regulatory Developments
In the first quarter, the White House and federal agencies took significant steps to reverse the prior Administration’s cryptocurrency and blockchain policies and integrate digital assets into the traditional financial system.
- The White House: In January, President Trump signed Executive Order 14178 on “Strengthening American Leadership in Digital Financial Technology,” revoking the Biden Administration’s 2022 Executive Order 14067 on “Ensuring Responsible Development of Digital Assets” and establishing the Presidential Working Group on Digital Asset Markets. The working group is tasked with proposing a federal regulatory framework for digital assets within 180 days. To spearhead the efforts outlined in EO 14178, President Trump appointed David Sacks as the nation’s first “Crypto Czar,” responsible for coordinating federal policies on cryptocurrencies and blockchain technology. On March 6, President Trump issued Executive Order 14233 on the “Establishment of the Strategic Bitcoin Reserve and United States Digital Asset Stockpile.” The EO directs the Secretary of the Treasury to use lawfully seized digital assets, including bitcoin, to establish a Strategic Bitcoin Reserve and a U.S. Digital Asset Stockpile. The EO requires the bitcoin (“BTC”) held in the Strategic Bitcoin Reserve to be maintained as a reserve asset and prohibits its sale.
- Federal Deposit Insurance Corporation: On March 28, the Federal Deposit Insurance Corporation (“FDIC”) issued a Financial Institution Letter (FIL-7-2025) rescinding the Biden Administration’s FIL-16-2022, titled “Notification of Engaging in Crypto-Related Activities.” The 2022 letter had required state-chartered nonmember banks to obtain pre-approval before engaging in crypto-related activities. Rescinding the pre-approval requirement is expected to encourage FDIC-regulated banks to explore services such as tokenized deposits and cryptocurrency custody, potentially expanding the integration of digital assets into conventional banking.
- Office of the Comptroller of the Currency: On March 7, the Office of the Comptroller of the Currency (“OCC”) issued Interpretive Letter 1183, withdrawing previous guidance that had required OCC approval before banks engaged in crypto-asset activities. The letter emphasizes the need for consistent treatment of bank activities, irrespective of the underlying technology, and signals a more accommodating stance toward blockchain innovations within the banking sector.
- Securities & Exchange Commission: On January 21, Acting SEC Chair Mark Uyeda announced the formation of a Crypto Task Force headed by Commissioner Hester Peirce. Since January, the SEC has dismissed several high-profile lawsuits against cryptocurrency companies, marking a departure from previous enforcement strategies that favored litigation over regulatory guidance. On April 10, the Senate confirmed Paul Atkins to be SEC Chair. During his Senate confirmation hearing, Atkins emphasized that establishing a rational regulatory framework for cryptocurrencies would be a top priority under his leadership.
We will continue to update you on meaningful developments in these quarterly updates and across our blogs.