State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025. As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation. Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.
- Consumer Protection. Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act. In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general. They would also require notices to consumers and disclosures to other parties, and would establish consumer rights related to these AI systems. For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
- Sector-Specific Automated Decision-making. Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance. For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance. Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General. Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT. For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.
- Chatbots. Another key trend in 2025 AI legislation focuses on AI chatbots. For example, Hawaii HB 639 / SB 640, Idaho HB 127, Illinois HB 3021, Massachusetts SD 2223, and New York A222 would either require chatbot providers to prominently disclose to users that they are not interacting with a human or impose liability on chatbot providers for misleading or deceptive chatbot communications.
- Generative AI Transparency. State legislatures are also considering legislation to regulate providers of generative AI systems and platforms that host synthetic content. Some of these bills, such as Washington HB 1170, Florida HB 369, Illinois SB 1929, and New Mexico HB 401, would require generative AI providers to include watermarks in AI-generated outputs and to provide free AI detection tools for users, similar to the California AI Transparency Act, which passed last year. Other bills, such as Illinois SB 1792 and Utah SB 226, would require generative AI owners, licensees, or operators to display notices to users that disclose the use of generative AI or warn users that AI-generated outputs may be inaccurate, inappropriate, or harmful.
- AI Data Centers & Energy. Lawmakers across the country have introduced legislation to address the growing energy demands of AI development and related environmental concerns. For example, California AB 222 would require data centers to estimate and report to the state the total energy used to develop certain large AI models, and would require the developers of those models to estimate and publish the same energy figures for each model. Similarly, Massachusetts HD 4192 would require both AI developers and operators of sources of greenhouse gas emissions to monitor, track, and report environmental impacts and mitigation measures.
- Frontier Model Public Safety. Following the California legislature’s passage, and the Governor’s subsequent veto, of SB 1047 last year, California State Senator Scott Wiener filed SB 53 with the goal of “establish[ing] safeguards for the development of [AI] frontier models.” Lawmakers in other states are also considering legislation to address public safety risks posed by “frontier” or “foundation” models, generally defined as AI models that meet certain computational or monetary thresholds. For example, Illinois HB 3506 would require developers of certain large AI models to conduct risk assessments every 90 days, publish annual third-party audits, and implement foundation model safety and security protocols. As another approach, Rhode Island H 5224 would impose strict liability on developers of covered AI models for all injuries to non-users that are factually and proximately caused by the covered model.
* * *
Although the likelihood of passage for these AI bills remains unclear, any state AI legislation that does pass is likely to have significant effects on the U.S. AI regulatory landscape, especially in the absence of federal action on AI. We will continue to monitor these and related AI developments across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.