On June 22, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law.  The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado AI Act.  Unlike the Colorado AI Act, however, TRAIGA’s AI consumer protection framework sets out categories of “prohibitions on use” of AI that will apply to persons that develop, deploy, or distribute AI systems (as relevant for different sections), while also establishing AI disclosure requirements for healthcare providers and government entities and amending Texas’s biometric and data privacy laws. 

Prohibited Uses of AI.  In contrast to other state AI consumer protection frameworks that focus on risk mitigation for “high-risk” AI use cases, TRAIGA will categorically prohibit the development, deployment, or distribution (as applicable) of AI systems with the “intent” or “sole intent” that the AI system:

  • Incite or encourage self-harm, harm to another person, or criminal activity.
  • Infringe, restrict, or otherwise impair individual rights guaranteed under the U.S. Constitution.
  • Unlawfully discriminate against a protected class in violation of state or federal law.  The law further provides that “disparate impact” is insufficient to show an intent to discriminate for purposes of this prohibition.  Notably, TRAIGA’s prohibition on AI-based unlawful discrimination does not apply to insurance entities and financial institutions.
  • Produce, assist or aid in producing, or distribute (1) visual material depicting child pornography, as prohibited under Section 43.26 of the Texas Penal Code, or (2) deepfake videos depicting intimate imagery or sexual conduct, as prohibited under Section 21.165 of the Texas Penal Code.
  • Engage in “text-based conversations that simulate or describe sexual conduct” while “impersonating or imitating a child younger than 18 years of age.” 

TRAIGA also will prohibit certain government uses of AI, including the use of AI systems that evaluate or classify persons “with the intent to calculate or assign a social score” and the use of AI “for the purpose of uniquely identifying a specific individual” using biometric or publicly available data collected in violation of state or federal law and without the individual’s consent.

Healthcare & Government AI Disclosure Requirement.  TRAIGA will require healthcare providers that use an AI system “in relation to health care service or treatment” to disclose to patients that they are interacting with an AI system, and to provide such disclosures “not later than the date the service or treatment is first provided.”  The law also will require government agencies to provide such disclosures to consumers that interact with an AI system that is “intended to interact with consumers” and made available by the government agency.

CUBI Amendments.  TRAIGA amends Texas’s Capture or Use of Biometric Identifiers (“CUBI”) law, which generally prohibits the capture of an individual’s biometric identifier for commercial purposes unless the individual provides informed consent.  TRAIGA amends CUBI to clarify that an individual is not informed of and does not consent to the capture of their biometric identifiers based solely on the existence of “publicly available” media that contains their biometric identifiers, unless the media was made publicly available by the individual. 

Additionally, TRAIGA creates an exception to CUBI for the processing of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering AI models or systems, unless the system is used or deployed for the purpose of uniquely identifying a specific individual.  TRAIGA also creates an exception to CUBI for entities that develop or deploy an AI model or system for certain security and fraud prevention purposes.

Data Processor Requirements.  TRAIGA amends the Texas Data Privacy & Security Act to require processors to assist controllers in complying with requirements related to personal data collected, stored, and processed by AI systems, where applicable.

Exemptions.  TRAIGA will exempt a defendant from liability under its provisions for alleged violations caused by “another person[’s]” use of the defendant’s AI system, and will prohibit enforcement actions against any person for an AI system “that has not been deployed.”  Additionally, TRAIGA will preclude liability for defendants that discover a violation of TRAIGA through (1) feedback from developers, deployers, or other persons, (2) testing, (3) following state agency guidelines, or (4) an internal review process, if the defendant is substantially compliant with the National Institute of Standards and Technology’s AI Risk Management Framework: GenAI Profile or another nationally or internationally recognized AI risk management framework.

Enforcement.  TRAIGA will be enforced by the Texas Attorney General, who will be required to establish an online mechanism for consumers to report TRAIGA violations and authorized to request various categories of information from potential violators.  Violations will be punishable by civil penalties of $10,000 to $12,000 for curable violations that are not cured, $80,000 to $200,000 for “uncurable” violations, and $2,000 to $40,000 for each day that a violation continues, in addition to injunctive relief.

Upon the Texas Attorney General’s recommendation, Texas state agencies also will be authorized to impose sanctions against persons found in violation of TRAIGA if the person is licensed, registered, or certified by the state agency.  For such persons, state agency sanctions include the suspension or revocation of the person’s agency-issued license and up to $100,000 in monetary penalties.

*              *              *

For more updates on developments related to artificial intelligence and technology, see our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.