On May 1, the Connecticut legislature passed an artificial intelligence (“AI”) safety, transparency, and consumer protection bill (“SB 5”). While the Colorado legislature takes steps to streamline existing requirements for developers and deployers of AI systems, Connecticut has passed a multi-part framework that will impose requirements on large frontier developers, operators of AI companions, developers of general purpose models capable of generating synthetic digital content, and developers and deployers of automated employment-related decision technologies. Governor Ned Lamont (D) is expected to sign the bill into law.

Key provisions in the law include the following:

  • Employee Reporting Protections: Frontier developers, defined as any person doing business in Connecticut who trains, initiates the training of, or intends to train a foundation model using, or intending to use, a quantity of computing power greater than 10^26 integer or floating-point operations, will be required to implement reporting procedures and protections for employees. For example, frontier developers will be prohibited from retaliating against employees who report certain public health or safety risks. Further, large frontier developers (frontier developers with more than $500M in revenue) must implement internal reporting processes for employees to report certain public health or safety risks and must provide periodic updates to reporting employees. The Attorney General can enforce this section and seek civil penalties of up to $1,000 per violation.
  • AI Companions: SB 5 will prohibit operators (entities who provide an AI companion to or operate an AI companion for a user) from providing AI companions (i.e., AI with a natural language interface that provides adaptive, human-like responses to user inputs and is able to sustain a relationship across multiple interactions) unless certain conditions are met. For example, operators must implement a protocol that uses evidence-based methods to detect indicators of risk of suicide, self-harm, or imminent physical violence and implement reasonable measures to prevent the AI companion from generating outputs that encourage these activities. Additionally, operators must implement reasonable measures to prohibit and prevent an AI companion from claiming to be a human or generating any outputs that refute or conflict with any disclosure that the AI companion is not a human being. Violations of this section will be enforceable by the Attorney General under the state’s unfair and deceptive practices statute.
  • AI Companions Provided to Minors: Operators will be prohibited from providing AI companions to any user that the operator knows, or has reason to believe, is under 18 years of age unless certain protective measures have been put into place that meet or exceed industry standards. For example, such measures must be designed to prevent the AI companion from (a) encouraging the user to harm others, (b) discouraging the user from seeking mental health services from a licensed mental health professional, or (c) discouraging the user from seeking assistance from an appropriate adult. A violation of this section will be enforceable by the Attorney General under the state’s unfair and deceptive practices statute.
  • Automated Employment-Related Decision Technology: The law will create requirements for developers and deployers of automated employment-related decision technology, defined as technology that processes personal data and uses computation to generate any output to make or materially influence an employment-related decision. Among other things, deployers will be required to provide employees and applicants a written notice with information about the use of the technology, and developers will be required to provide all information deployers need to comply with their obligations under the law. This provision of the law will be enforceable by the Attorney General under the state’s unfair and deceptive practices statute.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.


Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.


Matthew Shapanka is a strategic policy and regulatory attorney who helps technology companies and other businesses navigate complex, high-stakes legislative, regulatory, and enforcement matters at the intersection of law and politics. Drawing on 15+ years of experience across private practice, the U.S. Senate, state government, and political campaigns, Matt develops comprehensive policy strategies that identify regulatory risks and position clients to shape policy outcomes.

Public Policy and Regulatory Strategy

Matt serves as a strategic advisor to Fortune 200 companies on emerging technology policy, including artificial intelligence regulation, connected and autonomous vehicles, semiconductors, IoT, and national security matters. He translates complex legal and technical issues into actionable legislative and regulatory strategy, building the policy frameworks and advocacy infrastructure that enable clients to influence policy. He develops policy collateral for federal, state, and international advocacy, coordinates multi-stakeholder coalitions, and represents clients before Congress, federal agencies, and state legislative and regulatory bodies.

His technology policy experience includes securing unprecedented Presidential intervention in the $118 billion Qualcomm-Broadcom transaction (for which Covington was recognized as The American Lawyer 2019 “Dealmakers of the Year”), advising Fortune 200 companies on Bureau of Industry and Security connected vehicle rules, and counseling major internet platforms on autonomous vehicle policy across dozens of states.

Matt leads Covington’s state public policy practice, managing complex multistate legislative and regulatory advocacy campaigns. His state-level work includes securing a last-minute amendment to California’s 2023 money transmitter legislation on behalf of a fintech client and representing major technology companies on state AI, autonomous vehicle, and political advertising compliance matters across dozens of jurisdictions.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration under Chairwoman Amy Klobuchar (D-MN), where he negotiated the landmark bipartisan Electoral Count Reform Act – legislation that updated presidential election certification procedures for the first time in nearly 140 years. He also oversaw the Committee’s bipartisan January 6th investigation, developing protocols that resulted in unanimous passage of new Capitol security legislation.

Both in Congress and at Covington, Matt has prepared dozens of corporate executives, nonprofit leaders, academics, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter and strategist who has composed dozens of bills and amendments introduced in Congress and state legislatures, including many that have been enacted into law.

Election and Political Law Compliance and Enforcement

As a member of Covington’s Chambers-ranked (Band 1) Election and Political Law practice, Matt advises businesses, nonprofits, political committees, candidates, and donors on the full range of federal and state political law compliance matters, including:

  • Election and campaign finance laws
  • Lobbying disclosure
  • Government ethics rules
  • The SEC Pay-to-Play Rule

He also conducts political law due diligence for M&A transactions, counsels major political funders and donors in compliance and enforcement matters, and represents candidates, ballot measure committees, and donors in election disputes and recounts.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.


Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.


Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.


Andrew Siegel defends clients in FTC, DOJ, and State AG consumer protection investigations and enforcement actions, including against allegations relating to advertising and marketing practices, subscription autorenewals, and unfair and deceptive trade practices.

Andrew has extensive experience representing clients across industries, including in the technology, consumer products, and financial services sectors, in high-stakes government investigations by federal and state regulators. He defends clients against allegations relating to the marketing of online subscriptions, the use of algorithms and artificial intelligence, undisclosed endorsements, claim substantiation, and other unfair and deceptive practices. He also counsels clients on proactive compliance with FTC and state regulations governing consumer interactions.

In addition, Andrew advises clients on the protection of customer information and other sensitive data as they respond to demands from U.S. and international law enforcement agencies and government regulators, as well as private plaintiffs. Andrew assists clients in navigating U.S. and international data privacy requirements as they respond to federal grand jury subpoenas, international legal demands, and discovery requests.


Evan Chiacchiaro is an associate in the firm’s Washington, DC office and member of the Technology and Communications Regulation Practice Group.

Evan advises clients on a range of technology regulatory issues, including emerging artificial intelligence compliance matters and compliance with Federal Communications Commission (FCC) regulations. Evan also maintains an active pro bono practice focused on civil rights.