Artificial Intelligence (AI)

On July 29, 2024, the American Bar Association (“ABA”) Standing Committee on Ethics and Professional Responsibility released its first opinion regarding attorneys’ use of generative artificial intelligence (“GenAI”).  The opinion, Formal Opinion 512 on Generative Artificial Intelligence Tools (the “Opinion”), generally confirms what many have assumed: GenAI can be a valuable tool to enhance efficiency in the practice of law, but attorneys utilizing GenAI must be cognizant of the effect that the tool has on their ethical obligations, including their duties to provide competent legal representation and to protect client information.

Continue Reading ABA Publishes First Opinion on the Use of Generative AI in the Legal Profession

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I.       Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

On Thursday, July 25, the Federal Communications Commission (FCC) released a Notice of Proposed Rulemaking (NPRM) proposing new requirements for radio and television broadcasters and certain other licensees that air political ads containing content created using artificial intelligence (AI).  The NPRM was approved on a 3-2 party-line vote and comes in the wake of an announcement made by FCC Chairwoman Jessica Rosenworcel earlier this summer about the need for such requirements, which we discussed here.

At the core of the NPRM are two proposed requirements.  First, parties subject to the rules would have to announce on-air that a political ad (whether a candidate-sponsored ad or an “issue ad” purchased by a political action committee) was created using AI.  Second, those parties would have to include a note in their online political files for political ads containing AI-generated content disclosing the use of such content.  Additional key features of the NPRM are described below.

Continue Reading FCC Proposes Labeling and Disclosure Rules for AI-Generated Content in Political Ads

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices, and sets out regulations on “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (“GPAI”) models.

The AI Act’s regulations will take effect in different stages.  Rules regarding prohibited practices will apply as of 2 February 2025; obligations on GPAI models will apply as of 2 August 2025; and both transparency obligations and obligations on high-risk AI systems will apply as of 2 August 2026.  That said, there are exceptions for high-risk AI systems and GPAI models already placed on the market.

Continue Reading EU Artificial Intelligence Act Published

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

Continue Reading Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane

A.    Starting point in Germany

Why is the classification of employees relevant? In Germany, it has considerable consequences, ranging from the applicability of employee protection standards (the classic example: protection against dismissal) to potential criminal law consequences for a client who turns out to be the employer and has not paid social security contributions.

Continue Reading EU rules on platform work (“crowdwork directive”) – who is an employee?

Tomorrow, the Federal Senate of the Brazilian National Congress may have its first vote on the country’s new artificial intelligence (AI) legal framework, which takes a human rights, risk management, and transparency approach.

The bill, to be marked up by the Senate Temporary Committee on Artificial Intelligence (“CTIA”), creates a broad and detailed legal framework. It contains rules on the rights of affected persons and groups, risk categorization and management, governance of AI systems, civil liability, penalties for non-compliance, and copyright protection. It also includes specific provisions for government use of AI, best practices and self-regulation, and communication of serious security incidents. Finally, it establishes an inter-agency regulatory system at the federal government level, whose main regulator will be chosen by the executive branch.

Continue Reading Key Vote Expected on Brazil’s Artificial Intelligence Legal Framework

The Federal Communications Commission (FCC) recently adopted two Notices of Apparent Liability (NALs) in connection with its investigation into AI-based “deepfake” calls made to New Hampshire voters on January 21, 2024.  The NALs follow a cease-and-desist letter sent on February 6 to Lingo Telecom, LLC (Lingo), a voice service provider that originated the calls, demanding that it stop originating unlawful robocall traffic on its network, which we previously blogged about here.

Continue Reading FCC Proposes Fines for AI-based “Deepfake” Robocalls Before New Hampshire Primary

On May 17, 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”).  The Convention represents the first international treaty on AI that will be legally binding on the signatories.  The Convention will be open for signature on September 5, 2024. 

The Convention was drafted by representatives from the 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay).  The Convention is not directly applicable to businesses – it requires the signatories (the “CoE signatories”) to implement laws or other legal measures to give it effect.  The Convention represents an international consensus on the key aspects of AI legislation that are likely to emerge among the CoE signatories.Continue Reading Council of Europe Adopts International Treaty on Artificial Intelligence