Andrew Longhi

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew's practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large language models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.

On October 23, the Federal Communications Commission (“FCC”) released a Notice of Inquiry (“NOI”) seeking comment on potential initiatives to address customer service concerns among regulated communications service providers. 

The FCC stated that the goal of the NOI is “to ensure that consumers have appropriate access to the customer services resources they require to interact with their service provider in a manner that allows them to efficiently resolve issues, avoid unnecessary charges, and make informed choices regarding the services they obtain from service providers.”  The inquiry is specific to regulated cable operators, Direct Broadcast Satellite providers, voice service providers, and broadband service providers (collectively referred to as “service providers”).

Continue Reading FCC to Examine Customer Service Issues in the Communications Industry

On September 25, the Federal Trade Commission (FTC) announced that it brought five actions against companies it accused of using “artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers.”  These actions, which the FTC indicated are part of its new enforcement sweep called “Operation AI Comply,” reflect the FTC’s repeatedly stated intention to exercise its authority under the FTC Act and other rules in connection with AI-related products and marketing claims. 

The five actions rely on a range of FTC authorities and target several different forms of conduct. 

  • DoNotPay: The FTC brought an action against DoNotPay, which purports to offer automated legal services, on the theory that it violated the FTC Act by making false claims that its product could substitute for the expertise of a human lawyer.  A proposed settlement would require DoNotPay to pay $193,000, provide notices to past subscribers, and avoid making claims about its ability to substitute AI for professional expertise without proper evidence.

Continue Reading FTC Announces New Enforcement Actions on Marketing of AI-Enabled Products

On September 26, 2024, the Federal Communications Commission (“FCC”) issued a $6 million fine against political consultant Steve Kramer for “illegal robocalls made using deepfake, AI-generated voice cloning technology and caller ID spoofing to spread election misinformation to potential New Hampshire voters prior to the state’s January primary presidential election.”   The fine follows a $1

Continue Reading FCC Fines Political Consultant $6 Million for AI-based “Deepfake” Robocalls

On September 18, 2024, the Texas Office of the Attorney General (“OAG”) announced that it reached “a first-of-its-kind settlement with a Dallas-based artificial intelligence healthcare technology company called Pieces Technologies” (“Pieces”) to resolve “allegations that the company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of its products.”

According to the press release, “at least four major Texas hospitals have been providing their patients’ healthcare data in real time to Pieces so that its generative AI product can ‘summarize’ patients’ condition and treatment for hospital staff.”  Pieces developed “a series of metrics to claim that its healthcare AI products were ‘highly accurate,’ including advertising and marketing the accuracy of its products and services by claiming an error rate or ‘severe hallucination rate’ of ‘<1 per 100,000.’”  The OAG claimed that its “investigation found that these metrics were likely inaccurate and may have deceived hospitals about the accuracy and safety of the company’s products” in violation of the Texas Deceptive Trade Practices Act.

Continue Reading Healthcare Technology Company Settles Texas Attorney General Allegations Regarding Accuracy of Generative AI Products

On August 16, 2024, the U.S. Department of Transportation (the “USDOT”) announced the Saving Lives with Connectivity: A Plan to Accelerate V2X Deployment (the “Plan”). The Plan is intended to “accelerate the deployment” of vehicle-to-everything (“V2X”) technology and support USDOT’s goal of establishing a comprehensive approach to roadway fatality reduction. The Plan states that USDOT is “pursuing a comprehensive approach to reduce the number of roadway fatalities to the only acceptable number: zero.”

The Plan describes V2X technology as technology that “enables vehicles to communicate with each other, with road users such as pedestrians, cyclists, individuals with disabilities, and other vulnerable road users, and with roadside infrastructure, through wirelessly exchanged messages.” Such messages may convey information about vehicles’ locations and actions, as well as roadway conditions such as weather, pavement conditions, and work zones. The Plan notes that currently deployed V2X technology has already demonstrated safety benefits on a small scale and calls for expanded deployment of such technology.
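For illustration only, the short sketch below models the kind of wirelessly exchanged vehicle status message the Plan describes. The message fields and the hard-brake check are hypothetical simplifications created for this post; they are assumptions and do not reproduce the message formats defined by any actual V2X standard.

    from dataclasses import dataclass

    # Hypothetical, simplified stand-in for a V2X status broadcast; the field
    # names are illustrative assumptions, not fields from a real standard.
    @dataclass
    class VehicleStatusMessage:
        vehicle_id: str     # temporary, rotating identifier
        latitude: float     # degrees
        longitude: float    # degrees
        speed_mps: float    # meters per second
        heading_deg: float  # direction of travel, degrees clockwise from north
        brake_active: bool  # whether the brakes are currently applied

    # A nearby vehicle or roadside unit receiving such messages could flag
    # potential hazards, e.g., hard braking by a vehicle still moving quickly.
    def hard_brake_warning(msg: VehicleStatusMessage,
                           speed_threshold_mps: float = 10.0) -> bool:
        return msg.brake_active and msg.speed_mps > speed_threshold_mps

    if __name__ == "__main__":
        incoming = VehicleStatusMessage("veh-1234", 38.8977, -77.0365, 15.2, 92.0, True)
        print(hard_brake_warning(incoming))  # True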

In a press release accompanying the Plan, U.S. Secretary of Transportation Pete Buttigieg said, “The Department has reached a key milestone today in laying out a national plan for the transportation industry that has the power to save lives and transform the way we travel … The Department recognizes the potential safety benefits of V2X, and this plan will move us closer to nationwide adoption of this technology.”

Continue Reading USDOT Releases Plan to Accelerate V2X Deployment

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

On July 24, 2024, the U.S. Court of Appeals for the Fifth Circuit struck down the Federal Universal Service Fund (USF) in Consumers’ Research et al. v. FCC.  In a 9-7 en banc decision, the majority reversed an earlier decision by a three-judge panel and held that the program created by the Federal Communications Commission (FCC) based on provisions in the 1996 Telecommunications Act constitutes an unlawful delegation of taxing power from Congress and thus violates Article I, § 1 of the Constitution.

The USF is a system for subsidizing telecommunications service to low-income households and high-cost areas by assessing telecommunications carriers; it also provides support to schools and libraries as well as rural health care facilities.  USF accomplishes this through four main mechanisms: the High-Cost Program, which provides support to certain telephone companies that serve high-cost areas; the Low Income Support Program, which subsidizes monthly telephone and broadband service for low-income customers; the E-rate Program, which subsidizes the provision of broadband connectivity and Wi-Fi to schools and libraries; and the Rural Health Care Program, which subsidizes the provision of telecommunications services to rural healthcare providers.

Continue Reading Fifth Circuit Holds Federal Universal Service Fund Program Unconstitutional, Creates Circuit Split

Updated July 15, 2024.  Originally posted July 11, 2024.

On July 8, 2024, the Federal Communications Commission (FCC) and a group of Internet Service Providers, represented by national and regional trade associations, filed supplemental briefs with the U.S. Court of Appeals for the Sixth Circuit in In re MCP NO. 185. On July 15, the Sixth Circuit granted an administrative stay until August 15, 2024 “[t]o provide sufficient opportunity to consider the merits of the motion.”

The Sixth Circuit is considering challenges to the FCC’s Safeguarding and Securing the Open Internet Order (Open Internet Order), which reclassified broadband Internet access service as a telecommunications service under Title II of the Communications Act of 1934, as amended.  The Order was scheduled to take effect on July 22, 2024, but the ISP representatives asked for a stay.  The Sixth Circuit requested that the parties address the implications of the Supreme Court’s decision to overturn the Chevron Doctrine in Loper Bright Enterprises v. Raimondo for the petitioners’ motion to stay enforcement.

Continue Reading Industry Groups and FCC File Briefs in Net Neutrality Case Following Loper Bright

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

Continue Reading Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane

On June 10, 2024, the U.S. Supreme Court denied a petition for a writ of certiorari in Consumers’ Research et al. v. Federal Communications Commission et al.  In its petition, the advocacy group Consumers’ Research, along with a small carrier and five individuals, sought the Supreme Court’s review of the constitutionality of

Continue Reading U.S. Supreme Court Declines to Review Constitutional Challenges to Federal Universal Service Fund Program