On February 6, 2024, the Federal Communications Commission (“FCC”) announced that it had sent a letter to Lingo Telecom, LLC (“Lingo”) demanding that Lingo “immediately stop supporting unlawful robocall traffic on its networks.” As background, Lingo is a Texas-based telecommunications provider that, according to the FCC’s letter, was the originating provider for “deepfake” calls made by Life Corp. to New Hampshire voters on January 21, 2024. The calls, which imitated President Biden’s voice and falsified caller ID information, took place two days before the New Hampshire presidential primary and reportedly advised Democratic voters to refrain from voting in the primary.
While the cease-and-desist letter focuses on Lingo, the FCC’s press release states that the New Hampshire State Attorney General’s office issued a cease-and-desist letter to Life Corp. and that the “Anti-Robocall Multistate Litigation Task Force is also expected to issue a similar letter to Life [Corp.].”
The FCC’s letter to Lingo alleges that the calls “intended to confuse the recipient[s] . . . [and] create the false impression that the deepfake voice recording was from President Biden, which could wrongly give a prospective voter the impression that the president of the United States was telling them not to vote in the upcoming New Hampshire primary election.” It also stated that originating providers must “[t]ake affirmative, effective measures to prevent new and renewing customers from using its network to originate illegal calls.”
Relatedly, the FCC issued a “K4 Order” the same day notifying all U.S.-based voice service providers that, if Lingo fails to effectively mitigate illegal robocall traffic, including the use of deepfakes, within 48 hours of the notice, they may block voice calls or cease to accept traffic from Lingo without liability under the Communications Act of 1934 or the FCC’s rules.
The FCC’s cease-and-desist letter and accompanying K4 Order highlight the FCC’s recent focus on mitigating the use of artificial intelligence technologies for spam, junk, or other illegal calls. The letter comes just a few weeks after 26 state attorneys general asked the FCC to take the position that any artificial intelligence technology “that generates a human voice should be considered an ‘artificial voice’ for purposes of the TCPA.” FCC Chairwoman Jessica Rosenworcel recently announced a proposal to “recognize calls made with AI-generated voices [as] ‘artificial’ voices under the [TCPA].” You can read more about the state attorneys general’s letter and Chairwoman Rosenworcel’s statement here.