On January 16, the attorneys general of 25 states – including California, Illinois, and Washington – and the District of Columbia filed reply comments to the Federal Communications Commission’s (FCC) November Notice of Inquiry on the implications of artificial intelligence (AI) technology for efforts to mitigate robocalls and robotexts.
The Telephone Consumer Protection Act (TCPA) limits the conditions under which a person may lawfully make a telephone call using “an artificial or prerecorded voice.” The reply comments call on the FCC to take the position that “any type of AI technology that generates a human voice should be considered an ‘artificial voice’ for purposes of the [TCPA].” They further state that a more permissive approach would “act as a ‘stamp of approval’ for unscrupulous businesses seeking to employ AI technologies to inundate consumers with unwanted robocalls for which they did not provide consent[], all based on the argument that the business’s advanced AI technology acts as a functional equivalent of a live agent.”
On January 31, FCC Chairwoman Jessica Rosenworcel announced a proposal to “recognize calls made with AI-generated voices [as] ‘artificial’ voices under the [TCPA].” The Chairwoman explained that the proposed approach would offer “State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers.”