A New Orleans magician recently made headlines for using artificial intelligence (AI) to imitate President Biden’s voice, without his consent, in a misleading robocall to New Hampshire voters. This was no magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity. As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers are increasingly taking steps to protect the public from deceptive AI-generated political content.
Media generated by AI to imitate an individual’s voice or likeness presents significant challenges for regulators. As deepfakes become increasingly indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders have all called for action to mitigate the threats deepfakes pose to elections.
Several federal regulators have taken steps to explore the regulation of AI-generated content within their existing jurisdictions. On February 8, 2024, the Federal Communications Commission issued a declaratory ruling confirming that the Telephone Consumer Protection Act restricts the use of “current AI technologies that generate human voices,” an interpretation endorsed by 26 state attorneys general.
Last year, the Federal Election Commission (FEC) took a step toward clarifying whether AI-generated deepfakes might violate the Federal Election Campaign Act’s prohibition on fraudulent misrepresentation of campaign authority. After initially deadlocking on a petition from Public Citizen to open a rulemaking on the subject, the FEC voted unanimously in August 2023 to accept public comment on whether to initiate rulemaking proceedings, though the agency has not yet taken further action.
Members of Congress also have introduced several bills to regulate deepfakes, though these efforts have moved slowly in committee. Many lawmakers remain determined to make progress on the issue, as senators from both parties expressed at an April Judiciary Subcommittee hearing. In March, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the bipartisan AI Transparency in Elections Act of 2024, which would require clear and conspicuous disclosures in certain political communications created or materially altered by artificial intelligence. Representatives Anna Eshoo (D-CA) and Neal Dunn (R-FL), both members of the House Bipartisan Task Force on Artificial Intelligence, introduced a more generally applicable deepfake disclosure bill that would also address the potential impact of the technology on elections.
Several states, including Minnesota, Texas, and California, already have enacted prohibitions or disclosure requirements on certain forms of manipulated media related to elections. These laws generally prohibit the knowing dissemination of deepfakes within one to three months of an election, and each requires intent to influence the election or to injure the depicted candidate’s reputation.
Even with AI risks top of mind for policymakers at all levels, the prospects of major reform in time for this cycle remain uncertain: the 2024 general election is just seven months away, Congress faces a full agenda, and state legislative sessions are drawing to a close.