The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.
The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.
Future of AI Policy in the U.S.
U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream. AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.
Over the past year, AI issues have drawn bipartisan interest and support. House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress. Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation. Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees.
Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law. The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies.
Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge. No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures. In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.
I. Major Policy & Regulatory Initiatives
Three versions of a comprehensive AI regulatory regime have emerged in Congress: two in the Senate and one in the House. We preview these proposals below.
A. SAFE Innovation: Values-Based Framework and New Legislative Process
In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence. Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.
Leader Schumer and his working group announced the SAFE Innovation Framework, five policy principles designed to encourage domestic AI innovation while ensuring adequate guardrails to protect national security, democracy, and public safety. These principles include:
- Security: Protect national security and promote economic security for workers by addressing the threat of job displacement.
- Accountability: Ensure transparent and responsible AI systems and hold accountable those who promote misinformation, engage in bias, or infringe IP.
- Foundations: Support development of algorithms and guardrails that protect democracy and promote foundational American values, including liberty, civil rights, and justice.
- Explainability: Regulations should require disclosures from AI developers to educate the public about AI systems, data, and content.
- Innovation: Regulations must promote U.S. global technology leadership.
Procedurally, Leader Schumer argued that the complexities of evolving technology require education of policymakers beyond the traditional committee hearing process. Instead, he announced that he would convene a series of AI Insight Forums—closed-door sessions with Senators and AI experts, including industry leaders, interest groups, AI developers, and other stakeholders.
While Leader Schumer emphasized that the Insight Forums would not replace traditional congressional committee hearings and markups, he said that those tools alone are insufficient to create the “right policies.”
The first AI Insight Forum was held on September 13, featuring civil rights, labor groups, and the creative community, as well as the leaders of major technology companies engaged in AI R&D.
Leader Schumer said that the process has no fixed timeline, but that he expects to draft legislation within the next few months.
B. Licensing Framework
Separate from Leader Schumer’s effort, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the chair and ranking member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced their own framework for AI regulation in September. The Blumenthal-Hawley approach focuses on transparency and accountability to address potential harms of AI and protect personal data of consumers.
Unlike the SAFE Innovation framework, which aims to develop consensus legislation based on guiding principles, the Blumenthal-Hawley framework proposes several specific policies alongside broad principles, drawing on the multiple AI-related hearings the two senators have held in the Privacy Subcommittee this year. Specifically, this consumer privacy-focused framework would:
- Create an independent oversight body to administer a registration and licensing process for companies developing “sophisticated general purpose AI models” and models to be used in certain “high risk situations.”
- Eliminate Section 230 immunity for AI-generated content. This proposal follows legislation Senators Blumenthal and Hawley introduced in June, the No Section 230 Immunity for AI Act, which would deny Section 230 immunity to internet platforms for damages from AI-generated content.
- Increase national security protections, including export controls, sanctions, and other restrictions to prevent foreign adversaries from obtaining advanced AI technologies.
- Promote transparency, including requiring AI developers to disclose training data and other key information to users and other stakeholders, requiring disclaimers when users are interacting with AI systems, and publicly disclosing adverse incidents or AI system failures.
- Protect consumers, including increased control over personal data used in AI systems and strict limitations on generative AI involving children.
Senators Blumenthal and Hawley said they will develop legislation to implement the framework by the end of this year.
C. Blue-Ribbon Commission
While the Senate engages in legislative fact-finding and drafting of concrete proposals based on “frameworks,” a bipartisan group of House members has introduced legislation to adopt an alternative approach. The National AI Commission Act (H.R. 4223)—introduced in June by Representatives Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA) and five additional colleagues (2 Republicans and 3 Democrats)—would establish a bipartisan commission of experts with backgrounds in computer science or AI technology, civil society, industry and workforce issues, and government (including national security) to “review the United States’ current approach to AI regulation,” make recommendations for a risk-based AI regulatory framework, and propose the structures necessary to implement that framework.
The President and congressional leaders would appoint 20 members to the commission, with each political party selecting half of the members. Once all members of the commission are appointed, the commission would have to release an interim report within six months, a final report six months after the interim report, and a follow-up report one year after the final report.
While Senator Brian Schatz (D-HI) joined the House press release announcing the introduction of the bill, a Senate companion has not been formally introduced.
II. Targeted Bipartisan Legislation
In addition to the bipartisan frameworks, several other bipartisan AI bills on targeted subject matter have been introduced, some of which have advanced through the committee process. Subject-specific bills generally fall into six major categories: (1) promoting AI R&D leadership, (2) protecting national security, (3) requiring disclosure of AI-generated content, (4) guarding against the use of AI-generated “deepfakes” in elections and artistic performances, (5) workforce training, and (6) coordinating and facilitating federal agency AI use.
1. Promoting AI R&D Leadership
Members in both Houses have introduced legislation to promote U.S. leadership in AI R&D. The Creating Resources for Every American to Experiment (CREATE) with Artificial Intelligence Act (CREATE AI Act) (S. 2714/H.R. 5077) is bipartisan, bicameral legislation—led by Senators Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ), and Mike Rounds (R-SD) and Representatives Anna Eshoo (D-CA), Michael McCaul (R-TX), Don Beyer (D-VA), and Jay Obernolte (R-CA)—that would establish the National Artificial Intelligence Research Resource (NAIRR). The NAIRR would provide software, data, tools and services, AI testbeds, and other resources to facilitate AI research by higher education institutions, non-profits, and other federal funding recipients.
2. Protecting National Security
Several bipartisan bills have been introduced to require government agencies to prepare for health crises or cyber attacks facilitated by AI and other emerging technologies. These include:
- The Block Nuclear Launch by Autonomous Artificial Intelligence Act (S. 1394/H.R. 2894)—introduced by Senators Ed Markey (D-MA), Elizabeth Warren (D-MA), Jeff Merkley (D-OR), and Bernie Sanders (I-VT), and Representatives Ted Lieu (D-CA), Ken Buck (R-CO), Don Beyer (D-VA), Jim McGovern (D-MA), and Jill Tokuda (D-HI)—would prohibit the use of federal funds to deploy any AI or other autonomous system to launch a nuclear weapon, or to select or engage targets of a nuclear weapon, without “meaningful human control.”
- The Artificial Intelligence and Biosecurity Risk Assessment Act (S. 2399/H.R. 4704), introduced by Senators Ed Markey (D-MA) and Ted Budd (R-NC), and Representatives Anna Eshoo (D-CA) and Dan Crenshaw (R-TX), would require the Health and Human Services Department to conduct risk assessments and implement strategies to address the threats posed to public health and national security by AI and other technological advancements.
- Senator Richard Blumenthal (D-CT) and Representatives Michael McCaul (R-TX), Gregory Meeks (D-NY), Jared Moskowitz (D-FL), Thomas Kean (R-NJ) and Del. Aumua Amata Coleman Radewagen (R-American Samoa) introduced a bill in February to require the State Department to report to Congress on efforts to implement the advanced capabilities component of the trilateral security partnership between Australia, the United Kingdom, and the United States, including on advanced capabilities such as artificial intelligence. The bill passed the House in March, 393-4 (under suspension of the rules), but remains pending in the Senate Foreign Relations Committee.
- The AI for National Security Act (H.R. 1718), introduced by Representatives Jay Obernolte (R-CA), Jimmy Panetta (D-CA), and Patrick Ryan (D-NY), would update Defense Department procurement laws to allow the procurement of AI-enabled cybersecurity measures.
3. Disclosure of AI-Generated Content
Several bills have also been introduced to require disclosure of AI-generated products, through a disclaimer requirement or other markings. Bipartisan disclosure measures include the AI Labeling Act (S. 2691), introduced by Senators Brian Schatz (D-HI) and John Kennedy (R-LA), which would require all generative AI systems to include a “clear and conspicuous disclosure” identifying content as AI-generated that, to the extent feasible, is “permanent and unable to be easily removed by subsequent users.”
4. Guarding against “Deepfakes”
The growth of AI has stoked fear of “deepfakes”—AI-generated audiovisual content that appropriates the voice and likeness of individuals without their consent—particularly in elections and artistic pursuits. Political campaigns and foreign actors, for example, could use AI systems to generate “deepfake” images or videos to influence elections.
Speaking at a recent Senate Rules Committee hearing on AI and elections, Leader Schumer emphasized the importance of AI guardrails to protect democracy, and committed to ensuring elections are a focus of a future AI Insight Forum. Election-related AI legislation already introduced includes:
- The Protect Elections from Deceptive AI Act (S. 2770), led by Senators Amy Klobuchar (D-MN), Josh Hawley (R-MO), Chris Coons (D-DE), and Susan Collins (R-ME) that would prohibit the distribution of materially deceptive AI-generated content in ads related to a federal election. The bill would also allow targeted candidates to seek removal of the content and recover damages.
- The Require the Exposure of AI-Led (REAL) Political Advertisements Act (S. 1596/H.R. 3044)—sponsored by Senators Amy Klobuchar (D-MN), Cory Booker (D-NJ), and Michael Bennet (D-CO), and Representative Yvette Clarke (D-NY)—which would require all political ads that include AI-generated content to display a disclaimer identifying content as AI-generated.
Lawmakers are also concerned about the use of AI in art and advertising, such as unauthorized celebrity endorsements of products, or AI-generated music featuring the voices of specific artists without their consent. Earlier this month, Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC) released a discussion draft of their Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which would impose liability on persons or companies who generate unauthorized digital reproductions of any person engaged in a performance, as well as on platforms hosting such content if they have knowledge that the content was not authorized by the subject.
5. Workforce Training
Members in both parties are concerned about the impact of AI systems on the American workforce. One bipartisan House bill, the Jobs of the Future Act (H.R. 4498)—introduced by Representatives Darren Soto (D-FL), Lori Chavez-DeRemer (R-OR), Lisa Blunt Rochester (D-DE), and Andrew Garbarino (R-NY)—would require the Labor Department and the National Science Foundation (NSF) to draft a report for Congress analyzing the impact of AI on American workers.
6. Coordinating and Facilitating Federal Agency AI Use
Several bipartisan bills, including bills that have passed committee, relate to the federal government’s use of AI for its own purposes, either to facilitate services or to advise the public when an agency may use AI systems. These include:
- The AI LEAD Act (S. 2293), sponsored by Senators Gary Peters (D-MI) and John Cornyn (R-TX), would establish the position of Chief Artificial Intelligence Officer at each federal agency, who would “ensure the responsible research, development, acquisition, application, governance, and use” of AI by the agency. The bill passed the Senate Homeland Security and Governmental Affairs Committee (HSGAC) in July, but it has not yet been considered on the Senate floor.
- The AI Leadership Training Act (S. 1564), sponsored by Senators Gary Peters (D-MI) and Mike Braun (R-IN), would require the Office of Personnel Management to establish an AI training program for federal agency management and supervisory employees. This bill passed out of HSGAC in May, but has not been considered on the Senate floor.
- The AI Training Expansion Act (H.R. 4503), sponsored by Representatives Nancy Mace (R-SC) and Gerald Connolly (D-VA), would expand AI training within the executive branch. The bill passed the House Oversight and Accountability Committee in July on a bipartisan 39-2 vote, but has not been considered on the floor.
- The Transparent Automated Governance Act (S. 1865), introduced by Senators Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK), would require federal agencies to notify individuals whenever they are interacting with AI or other automated systems, or where such systems are making critical decisions. The bill would also create an appeals process to ensure human-review of AI-generated decisions.
- The Consumer Safety Technology Act (H.R. 4814), a bipartisan bill—led by Representatives Darren Soto (D-FL), Michael Burgess (R-TX), Lori Trahan (D-MA), and Brett Guthrie (R-KY)—that would require the Consumer Product Safety Commission to establish a pilot program for exploring the use of AI to support its mission.
III. What’s Next?
A. Legislative Outlook
Without an emerging legislative consensus, the future of comprehensive AI legislation remains uncertain. However, more than a dozen bipartisan bills have been introduced on a range of specific AI-related topics in both chambers of Congress. Targeted legislation introduced so far includes bills to promote U.S. leadership in AI R&D, to protect national security, to compel disclosure of AI use, to secure U.S. elections from deepfakes and other AI-generated misinformation, to address the impact of AI on U.S. workers, and to help the federal government leverage AI to deliver services. With bipartisan support and widespread interest in AI issues, it is likely that at least some targeted AI legislation could become law in the near future.
B. Executive Branch Developments
As Congress develops comprehensive AI legislation through hearings and working groups and advances narrower AI bills, the Biden Administration has taken concrete steps toward AI regulation using both existing legal authorities and the bully pulpit to address AI issues and promote responsible AI development and deployment.
President Biden is expected to issue a comprehensive executive order addressing AI risks in the coming weeks. While the Administration has not released details of its anticipated order, Dr. Arati Prabhakar, Director of the White House Office of Science and Technology Policy (OSTP), appearing at a September event on Building Responsible AI sponsored by the Information Technology Industry Council (ITI), said that the order will be “broad” and will reflect “everything that the President sees as possible under existing law to get better at managing risk and using the technology.”
Separately, the White House has been leading a months-long initiative to secure voluntary commitments from AI companies to mitigate risks, including commitments to safety testing and information sharing, investments in cybersecurity safeguards, and transparency. Fifteen major technology companies have taken the White House pledge so far (seven in July, followed by eight more in September).
The National Telecommunications and Information Administration (NTIA) is taking an active role in studying and developing policy recommendations for AI accountability. Most notably, in April 2023 it issued a request for comment (“RFC”) asking stakeholders to suggest policies the Administration can advance to assure the public that AI systems are “legal, effective, safe, and otherwise trustworthy.” NTIA’s work in this area has attracted significant public input and attention, with the agency receiving more than 1,400 comments in response to the RFC. NTIA has explained that it will use these comments and other inputs to inform the agency’s forthcoming report making policy recommendations for “mechanisms that can create earned trust in AI systems.”
Following a directive from Congress (section 5301 of the NDAA for FY2021), the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0) in January 2023. The AI RMF is voluntary guidance for public and private organizations designed to provide “standards, guidelines, best practices, methodologies, procedures, and processes” for developing trustworthy AI systems, assessing those systems, and mitigating risks from AI systems. NIST collaborated with both government and private stakeholders to develop the framework, including several rounds of public comment.
Other agencies across the Executive Branch are engaged in efforts to regulate AI systems, advance U.S. leadership in AI innovation, and enforce existing laws in the evolving AI ecosystem. While agency initiatives are constantly evolving, some significant actions the Administration has taken in 2023 so far include:
- In February, the U.S. Patent and Trademark Office (USPTO) issued a request for comment seeking public “input on the current state of AI technologies and inventorship issues that may arise in view of the advancement of such technologies, especially as AI plays a greater role in the innovation process.” The USPTO received 69 comments in response to the request, including on a range of questions about the use of AI in invention.
- In April, four federal agencies—the Consumer Financial Protection Bureau, the Justice Department, the Equal Employment Opportunity Commission, and the Federal Trade Commission—released a joint statement on their commitment to using existing law to prevent bias and discrimination in AI, describing how AI falls within these agencies’ civil rights enforcement authorities. The agencies “pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
- In May, the Department of Education released a report on the risks and opportunities related to AI in teaching, research, and assessment.
- Also in May, NSF announced $140 million in funding to launch new National AI Research Institutes focused on six major research areas, including trustworthy AI, AI for cybersecurity, and AI for “smart climate” applications.
- The Commerce Department’s National AI Advisory Committee delivered its first report to President Biden in May.
- In July, NIST launched a new public working group to build on the AI RMF.
- In August, the U.S. Copyright Office issued a “notice of inquiry” seeking public comment on fair use issues and the status of AI outputs to “help assess whether legislative or regulatory steps in this area are warranted.” In September, the Office extended the deadline for initial comments to October 30 and reply comments to November 29.
- In August, the Federal Election Commission published a notice seeking public comment on whether to initiate a rulemaking on the regulation of AI in campaign advertisements.
These actions are not an exhaustive list of measures the Administration has taken so far to address AI. Other agencies have also taken steps to use existing funding streams to invest in AI R&D, to issue reports or solicit public comments on AI issues within their jurisdiction, to bring enforcement actions against AI companies for violations of existing law, and other actions. We expect this uptick in Executive Branch activity will continue in parallel with legislative efforts in Congress.
C. Geopolitical Competition and AI
Congress is particularly focused on competition with China for technology leadership, and has taken steps to both promote U.S. innovation in foundational technologies, such as AI, and to restrict the transfer of critical emerging technologies to “foreign entities of concern,” including China.
In July, the Senate voted 91-6 to add the Outbound Investment Transparency Act, which covers AI, as an amendment to the FY2024 National Defense Authorization Act (NDAA). The bill would require notification to the Treasury Department of certain foreign investment activities involving AI, as well as semiconductors, quantum computers, and other sensitive technologies. While the House-passed NDAA does not include any outbound investment provisions, some Members of the House are advocating for imposing stricter sanctions on companies in China.
The Biden Administration has also taken its own action to address outbound investments in “countries of concern.” President Biden issued an executive order in August imposing restrictions on U.S. persons undertaking certain outbound transactions involving national security-sensitive technologies in the artificial intelligence, semiconductor, and quantum computing sectors. The order—which will be implemented by regulations issued by the Treasury Department—prohibits certain transactions and requires U.S. parties engaged in other transactions to notify the Treasury Department. We expect the NDAA conference process will include efforts to codify and enhance the rules proposed in the executive order. Legislation that codifies or modifies the order would give Congress a greater role in oversight of investment restrictions on key technologies like AI.
IV. Thought Leadership
Our public policy and regulatory teams closely track and contribute to the discussion around AI policy in the United States. Below is a sampling of related articles on our public-facing blogs:
- Covington Alert – U.S. Artificial Intelligence Policy: Legislative and Regulatory Developments (Oct. 17, 2023)
- U.S. Tech Legislative & Regulatory Update – Third Quarter 2023 (Oct. 4, 2023)
- Senators Release Bipartisan Framework for AI Legislation (Sept. 14, 2023)
- Framework for the Future of AI: Senator Cassidy Issues White Paper, Seeks Public Feedback (Sept. 14, 2023)
- FEC Seeks Comment on AI Petition After Earlier Deadlock, But New Rules Remain Elusive (Aug. 21, 2023)
- The Federal Trade Commission and Generative AI Competition Concerns (July 12, 2023)
- Senator Schumer Unveils New Two-Part Proposal to Regulate AI (June 21, 2023)
* * *