With Congress in summer recess and state legislative sessions waning, the Biden Administration continues to implement its October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”).  On July 26, the White House announced a series of federal agency actions under the EO for managing AI safety and security risks, hiring AI talent into the government workforce, promoting AI innovation, and advancing U.S. global AI leadership.  On the same day, the Department of Commerce released new guidance on AI red-team testing, secure AI software development, and generative AI risk management, as well as a plan for promoting and developing global AI standards.  These announcements, which the White House emphasized met the 270-day deadline set by the EO, mark the latest in a series of federal agency activities to implement the EO.

AI Red-Team Testing and Risk Management

On July 26, NIST’s U.S. AI Safety Institute (“US AISI”) released an initial public draft of guidelines on “Managing Misuse Risk for Dual-Use Foundation Models.”  The guidelines outline practices for preventing malicious actors from recreating foundation models or deploying them to harm the public, and they include recommendations for developer red-team testing.  US AISI is accepting public comments on the draft until September 9, 2024.

NIST released the final version of its Generative AI Profile and companion resource.  The Generative AI Profile applies NIST’s 2023 AI Risk Management Framework to 12 risks that are “unique to or exacerbated by” generative AI, including data privacy and information security, environmental impacts, harmful bias, harmful or obscene synthetic content, and IP risks. 

These developments build on NIST’s May 28, 2024, release of its Assessing Risks and Impacts of AI (“ARIA”) pilot program, a test environment for evaluating the risks and impacts of large language models.  They also follow the National Science Foundation’s July 23, 2024, launch of an AI test beds initiative for studying AI methods and systems and its January 24, 2024, launch of the National AI Research Resource (“NAIRR”) pilot for compiling and sharing AI resources.

Secure AI Software Development Practices

NIST released the finalized version of its publication on “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models,” supplementing NIST’s 2022 Secure Software Development Framework document.  The new companion resource addresses risks related to malicious training data that can adversely affect the performance of generative AI systems.  NIST also released Dioptra, an open-source software package designed to help AI developers and customers test the resiliency of AI models against adversarial attacks.

Global AI Standards

On July 26, the White House released the final version of its Implementation Roadmap for the May 2023 U.S. National Standards Strategy for Critical and Emerging Technology (“NSSCET”), following public comment on a draft roadmap released in June.  That same day, NIST released the final version of its “Plan for Global Engagement on AI Standards” (“Plan”), which incorporates stakeholder feedback and public comments on an earlier draft.  Drawing on principles from the NIST AI Risk Management Framework and the NSSCET, the Plan identifies over a dozen AI topic areas with clear or pressing needs for standardization, including defining key terminology for AI concepts; methods and metrics for assessing AI performance, risks, and benefits; and practices for maintaining and processing AI training data.

These recent federal agency actions are just a subset of ongoing activities to implement the Biden Administration’s AI EO, and we anticipate more AI initiatives and developments as the White House approaches the one-year anniversary of the EO in October.  These actions build on agency efforts in the first half of 2024, including a Commerce Department proposed rule regulating infrastructure-as-a-service providers and guidance from the White House Office of Management and Budget on the use of AI by federal agencies.

*                      *                      *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, developing strategies to guide businesses facing complex legislative, regulatory, and investigative matters. Matt draws on more than 15 years of experience across Capitol Hill, private practice, state government, and political campaigns to advise clients on leading-edge policy issues involving artificial intelligence, semiconductors, connected and autonomous vehicles, and other critical and emerging technologies.

Matt works with clients to develop and execute complex public policy initiatives that involve legal, political, and reputational risks. He regularly assists clients to:

Develop public policy strategies
Draft federal and state legislation and regulations
Analyze legislation, regulations, and other government initiatives
Craft testimony, regulatory comments, fact sheets, letters and other advocacy materials
Prepare company executives and other witnesses to testify before Congress, state legislatures, and regulatory bodies
Represent clients before Congress, the White House, federal agencies, state legislatures, and state regulatory agencies
Build and manage policy advocacy coalitions

He advises clients across multiple policy areas, including matters involving regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and semiconductors; national security; intellectual property; antitrust; financial services technologies (“fintech”); food and beverage regulation; COVID-19 pandemic response and recovery; and election administration and campaign finance.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee. Most significantly, Matt staffed the Committee in passing the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and in conducting the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6, 2021, attack on the Capitol.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, as a member of Covington’s nationally recognized (Chambers Band 1) Election and Political Law Practice Group, Matt advises and represents clients on the full range of political law compliance and enforcement matters, including:

Federal election, campaign finance, lobbying, and government ethics laws
The Securities and Exchange Commission’s “Pay-to-Play” rule
Election and political laws of states and municipalities across the country

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.