With Congress in summer recess and state legislative sessions waning, the Biden Administration continues to implement its October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”).  On July 26, the White House announced a series of federal agency actions under the EO for managing AI safety and security risks, hiring AI talent in the government workforce, promoting AI innovation, and advancing US global AI leadership.  On the same day, the Department of Commerce released new guidance on AI red-team testing, secure AI software development, and generative AI risk management, as well as a plan for promoting and developing global AI standards.  These announcements, which the White House emphasized met the 270-day deadline set by the EO, mark the latest in a series of federal agency activities to implement the EO.

AI Red-Team Testing and Risk Management

On July 26, NIST’s U.S. AI Safety Institute (“US AISI”) released initial public draft guidelines on “Managing Misuse Risk for Dual-Use Foundation Models.”  The guidelines outline practices for preventing malicious actors from recreating foundation models or deploying them to harm the public, and include recommendations for developer red-team testing.  US AISI is accepting public comments on the draft until September 9, 2024.

NIST also released the final version of its Generative AI Profile, a companion resource to its AI Risk Management Framework.  The Generative AI Profile applies NIST’s 2023 AI Risk Management Framework to 12 risks that are “unique to or exacerbated by” generative AI, including data privacy and information security, environmental impacts, harmful bias, harmful or obscene synthetic content, and IP risks.

These developments build on NIST’s May 28, 2024 release of its Assessing Risks and Impacts of AI (“ARIA”) pilot program, a test environment for evaluating the risks and impacts of large language models, as well as the National Science Foundation’s July 23, 2024 launch of an AI test beds initiative for studying AI methods and systems and its January 24, 2024 launch of the National AI Research Resource (“NAIRR”) pilot for compiling and sharing AI resources.

Secure AI Software Development Practices

NIST released the final version of its publication on “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models,” supplementing NIST’s 2022 Secure Software Development Framework.  The new companion resource addresses risks related to malicious training data that can adversely affect the performance of generative AI systems.  NIST also released Dioptra, an open-source software package designed to help AI developers and customers test the resiliency of AI models against adversarial attacks.

Global AI Standards

On July 26, the White House released the final version of its Implementation Roadmap for the May 2023 U.S. National Standards Strategy for Critical and Emerging Technology (“NSSCET”), following public comment on a draft roadmap released in June.  That same day, NIST also released the final version of its “Plan for Global Engagement on AI Standards” (“Plan”), which incorporates stakeholder feedback and public comments on an earlier draft.  Incorporating principles from the NIST AI Risk Management Framework and the NSSCET, the Plan identifies over a dozen AI topic areas with clear or pressing needs for standardization, including defining key terminology for AI concepts; methods and metrics for assessing AI performance, risks, and benefits; and practices for maintaining and processing AI training data.

These recent federal agency actions are just a subset of ongoing activities to implement the Biden Administration’s AI EO, and we anticipate more AI initiatives and developments as the White House approaches the one-year anniversary of the EO in October.  These efforts build on a number of agency actions in the first half of 2024, including a Commerce Department proposed rule regulating infrastructure-as-a-service providers and White House Office of Management and Budget guidance on the use of AI by federal agencies.

*                      *                      *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, advising clients on important legislative, regulatory and enforcement matters before Congress, state legislatures, and government agencies that present significant legal, political, and business opportunities and risks.

Drawing on more than 15 years of experience on Capitol Hill, private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels businesses—especially technology companies—on matters involving intellectual property, national security, and regulation of critical and emerging technologies like artificial intelligence and autonomous vehicles.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch, including U.S. Capitol security after the January 6, 2021 attack and the rules and procedures governing the Senate. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and the Committee’s joint bipartisan investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory policy matters and managing state advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.