
Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics. He advises clients before Congress, state legislatures, and government agencies, helping businesses navigate complex legislative, regulatory, and investigative matters; mitigate their legal, political, and reputational risks; and capture business opportunities.

Drawing on more than 15 years of experience on Capitol Hill and in private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking action from Congress, state legislatures, and federal and state government agencies. He regularly counsels and represents businesses in legislative and regulatory matters involving intellectual property, national security, the regulation of critical and emerging technologies such as artificial intelligence and connected and autonomous vehicles, and other tech policy issues. He also represents clients facing congressional investigations or inquiries across a range of committees and subject matters.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch. Most significantly, Matt led the Committee's staff work on the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and on the Committee's bipartisan joint investigation (with the Homeland Security Committee) into the security planning for and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Since taking office, President Trump has issued dozens of executive orders, many addressing key technology policy areas, including international trade and investment, artificial intelligence (AI), connected vehicles and drones, and trade controls.  Some of these executive actions reverse the previous administration's efforts on these issues, such as the order revoking President Biden's October 2023 executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, while others initiate formal review processes, suggesting that the Trump Administration may preserve, and perhaps strengthen, key tech policies implemented by the Biden Administration and the first Trump term.

Several of the executive actions President Trump has taken so far offer important opportunities for stakeholders to weigh in with Executive Branch agencies as they consider next steps, including whether to revoke, expand, or retain tech policies initiated under President Biden. Key initiatives include: Continue Reading Flurry of Trump Administration Executive Orders Shakes Up Tech Policy, Creates Industry Opportunities

This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration.  This blog describes key actions on AI taken by the Trump Administration in January 2025.

Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure

Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration.  On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in AI Infrastructure.”  This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land.  Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI. Continue Reading January 2025 AI Developments – Transitioning to the Trump Administration

On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.”  The RFI marks a first step toward the implementation of the Trump Administration’s January 23 Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence” (the “EO”).  Specifically, the EO directs Assistant to the President for Science & Technology (and OSTP Director nominee) Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to “develop and submit to the President an action plan” to achieve the EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance” to “promote human flourishing, economic competitiveness, and national security.” Continue Reading Trump Administration Seeks Public Comment on AI Action Plan

On January 29, Senator Josh Hawley (R-MO) introduced the Decoupling America’s Artificial Intelligence Capabilities from China Act (S. 321), one of the first bills of the 119th Congress to address escalating U.S. competition with China on artificial intelligence.  The new legislation comes just days after Chinese AI company DeepSeek launched its R1 model, whose advanced capabilities have been widely viewed as a possible turning point in the U.S.-China AI race.  If enacted, S. 321 would impose sweeping prohibitions on U.S. imports and exports of AI and generative AI technologies and R&D to and from China and would bar U.S. investments in AI technology developed or produced in China.  The bill, which was referred to the Senate Judiciary Committee, had no cosponsors and no House companion at the time of introduction.

Specifically, S. 321 would prohibit U.S. persons—including any corporation or educational or research institution in the U.S. or controlled by U.S. citizens or permanent residents—from (1) exporting AI or generative AI technology or IP to China or (2) importing AI or generative AI technology or IP developed or produced in China.  In addition, the bill would bar U.S. persons from transferring AI or generative AI research to China or Chinese educational institutions, research institutions, corporations, or government entities (“Chinese entities of concern”), or from conducting AI or generative AI R&D within China or for, on behalf of, or in collaboration with such entities.

Finally, the bill would prohibit any U.S. person from financing AI R&D with connections to China.  Specifically, the bill would bar U.S. persons from “holding or managing any interest in,” or extending loans or lines of credit to, Chinese entities of concern that conduct AI- or generative AI-related R&D, produce goods that incorporate AI or generative AI R&D, assist with Chinese military or surveillance capabilities, or are implicated in human rights abuses. Continue Reading Senator Hawley Introduces Sweeping U.S.-China AI Decoupling Bill

U.S. Secretary of Commerce nominee Howard Lutnick delivered a detailed preview of what to expect from the Trump Administration on key issues around technology, trade, and intellectual property.  At his nomination hearing before the Senate Committee on Commerce, Science, and Transportation on Wednesday, January 29, Lutnick faced questions from senators about the future of the CHIPS and Science Act, global trade, and particularly U.S. technological competition with China, including export controls and artificial intelligence after the release of Chinese AI company DeepSeek’s R1 model.  Lutnick, who was introduced by Vice President J.D. Vance, committed to implementing the Trump Administration’s America First agenda.

If confirmed, Lutnick will lead the Commerce Department’s vast policy portfolio, including export controls for emerging technologies, broadband spectrum access and deployment, AI innovation, and climate and weather issues through the National Oceanic and Atmospheric Administration (“NOAA”).  In his responses to senators’ questions, Lutnick emphasized his pro-business approach and his intent to implement President Trump’s policy objectives, including bringing manufacturing—particularly of semiconductors—back to the United States and establishing “reciprocity” with China in response to what he called “unfair” treatment of U.S. businesses. Continue Reading What Commerce Secretary Nominee Howard Lutnick’s Confirmation Hearing Tells Us about Technology Policy in the Trump Administration

The results of the 2024 U.S. election are expected to have significant implications for AI legislation and regulation at both the federal and state levels.

Like the first Trump Administration, the second Trump Administration is likely to prioritize AI innovation, R&D, national security uses of AI, and U.S. private sector investment and leadership in AI.  Although recent AI model testing and reporting requirements established by the Biden Administration may be halted or revoked, efforts to promote private-sector innovation and competition with China are expected to continue.  And while antitrust enforcement involving large technology companies may continue in the Trump Administration, more prescriptive AI rulemaking efforts such as those launched by the current leadership of the Federal Trade Commission (“FTC”) are likely to be curtailed substantially.

In the House and Senate, Republican majorities are likely to adopt priorities similar to those of the Trump Administration, with a continued focus on AI-generated deepfakes and prohibitions on the use of AI for government surveillance and content moderation. 

At the state level, legislatures in California, Texas, Colorado, Connecticut, and other states are likely to advance AI legislation on issues ranging from algorithmic discrimination to digital replicas and generative AI watermarking.

This post covers the effects of the recent U.S. election on these areas and what to expect as we enter 2025.  (Click here for our summary of the 2024 election’s implications for AI-related industrial policy and competition with China.) Continue Reading U.S. AI Policy Expectations in the Trump Administration, GOP Congress, and the States

Technology companies will be in for a bumpy ride in the second Trump Administration.  President-elect Trump has promised to adopt policies that will accelerate the United States’ technological decoupling from China.  At the same time, he will likely take a more hands-off approach to regulating artificial intelligence and reverse several Biden Administration policies related to AI and other emerging technologies. Continue Reading Tech Policy in a Second Trump Administration: AI Promotion and Further Decoupling from China

On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session, which is scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” allowing participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become only the second state to enact industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well: the California Privacy Protection Agency is considering rules that would apply to certain automated decisionmaking and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI systems and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to issue advisory opinions and guidance on AI. Continue Reading Texas Legislature to Consider Sweeping AI Legislation in 2025

On September 29, California Governor Gavin Newsom (D) vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), putting an end, for now, to a months-long effort to establish public safety standards for developers of large AI systems.  SB 1047’s sweeping AI safety and security regime, which included annual third-party safety audits, shutdown capabilities, detailed safety and security protocols, and incident reporting requirements, would likely have established a de facto national safety standard for large AI models if enacted.  The veto followed rare public calls from Members of California’s congressional delegation—including Speaker Emerita Nancy Pelosi (D-CA) and Representatives Ro Khanna (D-CA), Anna Eshoo (D-CA), Zoe Lofgren (D-CA), and Jay Obernolte (R-CA)—for the governor to reject the bill.

In his veto message, Governor Newsom noted that “[AI] safety protocols must be adopted” with “[p]roactive guardrails” and “severe consequences for bad actors,” but he criticized SB 1047 for regulating based on the “cost and number of computations needed to develop an AI model.”  SB 1047 would have defined “covered models” as AI models trained using more than 10^26 floating-point operations of computing power, at a cost of more than $100 million.  Newsom argued that, by relying on cost and computing thresholds rather than “the system’s actual risks,” SB 1047 “applies stringent standards to even the most basic functions – so long as a large system deploys it.”  Newsom added that SB 1047 could “give the public a false sense of security about controlling this fast-moving technology,” while “[s]maller, specialized models” could be “equally or even more dangerous than the models targeted by SB 1047.” Continue Reading California Governor Vetoes AI Safety Bill

On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states’ efforts to regulate AI.  The legislation, which draws on concepts from the White House’s 2023 AI Executive Order (“AI EO”), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of “covered models.”  Governor Gavin Newsom (D) has until September 30 to sign or veto the bill. 

If signed into law, SB 1047 would join Colorado’s SB 205—the landmark AI anti-discrimination law passed in May and covered here—as another de facto standard for AI legislation in the United States in the absence of congressional action.  In contrast to Colorado SB 205’s focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling “critical harms” to public safety. Continue Reading California Legislature Passes Landmark AI Safety Legislation