Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics. He advises clients before Congress, state legislatures, and government agencies, helping businesses to navigate complex legislative, regulatory, and investigations matters, mitigate their legal, political, and reputational risks, and capture business opportunities.

Drawing on more than 15 years of experience on Capitol Hill and in private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels and represents businesses in legislative and regulatory matters involving intellectual property, national security, regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and other tech policy issues. He also represents clients facing congressional investigations or inquiries across a range of committees and subject matters.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch.  Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act—a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections—and the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

On January 29, Senator Josh Hawley (R-MO) introduced the Decoupling America’s Artificial Intelligence Capabilities from China Act (S. 321), one of the first bills of the 119th Congress to address escalating U.S. competition with China on artificial intelligence.  The new legislation comes just days after Chinese AI company DeepSeek launched its R1 AI model, whose advanced capabilities have been widely viewed as a possible turning point in the U.S.-China AI race.  If enacted, S. 321 would impose sweeping prohibitions on U.S. imports and exports of AI and generative AI technologies and R&D to and from China and bar U.S. investments in AI technology developed or produced in China.  The bill, which was referred to the Senate Judiciary Committee, had no cosponsors and no House companion at the time of introduction.

Specifically, S. 321 would prohibit U.S. persons—including any corporation or educational or research institution in the U.S. or controlled by U.S. citizens or permanent residents—from (1) exporting AI or generative AI technology or IP to China or (2) importing AI or generative AI technology or IP developed or produced in China.  In addition, the bill would bar U.S. persons from transferring AI or generative AI research to China or Chinese educational institutions, research institutions, corporations, or government entities (“Chinese entities of concern”), or from conducting AI or generative AI R&D within China or for, on behalf of, or in collaboration with such entities.

Finally, the bill would prohibit any U.S. person from financing AI R&D with connections to China.  The bill specifically prohibits U.S. persons from “holding or managing any interest in” or extending loans or lines of credit to Chinese entities of concern that conduct AI- or generative AI-related R&D, produce goods that incorporate AI or generative AI R&D, assist with Chinese military or surveillance capabilities, or are implicated in human rights abuses.

U.S. Secretary of Commerce nominee Howard Lutnick delivered a detailed preview of what to expect from the Trump Administration on key issues around technology, trade, and intellectual property.  At his nomination hearing before the Senate Committee on Commerce, Science, and Transportation on Wednesday, January 29, Lutnick faced questions from senators about the future of the CHIPS and Science Act, global trade, and particularly U.S. technological competition with China, including export controls and artificial intelligence after Chinese AI company DeepSeek’s release of its R1 model.  Lutnick, who was introduced by Vice President J.D. Vance, committed to implementing the Trump Administration’s America First agenda.

If confirmed, Lutnick will lead the Commerce Department’s vast policy portfolio, including export controls for emerging technologies, broadband spectrum access and deployment, AI innovation, and climate and weather issues through the National Oceanic and Atmospheric Administration (“NOAA”).  In his responses to senators’ questions, Lutnick emphasized his pro-business approach and his intent to implement President Trump’s policy objectives including bringing manufacturing—particularly of semiconductors—back to the United States and establishing “reciprocity” with China in response to what he called “unfair” treatment of U.S. businesses.

The results of the 2024 U.S. election are expected to have significant implications for AI legislation and regulation at both the federal and state level. 

Like the first Trump Administration, the second Trump Administration is likely to prioritize AI innovation, R&D, national security uses of AI, and U.S. private sector investment and leadership in AI.  Although recent AI model testing and reporting requirements established by the Biden Administration may be halted or revoked, efforts to promote private-sector innovation and competition with China are expected to continue.  And while antitrust enforcement involving large technology companies may continue in the Trump Administration, more prescriptive AI rulemaking efforts such as those launched by the current leadership of the Federal Trade Commission (“FTC”) are likely to be curtailed substantially.

In the House and Senate, Republican majorities are likely to adopt priorities similar to those of the Trump Administration, with a continued focus on AI-generated deepfakes and prohibitions on the use of AI for government surveillance and content moderation. 

At the state level, legislatures in California, Texas, Colorado, Connecticut, and others likely will advance AI legislation on issues ranging from algorithmic discrimination to digital replicas and generative AI watermarking. 

This post covers the effects of the recent U.S. election on these areas and what to expect as we enter 2025.  (Click here for our summary of the 2024 election implications on AI-related industrial policy and competition with China.)

Technology companies will be in for a bumpy ride in the second Trump Administration.  President-elect Trump has promised to adopt policies that will accelerate the United States’ technological decoupling from China.  However, he will likely take a more hands-off approach to regulating artificial intelligence and reverse several Biden Administration policies related to AI and other emerging technologies.

On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become the second state to enact industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well, as the California Privacy Protection Agency considers rules that would apply to certain automated decision and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.

On September 29, California Governor Gavin Newsom (D) vetoed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), putting an end, for now, to a months-long effort to establish public safety standards for developers of large AI systems.  SB 1047’s sweeping AI safety and security regime, which included annual third-party safety audits, shutdown capabilities, detailed safety and security protocols, and incident reporting requirements, would likely have established a de facto national safety standard for large AI models if enacted.  The veto followed rare public calls from Members of California’s congressional delegation—including Speaker Emerita Nancy Pelosi (D-CA) and Representatives Ro Khanna (D-CA), Anna Eshoo (D-CA), Zoe Lofgren (D-CA), and Jay Obernolte (R-CA)—for the governor to reject the bill.

In his veto message, Governor Newsom noted that “[AI] safety protocols must be adopted” with “[p]roactive guardrails” and “severe consequences for bad actors,” but he criticized SB 1047 for regulating based on the “cost and number of computations needed to develop an AI model.”  SB 1047 would have defined “covered models” as AI models trained using more than 10^26 FLOPS of computational power valued at more than $100 million.  In relying on cost and computing thresholds rather than “the system’s actual risks,” Newsom argued that SB 1047 “applies stringent standards to even the most basic functions–so long as a large system deploys it.”  Newsom added that SB 1047 could “give the public a false sense of security about controlling this fast-moving technology” while “[s]maller, specialized models” could be “equally or even more dangerous than the models targeted by SB 1047.”

On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states’ efforts to regulate AI.  The legislation, which draws on concepts from the White House’s 2023 AI Executive Order (“AI EO”), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of “covered models.”  Governor Gavin Newsom (D) has until September 30 to sign or veto the bill. 

If signed into law, SB 1047 would join Colorado’s SB 205—the landmark AI anti-discrimination law passed in May and covered here—as another de facto standard for AI legislation in the United States in the absence of congressional action.  In contrast to Colorado SB 205’s focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling “critical harms” to public safety. Continue Reading California Legislature Passes Landmark AI Safety Legislation

With Congress in summer recess and state legislative sessions waning, the Biden Administration continues to implement its October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”).  On July 26, the White House announced a series of federal agency actions under the EO for managing AI safety and security risks, hiring AI talent in the government workforce, promoting AI innovation, and advancing U.S. global AI leadership.  On the same day, the Department of Commerce released new guidance on AI red-team testing, secure AI software development, generative AI risk management, and a plan for promoting and developing global AI standards.  These announcements—which the White House emphasized were on time within the 270-day deadline set by the EO—mark the latest in a series of federal agency activities to implement the EO.

This update focuses on how growing quantum sector investment in the UK and US is leading to the development and commercialization of quantum computing technologies with the potential to revolutionize and disrupt key sectors.  This is a fast-growing area that is seeing significant levels of public and private investment activity.  We take a look at how approaches differ in the UK and US, and discuss how a concerted, international effort is needed both to realize the full potential of quantum technologies and to mitigate new risks that may arise as the technology matures.

Quantum Computing

Quantum computing uses quantum mechanics principles to solve certain complex mathematical problems faster than classical computers.  Whilst classical computers use binary “bits” to perform calculations, quantum computers use quantum bits (“qubits”).  The value of a bit can only be zero or one, whereas a qubit can exist as zero, one, or a combination of both states (a phenomenon known as superposition), allowing quantum computers to solve certain problems exponentially faster than classical computers.
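For readers who want the bit/qubit distinction in concrete terms, the following is a minimal plain-Python sketch (illustrative only, not tied to any quantum computing SDK; all variable names are hypothetical) of an equal superposition state:

```python
import math

# A classical bit holds exactly one value at a time: 0 or 1.
classical_bit = 0

# A qubit's state is described by two amplitudes, one for the |0> state and
# one for the |1> state, normalized so that |a|^2 + |b|^2 = 1.  The squared
# magnitudes give the probabilities of measuring 0 or 1.
a = 1 / math.sqrt(2)  # amplitude for |0>
b = 1 / math.sqrt(2)  # amplitude for |1>

assert abs(abs(a) ** 2 + abs(b) ** 2 - 1) < 1e-9  # normalization check

p_zero = abs(a) ** 2  # probability of measuring 0
p_one = abs(b) ** 2   # probability of measuring 1

# Each outcome is equally likely: the qubit is "both" 0 and 1 until measured.
print(round(p_zero, 10), round(p_one, 10))  # 0.5 0.5
```

The speedups described above come from manipulating many such amplitudes at once across multiple qubits, which this single-qubit sketch only hints at.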

The applications of quantum technologies are wide-ranging, and quantum computing has the potential to revolutionize many sectors, including life sciences, climate and weather modelling, financial portfolio management, and artificial intelligence (“AI”).  However, advances in quantum computing may also lead to some risks, the most significant being to data protection.  Hackers could exploit the ability of quantum computing to solve complex mathematical problems at high speeds to break currently used cryptography methods and access personal and sensitive data.

This is a rapidly developing area that governments are only just turning their attention to.  Governments are focusing not just on “quantum-readiness” and countering the emerging threats that quantum computing will present in the hands of bad actors (the US, for instance, is planning the migration of sensitive data to post-quantum encryption), but also on ramping up investment and growth in quantum technologies.

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”