With Congress in summer recess and state legislative sessions waning, the Biden Administration continues to implement its October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”). On July 26, the White House announced a series of federal agency actions under the EO for managing AI safety and security risks, hiring AI talent in the government workforce, promoting AI innovation, and advancing US global AI leadership. On the same day, the Department of Commerce released new guidance on AI red-team testing, secure AI software development, generative AI risk management, and a plan for promoting and developing global AI standards. These announcements—which the White House emphasized were on time within the 270-day deadline set by the EO—mark the latest in a series of federal agency activities to implement the EO.
Matthew Shapanka
Matthew Shapanka practices at the intersection of law, policy, and politics, advising clients on important legislative, regulatory, and enforcement matters before Congress, state legislatures, and government agencies that present significant legal, political, and business opportunities and risks.
Drawing on more than 15 years of experience on Capitol Hill, in private practice, in state government, and on political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels businesses—especially technology companies—on matters involving intellectual property, national security, and regulation of critical and emerging technologies like artificial intelligence and autonomous vehicles.
Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch, including U.S. Capitol security after the January 6, 2021 attack and the rules and procedures governing the Senate. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act—a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections—and the Committee’s joint bipartisan investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.
Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory policy matters and managing state advocacy efforts.
In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.
Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.
Quantum Computing: Developments in the UK and US
This update focuses on how growing quantum sector investment in the UK and US is leading to the development and commercialization of quantum computing technologies with the potential to revolutionize and disrupt key sectors. This is a fast-growing area that is seeing significant levels of public and private investment activity. We take a look at how approaches differ in the UK and US, and discuss how a concerted, international effort is needed both to realize the full potential of quantum technologies and to mitigate new risks that may arise as the technology matures.
Quantum Computing
Quantum computing uses principles of quantum mechanics to solve certain complex mathematical problems faster than classical computers. Whilst classical computers use binary “bits” to perform calculations, quantum computers use quantum bits (“qubits”). The value of a bit can only be zero or one, whereas a qubit can exist as zero, one, or a combination of both states (a phenomenon known as superposition), allowing quantum computers to solve certain problems exponentially faster than classical computers.
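To make the bit/qubit distinction concrete, the minimal Python sketch below (illustrative only; the state-vector representation and use of NumPy are our own simplification, not drawn from any particular quantum computing framework) models a single qubit as a pair of complex amplitudes and shows that an equal superposition yields a 50/50 chance of measuring zero or one:

```python
import numpy as np

# A classical bit is exactly 0 or 1. A qubit is a normalized pair of complex
# amplitudes; measuring it returns 0 or 1 with probability equal to the
# squared magnitude of the corresponding amplitude.

ket0 = np.array([1, 0], dtype=complex)  # the |0> state
ket1 = np.array([0, 1], dtype=complex)  # the |1> state

# Equal superposition of |0> and |1>: (|0> + |1>) / sqrt(2)
plus = (ket0 + ket1) / np.sqrt(2)

probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of observing 0 or 1 on measurement
```

Simulating a single qubit this way on a classical machine is trivial; the sketch is meant only to illustrate superposition. The computational advantage discussed above comes from operating on many entangled qubits at once, which classical computers cannot efficiently simulate.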
The applications of quantum technologies are wide-ranging, and quantum computing has the potential to revolutionize many sectors, including the life sciences, climate and weather modelling, financial portfolio management, and artificial intelligence (“AI”). However, advances in quantum computing may also create new risks, the most significant of which is to data protection. Hackers could exploit the ability of quantum computers to solve complex mathematical problems at high speed to break currently used cryptographic methods and access personal and sensitive data.
This is a rapidly developing area to which governments are only just turning their attention. They are focusing not only on “quantum-readiness” and countering the emerging threats that quantum computing will present in the hands of bad actors (the US, for instance, is planning the migration of sensitive data to post-quantum encryption), but also on ramping up investment and growth in quantum technologies.
Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane
With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems.
Colorado
Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI statute. As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements to mitigate risks of algorithmic discrimination.
On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.” The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”
Bipartisan Senate AI Roadmap Released
Nearly a year after Senate Majority Leader Chuck Schumer (D-NY) launched the SAFE Innovation Framework for artificial intelligence (AI) with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), the bipartisan group has released a 31-page “Roadmap” for AI policy. The overarching theme of the Roadmap is “harnessing the full potential of AI while minimizing the risks of AI in the near and long term.”
In contrast to Europe’s approach to regulating AI, the Roadmap does not propose or even contemplate a comprehensive AI law. Rather, it identifies key themes and areas of agreement and directs the congressional committees of jurisdiction to legislate on those issues. The Roadmap’s recommendations are informed by the nine AI Insight Forums that the bipartisan group convened over the last year.
Colorado Becomes the First State to Pass Comprehensive AI Legislation
In the absence of congressional action on comprehensive artificial intelligence (AI) legislation, state legislatures are forging ahead with groundbreaking bills to regulate the rapidly advancing technology. On May 8, the Colorado House of Representatives passed SB 205, a far-reaching and comprehensive AI bill, on a 41-22-2 vote. The final vote comes just days after the state Senate’s passage of the bill on May 3, making Colorado the first state in the nation to send comprehensive AI legislation to its governor for signing. While Governor Jared Polis (D) has not indicated whether he will sign or veto the bill, if SB 205 becomes law, it would establish a broad regulatory regime for developers and deployers of “high-risk” AI systems.
California Establishes Working Guidance for AI Procurement
As the 2024 elections approach and the window for Congress to consider bipartisan comprehensive artificial intelligence (AI) legislation shrinks, California officials are attempting to guard against a generative AI free-for-all—at least with respect to state government use of the rapidly advancing technology—by becoming the largest state to issue rules for state procurement of AI technologies. Without nationwide federal rules, standards set through state procurement requirements may ultimately add another layer of complexity to the patchwork of AI-related rules and standards emerging in the states.
On March 21, 2024, the California Government Operations Agency (GovOps) published interim guidelines for government procurement of generative AI technologies. The new guidance directs state officials responsible for awarding and managing public contracts to identify the risks of generative AI, monitor the technology’s use, and train staff on acceptable use, including for procurements that involve only “incidental” AI elements. For “intentional” generative AI procurements, where an agency is specifically seeking to purchase a generative AI product or service, the guidelines impose a higher standard: in addition to the requirements that apply to “incidental” purchases, agencies are responsible for articulating the need for generative AI prior to procurement, testing the technology prior to implementation, and establishing a dedicated team to monitor the AI on an ongoing basis.
New Bipartisan Senate Legislation Aims to Bolster U.S. AI Research and Deployment
Senate Commerce Committee Chair Maria Cantwell (D-WA) and Senators Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN) recently introduced the Future of AI Innovation Act, a legislative package that addresses key bipartisan priorities to promote AI safety, standardization, and access. The bill would also advance U.S. leadership in AI by facilitating R&D and creating testbeds for AI systems.
As States Lead Efforts to Address Deepfakes in Political Ads, Federal Lawmakers Seek Nationwide Policies
A New Orleans magician recently made headlines for using artificial intelligence (AI) to emulate President Biden’s voice without his consent in a misleading robocall to New Hampshire voters. This was not a magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity. As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers increasingly are taking steps to protect the public from the threat of deceptive AI-generated political content.
Media generated by AI to imitate an individual’s voice or likeness presents significant challenges for regulators. As deepfakes become increasingly indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders have all called for action to mitigate the threats deepfakes can pose to elections.
California Senate Committee Advances Comprehensive AI Bill
On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting). The vote marks a major step toward comprehensive artificial intelligence (AI) regulation in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.
New Bipartisan House Task Force May Signal Legislative Momentum on Artificial Intelligence
On February 20, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a new Artificial Intelligence (AI) task force in the House of Representatives, with the goal of developing principles and policies to promote U.S. leadership and security with respect to AI. Rep. Jay Obernolte (R-CA) will chair the task force, joined by Rep. Ted Lieu (D-CA) as co-chair. Several other senior members of the California delegation, including Rep. Darrell Issa (R-CA) and retiring Rep. Anna Eshoo (D-CA), will participate in the effort as well.