Samuel Klein

Samuel Klein helps clients realize their policy objectives, manage reputational risks, and navigate the regulatory environment governing political engagement.

As a member of Covington’s Election and Political Law practice, Sam assists clients facing Congressional investigations and offers guidance on ethics laws; with the firm’s Public Policy group, Sam supports strategic advocacy across a breadth of policy domains at the federal, state, and local levels.

Sam spent one year as a law clerk at the Federal Election Commission. His prior experience includes serving as an intern to two senior members of Congress and, as a public-affairs consultant, helping clients communicate nuanced policy concepts to lawmakers and stakeholders.

Nearly a year after Senate Majority Leader Chuck Schumer (D-NY) launched the SAFE Innovation Framework for artificial intelligence (AI) with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), the bipartisan group has released a 31-page “Roadmap” for AI policy.  The overarching theme of the Roadmap is “harnessing the full potential of AI while minimizing the risks of AI in the near and long term.”

In contrast to Europe’s approach to regulating AI, the Roadmap does not propose or even contemplate a comprehensive AI law.  Rather, it identifies key themes and areas of agreement and directs the relevant congressional committees of jurisdiction to legislate on those issues.  The Roadmap’s recommendations are informed by the nine AI Insight Forums that the bipartisan group convened over the last year.

Continue Reading Bipartisan Senate AI Roadmap Released

As the 2024 elections approach and the window for Congress to consider bipartisan comprehensive artificial intelligence (AI) legislation shrinks, California officials are attempting to guard against a generative AI free-for-all—at least with respect to state government use of the rapidly advancing technology—by becoming the largest state to issue rules for state procurement of AI technologies.  Without nationwide federal rules, standards set by state government procurement rules may ultimately add another layer of complexity to the patchwork of AI-related rules and standards emerging in the states.

On March 21, 2024, the California Government Operations Agency (GovOps) published interim guidelines for government procurement of generative AI technologies.  The new guidance directs state officials responsible for awarding and managing public contracts to identify risks of generative AI, monitor the technology’s use, and train staff on acceptable use, including for procurements that only involve “incidental” AI elements.  For “intentional” generative AI procurements, where an agency is specifically seeking to purchase a generative AI product or service, the guidelines impose a higher standard: in addition to the requirements that apply to “incidental” purchases, agencies seeking generative AI technologies are responsible for articulating the need for using generative AI prior to procurement, testing the technology prior to implementation, and establishing a dedicated team to monitor the AI on an ongoing basis.

Continue Reading California Establishes Working Guidance for AI Procurement

Senate Commerce Committee Chair Maria Cantwell (D-WA) and Senators Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN) recently introduced the Future of AI Innovation Act, a legislative package that addresses key bipartisan priorities to promote AI safety, standardization, and access.  The bill would also advance U.S. leadership in AI by facilitating R&D and creating testbeds for AI systems.

Continue Reading New Bipartisan Senate Legislation Aims to Bolster U.S. AI Research and Deployment

A New Orleans magician recently made headlines for using artificial intelligence (AI) to emulate President Biden’s voice without his consent in a misleading robocall to New Hampshire voters.  This was not a magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity.  As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers increasingly are taking steps to protect the public from the threat of deceptive AI-generated political content.

Media generated by AI to imitate an individual’s voice or likeness present significant challenges for regulators.  As deepfakes increasingly become indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders all have called for action to mitigate the threats deepfakes can pose for elections.

Continue Reading As States Lead Efforts to Address Deepfakes in Political Ads, Federal Lawmakers Seek Nationwide Policies

On February 20, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a new Artificial Intelligence (AI) task force in the House of Representatives, with the goal of developing principles and policies to promote U.S. leadership and security with respect to AI.  Rep. Jay Obernolte (R-CA) will chair the task force, joined by Rep. Ted Lieu (D-CA) as co-chair.  Several other senior members of the California delegation, including Rep. Darrell Issa (R-CA) and retiring Rep. Anna Eshoo (D-CA), will participate in the effort as well.

Continue Reading New Bipartisan House Task Force May Signal Legislative Momentum on Artificial Intelligence

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases.  While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose specific legislative text for broadly regulating AI testing and use across industries.

Continue Reading Bipartisan group of Senators introduce new AI transparency legislation