On May 2, 2024, the Federal Communications Commission (FCC) released a draft Notice of Proposed Rulemaking (NPRM) for consideration at the agency’s May 23 Open Meeting that proposes to “prohibit from recognition by the FCC and participation in [its] equipment authorization program, any [Telecommunications Certification Body (TCB)] or test lab in which an entity identified on the Covered List has direct or indirect ownership or control.”  The NPRM also would direct the FCC’s Office of Engineering and Technology to “suspend the recognition of any TCB or test lab directly or indirectly owned or controlled by entities identified on the Covered List, thereby preventing such entities from using their owned or controlled labs to undermine our current prohibition on Covered Equipment.”

The NPRM would seek comment on “whether and how the Commission should consider national security determinations made in other Executive Branch agency lists in establishing eligibility qualifications for FCC recognition of a TCB or a test lab in our equipment authorization program.”  It also would “propose that the prohibition would be triggered by direct or indirect ownership or control of 10% or more” and that “TCBs and test labs would be required to report any entity that holds a 5% or greater direct or indirect equity and/or voting interest.”  The NPRM would also “propose to collect additional ownership and control information from TCBs and test labs” in order to implement the proposed national security prohibition.

The proposal follows a number of other recent FCC actions undertaken to address national security concerns pertaining to communications networks and devices.  FCC Chairwoman Jessica Rosenworcel and Commissioner Brendan Carr recently announced their support for the proposal.

As the 2024 elections approach and the window for Congress to consider bipartisan comprehensive artificial intelligence (AI) legislation shrinks, California officials are attempting to guard against a generative AI free-for-all—at least with respect to state government use of the rapidly advancing technology—by becoming the largest state to issue rules for state procurement of AI technologies.  Without nationwide federal rules, standards set by state government procurement rules may ultimately add another layer of complexity to the patchwork of AI-related rules and standards emerging in the states.

On March 21, 2024, the California Government Operations Agency (GovOps) published interim guidelines for government procurement of generative AI technologies.  The new guidance directs state officials responsible for awarding and managing public contracts to identify risks of generative AI, monitor the technology’s use, and train staff on acceptable use, including for procurements that only involve “incidental” AI elements.  For “intentional” generative AI procurements, where an agency is specifically seeking to purchase a generative AI product or service, the guidelines impose a higher standard: in addition to the requirements that apply to “incidental” purchases, agencies seeking generative AI technologies are responsible for articulating the need for using generative AI prior to procurement, testing the technology prior to implementation, and establishing a dedicated team to monitor the AI on an ongoing basis.

Continue Reading California Establishes Working Guidance for AI Procurement

On April 25, 2024, the UK’s Investigatory Powers (Amendment) Act 2024 (“IP(A)A”) received royal assent and became law.  This law makes the first substantive amendments to the existing Investigatory Powers Act 2016 (“IPA”) since it came into effect, and follows an independent review of the effectiveness of the IPA published in June 2023.

Continue Reading Changes to the UK investigatory powers regime receive royal assent

Updated April 30, 2024.  Originally posted March 18, 2024.

In March, the U.S. Federal Communications Commission (FCC) adopted a licensing framework that authorizes satellite operators to partner with terrestrial wireless providers to develop hybrid satellite-terrestrial networks intended to provide ubiquitous network connectivity, including in “dead zones” and other hard-to-reach areas.  Today’s Federal Register publication confirms that this new “Supplemental Coverage from Space” (SCS) regime will become effective Thursday, May 30, 2024, which will enable satellite operators to serve as a gap-filler in the networks of their wireless provider partners by using their satellite capability combined with spectrum previously allocated exclusively to terrestrial service.

Continue Reading FCC’s “Supplemental Coverage from Space” Rules Take Effect May 30; New Licensing Framework Expands Satellite-to-Smartphone Coverage

On April 3, 2024, the UK Information Commissioner’s Office (“ICO”) published its 2024-2025 Children’s code strategy (the “Strategy”), which sets out its priorities for protecting children’s personal information online. This builds on the Children’s code of practice (“Children’s Code”) which the ICO introduced in 2021 to ensure that all online services which process children’s data are designed in a manner that is safe for children.

Continue Reading ICO sets out 2024-2025 priorities to protect children online

With the rapid evolution of artificial intelligence (AI) technology, the regulatory frameworks for AI in the Asia–Pacific (APAC) region continue to develop quickly. Policymakers and regulators have been prompted to consider either reviewing existing regulatory frameworks to ensure their effectiveness in addressing emerging risks brought by AI, or proposing new, AI-specific rules or regulations. Overall, there appears to be a trend across the region to promote AI uses and developments, with most jurisdictions focusing on high-level and principle-based guidance. While a few jurisdictions are considering regulations specific to AI, they are still at an early stage. Further, privacy regulators and some industry regulators, such as financial regulators, are starting to play a role in AI governance.

This blog post provides an overview of various approaches in regulating AI and managing AI-related risks in the APAC region.  

Continue Reading Overview of AI Regulatory Landscape in APAC

Senate Commerce Committee Chair Maria Cantwell (D-WA) and Senators Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN) recently introduced the Future of AI Innovation Act, a legislative package that addresses key bipartisan priorities to promote AI safety, standardization, and access.  The bill would also advance U.S. leadership in AI by facilitating R&D and creating testbeds for AI systems.

Continue Reading New Bipartisan Senate Legislation Aims to Bolster U.S. AI Research and Deployment

A new post on the Covington Global Policy Watch blog discusses how Congress may overturn rules issued by the Executive Branch under the Congressional Review Act (CRA) and why the Biden Administration must finalize and publish certain rules to avoid them being eligible for CRA review.  In 2017, the Federal Communications Commission’s broadband privacy rules were repealed under CRA review.  You can read the post here.

A New Orleans magician recently made headlines for using artificial intelligence (AI) to emulate President Biden’s voice without his consent in a misleading robocall to New Hampshire voters. This was not a magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity.  As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers increasingly are taking steps to protect the public from the threat of deceptive AI-generated political content.

Media generated by AI to imitate an individual’s voice or likeness presents significant challenges for regulators.  As deepfakes increasingly become indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders all have called for action to mitigate the threats deepfakes can pose for elections.

Continue Reading As States Lead Efforts to Address Deepfakes in Political Ads, Federal Lawmakers Seek Nationwide Policies

On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting).  The vote marks a major step toward comprehensive artificial intelligence (AI) regulation in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.

Continue Reading California Senate Committee Advances Comprehensive AI Bill