Updated August 8, 2023.  Originally posted May 1, 2023.

Last week, comment deadlines were announced for a Federal Communications Commission (“FCC”) Order and Notice of Proposed Rulemaking (“NPRM”) that could have significant compliance implications for all holders of international Section 214 authority (i.e., authorization to provide telecommunications services from points in the U.S. to points abroad).  The rule changes on which the FCC seeks comment are far-reaching and, if adopted as written, could impose significant future compliance burdens, both for entities holding international Section 214 authority and for parties holding ownership interests in those entities.  Comments on these rule changes are due Thursday, August 31, with reply comments due October 2.

Continue Reading Comments Due August 31 on FCC’s Proposal to Step Up Review of Foreign Ownership in Telecom Carriers and Establish Cybersecurity Requirements

In a new post on the Inside Class Actions blog, we summarize the UK Supreme Court’s recent judgment on litigation funding agreements, which could potentially have significant impact on collective proceedings and other funded cases in the UK. To read the post, please click here.

On Tuesday, July 25, 2023, the U.S. Department of Justice (“DOJ”) announced that it has finalized a notice of proposed rulemaking (“NPRM”) under Title II of the Americans with Disabilities Act (“ADA”) to establish clear technical accessibility standards for state and local governments’ websites and mobile applications (“apps”).  Although the text of the proposed rule has not yet been released, according to the White House, it “suggests clear technical standards, like including text descriptions of images so people using screen readers can understand the content, providing captions on videos, and enabling navigation through use of a keyboard instead of a mouse for those with limited use of their hands.”  Note that the proposed rule would apply to state and local government websites and apps only, but as discussed below this rulemaking could have a shadow effect on disputes about the accessibility of commercial websites and apps. 

Continue Reading Biden Administration Announces Rulemaking to Improve the Accessibility of Online Public Services for Americans with Disabilities

On July 18, 2023, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel announced that she has circulated a proposal to the FCC’s commissioners to create “a voluntary cybersecurity labeling program that would provide consumers with clear information about the security of their Internet-enabled devices.”

Continue Reading FCC Chairwoman Rosenworcel Announces Proposed Voluntary Cybersecurity Labeling Program for Smart Devices

On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code and translations.  According to the Committee, it has launched the inquiry to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models.”

This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here).  UK regulators have already begun implementing the approach.  For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection and on generative AI tools that process personal data (see our blogs here and here for more details). 

Continue Reading UK House of Lords Announces Inquiry into Large Language Models

In a new strategy published on July 11, the European Commission has identified Web 4.0 and Virtual Worlds—often also referred to as the metaverse—as having the potential to transform the ways in which EU citizens live, work and interact.  The EU’s strategy consists of ten action points addressing four themes drawn from the Digital Decade policy programme and the Commission’s Connectivity package: (1) People and Skills; (2) Business; (3) Government (i.e., public services and projects); and (4) Governance.

The European Commission’s strategy indicates that it is unlikely to propose new regulation in the short to medium term: indeed, European Competition Commissioner Margrethe Vestager has recently warned against jumping to regulation of Virtual Worlds as the “first sort of safety pad.” Instead, the Commission views its framework of current and upcoming digital technology-related legislation (including the GDPR, the Digital Services Act, the Digital Markets Act and the proposed Markets in Crypto-Assets Regulation) as applicable to Web 4.0 and Virtual Worlds in a “robust” and “future-oriented” manner. 

Continue Reading European Commission Publishes New Strategy on Virtual Worlds

On July 10, 2023, the European Commission adopted its adequacy decision on the EU-U.S. Data Privacy Framework (“DPF”). The decision, which took effect on the day of its adoption, concludes that the United States ensures an adequate level of protection for personal data transferred from the EEA to companies certified to the DPF. This blog summarizes the key findings of the decision, what organizations wishing to certify to the DPF need to do and the process for certifying, as well as the impact on other transfer mechanisms such as the standard contractual clauses (“SCCs”), and on transfers from the UK and Switzerland.

Continue Reading European Commission Adopts Adequacy Decision on the EU-U.S. Data Privacy Framework

On July 13, 2023, the Cyberspace Administration of China (“CAC”), in conjunction with six other agencies, jointly issued the Interim Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能管理暂行办法》) (“Generative AI Measures” or “Measures”) (official Chinese version here).  The Generative AI Measures are set to take effect on August 15, 2023. 

As the first comprehensive AI regulation in China, the Measures cover a wide range of topics touching upon how Generative AI Services are developed and how such services can be offered.  These topics range from AI governance, training data, and tagging and labeling to data protection and user rights.  In this blog post, we spotlight a few of the most important points that could affect a company’s decision to develop and deploy its Generative AI Services in China.

This final version follows a first draft released for public consultation in April 2023 (see our previous post here).  Several requirements were removed from the April 2023 draft, including, for example, the prohibition of user profiling, user real-name verification, and the requirement to take measures within three months, through model optimization training, to prevent illegal content from being generated again.  However, several provisions in the final version remain vague (potentially by design) and leave room for future regulatory guidance as the generative AI landscape continues to evolve.

Scope and Key Definitions

Article 2 of the Measures sets out the scope of the regulation, which applies to the “provision of services of generating content in the form of text(s), picture(s), audio and video(s) to the public within China through the use of Generative AI Technologies.”  In this context, the following definitions are offered under Article 22:

  • “Generative AI Technologies” are defined as models and related technologies that are capable of generating content in the form of text(s), picture(s), audio and video(s).
  • “Generative AI Services” refers to services offered to the public that generate content in the form of text(s), picture(s), audio and video(s) through the use of Generative AI Technologies.
  • “Generative AI Service Provider” (“Provider”) refers to “an entity or individual that utilizes Generative AI Technologies to provide Generative AI Services, including providing Generative AI Services through application programming interface (API) or other methods.”  

Note that “Provider” is broadly defined, and in theory, all entities in the ecosystem involved in the provision of Generative AI Services to the Chinese public could be covered.  In practice, this could mean that both the developers of Generative AI Technologies that make their services available for other entities to deploy in China, and the entities actually using Generative AI Technologies to offer services to Chinese consumers, are subject to the jurisdiction of the Measures, so long as the Generative AI Technologies are used by the public within China.  The term “used by public in China” is not defined and will most likely be decided on a case-by-case basis until further regulatory guidance is issued.

Notably, the Measures exclude from the scope of their application the development and use of Generative AI Services by enterprises, research and academic institutions, or other public institutions.  Nevertheless, and as discussed above, the lines around this exception could be blurred if the term “used by domestic public” is broadly interpreted. 

Finally, no provisions in the Measures explicitly prohibit Chinese enterprises or even consumers from using Generative AI Services provided by offshore Providers.  The Measures, however, state that the Chinese regulators may take “technical measures and other necessary measures” to “deal with” offshore Generative AI Services if such services fail to “comply with the Chinese laws, regulations and the Generative AI Measures.”  (Article 20).  While this provision does not grant the Chinese regulators the authority to regulate offshore Providers per se (for example, to audit such services), in practice, it may pressure these Providers to comply with the Measures if they still wish to remain in the market.  Otherwise, access to such Generative AI Services from China could be blocked.

Key Requirements for Generative AI Service Providers

The Generative AI Measures impose a wide range of obligations on Providers of Generative AI Services.  Some relate to the governance model of such Generative AI Services, including with respect to algorithm training and product development.  Others relate more specifically to the manner in which the Services are offered.  We highlight a few examples below:

  • Content Moderation:  Providers bear the responsibilities of “content producers” under the Measures.  (Article 9).  This means that if a Provider identifies that a user of its Generative AI Service is engaged in generating “illegal content” (not clearly defined by the Measures or other Chinese regulations), it must promptly take measures to, for example, suspend the content generation and transmission, and take down the content.  In addition, the Provider must rectify the issue, including through model optimization, and must report the issue to regulators.  (Article 14).
  • Training Data:  The Measures impose several requirements on Providers related to training data.  For instance, data and “foundation models” used for training and optimization must be obtained from “legitimate sources.”  Providers are prohibited from infringing on the intellectual property rights of others, and must process personal information with consent or another legal basis under Chinese laws.  The Measures also state at a high level that the Providers must improve the quality of training data and enhance its “authenticity, accuracy, objectivity and diversity.”  (Article 7).  It is less clear how such requirements should be implemented at the development stage and what type of supporting documents Chinese regulators would consider to support the claims of the Providers.
  • Labeling of Training Data:  At the development stage, if a Provider labels training data, it must formulate “clear, specific and practical” labeling rules.  The Provider must also undertake a quality assessment of its data labeling and conduct sample verification to confirm the accuracy of the labeled content.  (Article 8).
  • Tagging of Generated Content:  Consistent with the requirements under the Provisions on the Management of Deep Synthesis in Internet Information Services, Providers must add tags to content generated by Generative AI Services.  (Article 12).
  • User Protection:  The Measures reflect several requirements for Providers regarding user rights and protections, including:
      • Personal Information Protection:  Providers must not collect unnecessary personal information, store input information and usage records in a way capable of identifying users, or provide users’ input information and usage records to others.  (Article 11). 
      • Complaints:  Providers must establish a mechanism for receiving and handling complaints from users.  Additionally, requests from users for access, copies, correction, or deletion of personal information should be handled in a timely manner.  (Articles 11 and 15).
  • Contracting:  Providers must implement a service agreement with the entity deploying its Generative AI Services.  The service agreement must specify the rights and obligations of the parties.  (Article 9).  There is no further guidance in the Measures on what needs to be included in such an agreement.
  • Security Assessment and Filing:  While the Measures do not specifically identify any high-risk services, Generative AI Services “with the attributes of public opinion or the capacity for social mobilization” are subject to the requirement to carry out a security assessment and conduct an algorithm filing.  (Article 17).  The precise scope of services subject to these requirements is not defined in the Measures, though information services that “provide channels for the public to express their opinions or are capable of mobilizing the public to engage in specific activities” could be the focus of the regulators based on other Chinese regulations.  These services could, for example, include operating Internet forums, blogs, or chat rooms, or distributing information through public accounts, short videos, or webcasts.

Enforcement

While the penalty provisions in the Measures are in line with existing Chinese laws such as the Cybersecurity Law, Data Security Law and Personal Information Protection Law, Providers are required to cooperate with “supervision and inspection” by regulators, including by “explaining the source, scale and types of training data; labeling rules and algorithmic mechanism;” and providing necessary support and assistance to regulators.  (Article 19).

On June 26, 2023, the National Telecommunications and Information Administration (“NTIA”) announced how it has allocated funding from the $42.45 billion Broadband Equity, Access, and Deployment (“BEAD”) program to all U.S. States, the District of Columbia, and five territories to deploy affordable, reliable high-speed Internet service.  Marking the occasion in a White House ceremony, President Biden declared that this investment will “connect everyone in America to [affordable] high-speed Internet. . . by 2030.”

By way of background, the Infrastructure Investment and Jobs Act (“IIJA”) became law in 2021 and directed NTIA to oversee distribution of the single greatest public investment in broadband in U.S. history.  The cornerstone of that investment is the BEAD program, which we detailed here.  In 2022, the NTIA released its Notice of Funding Opportunity (“NOFO”) for the BEAD program, marking the beginning of the program’s implementation, which we detailed here.

According to U.S. Secretary of Commerce Gina Raimondo, the announced investments will increase competitiveness and spur economic growth by “connecting people to the digital economy, manufacturing fiber-optic cable in America, or creating good paying jobs building Internet infrastructure in the states.”  The NTIA announcement states that BEAD funding will be used to “deploy or upgrade broadband networks to ensure that everyone has access to reliable, affordable, high-speed Internet service.”  After meeting deployment goals, any remaining funds “can be used to pursue eligible access-, adoption-, and equity-related uses.”

The BEAD program is different from past federal broadband investments in that it will be administered by the States, D.C., and the five territories (each referred to as an “Eligible Entity”), with each jurisdiction running its own competitive process for determining the specific projects to be funded.  Under the IIJA, each Eligible Entity will have until the end of this year to submit an “initial proposal,” which will be a detailed roadmap explaining how they intend to run their grant programs in a manner consistent with the requirements of the IIJA and NTIA’s NOFO.  After approval of this initial proposal, an Eligible Entity can request access to at least 20 percent of its allocated funds. 

Continue Reading Biden Administration Presses Forward with $42.5 Billion Broadband Program