In a new strategy published on July 11, the European Commission has identified Web 4.0 and Virtual Worlds—often also referred to as the metaverse—as having the potential to transform the ways in which EU citizens live, work and interact.  The EU’s strategy consists of ten action points addressing four themes drawn from the Digital Decade policy programme and the Commission’s Connectivity package: (1) People and Skills; (2) Business; (3) Government (i.e., public services and projects); and (4) Governance.

The European Commission’s strategy indicates that it is unlikely to propose new regulation in the short to medium term: indeed, European Competition Commissioner Margrethe Vestager has recently warned against jumping to regulation of Virtual Worlds as the “first sort of safety pad.” Instead, the Commission views its framework of current and upcoming digital technology legislation (including the GDPR, the Digital Services Act, the Digital Markets Act and the proposed Markets in Crypto-Assets Regulation) as applying to Web 4.0 and Virtual Worlds in a “robust” and “future-oriented” manner.

Continue Reading European Commission Publishes New Strategy on Virtual Worlds

On July 10, 2023, the European Commission adopted its adequacy decision on the EU-U.S. Data Privacy Framework (“DPF”). The decision, which took effect on the day of its adoption, concludes that the United States ensures an adequate level of protection for personal data transferred from the EEA to companies certified to the DPF. This blog summarizes the key findings of the decision, what organizations wishing to certify to the DPF need to do and the process for certifying, as well as the impact on other transfer mechanisms such as the standard contractual clauses (“SCCs”), and on transfers from the UK and Switzerland.

Continue Reading European Commission Adopts Adequacy Decision on the EU-U.S. Data Privacy Framework

On July 13, 2023, the Cyberspace Administration of China (“CAC”), in conjunction with six other agencies, jointly issued the Interim Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》) (“Generative AI Measures” or “Measures”) (official Chinese version here).  The Generative AI Measures are set to take effect on August 15, 2023.

As the first comprehensive AI regulation in China, the Measures cover a wide range of topics touching upon how Generative AI Services are developed and how such services may be offered.  These topics range from AI governance, training data, tagging and labeling to data protection and user rights.  In this blog post, we spotlight a few of the most important points that could affect a company’s decision to develop and deploy Generative AI Services in China.

The final version follows a first draft released for public consultation in April 2023 (see our previous post here).  Several requirements in the April 2023 draft were removed, including, for example, the prohibition on user profiling, the requirement of real-name user verification, and the requirement to take measures within three months, through model optimization training, to prevent illegal content from being generated again.  However, several provisions in the final version remain vague (potentially by design) and leave room for future regulatory guidance as the generative AI landscape continues to evolve.

Scope and Key Definitions

Article 2 of the Measures sets out the scope of the regulation, which applies to the “provision of services of generating content in the form of text(s), picture(s), audio and video(s) to the public within China through the use of Generative AI Technologies.”  In this context, Article 22 offers the following definitions:

  • “Generative AI Technologies” are defined as models and related technologies capable of generating content in the form of text(s), picture(s), audio and video(s).
  • “Generative AI Services” refer to services offered to the public that generate content in the form of text(s), picture(s), audio and video(s) through the use of Generative AI Technologies.
  • “Generative AI Service Provider” (“Provider”) refers to “an entity or individual that utilizes Generative AI Technologies to provide Generative AI Services, including providing Generative AI Services through application programming interface (API) or other methods.”

Note that “Provider” is broadly defined, and in theory all entities in the ecosystem involved in the provision of Generative AI Services to the Chinese public could be covered.  In practice, this could mean that both developers of Generative AI Technologies that make their services available for other entities to deploy in China, and entities actually using Generative AI Technologies to offer services to Chinese consumers, are subject to the Measures, so long as the Generative AI Technologies are used by the public within China.  The term “used by the public within China” is not defined and will most likely be interpreted on a case-by-case basis until further regulatory guidance is issued.

Notably, the Measures exclude from their scope the development and use of Generative AI Technologies by enterprises, research and academic institutions, or other public institutions, where no Generative AI Services are offered to the domestic public.  Nevertheless, as discussed above, the lines around this exception could blur if the term “used by the domestic public” is broadly interpreted.

Finally, no provisions in the Measures explicitly prohibit Chinese enterprises or even consumers from using Generative AI Services provided by offshore Providers.  The Measures, however, state that the Chinese regulators may take “technical measures and other necessary measures” to “deal with” offshore Generative AI Services if such services fail to “comply with the Chinese laws, regulations and the Generative AI Measures.”  (Article 20).  While this provision does not grant the Chinese regulators the authority to regulate offshore Providers per se (for example, to audit such services), in practice, it may pressure these Providers to comply with the Measures if they still wish to remain in the market.  Otherwise, access to such Generative AI Services from China could be blocked.

Key Requirements for Generative AI Service Providers

The Generative AI Measures impose a wide range of obligations on Providers of Generative AI Services.  Some relate to the governance of such Generative AI Services, including with respect to algorithm training and product development.  Others relate more specifically to the manner in which the services are offered.  We highlight a few examples below:

  • Content Moderation:  Providers bear the responsibilities of “content producers” under the Measures.  (Article 9).  This means that if a Provider identifies that a user of its Generative AI Service is generating “illegal content” (a term not clearly defined by the Measures or other Chinese regulations), it must promptly take measures such as suspending content generation and transmission and taking down the content.  In addition, the Provider must rectify the issue, including through model optimization, and must report the issue to regulators.  (Article 14).
  • Training Data:  The Measures impose several requirements on Providers related to training data.  For instance, data and “foundation models” used for training and optimization must be obtained from “legitimate sources.”  Providers are prohibited from infringing the intellectual property rights of others, and must process personal information with consent or another legal basis under Chinese law.  The Measures also state, at a high level, that Providers must improve the quality of training data and enhance its “authenticity, accuracy, objectivity and diversity.”  (Article 7).  It is less clear how these requirements should be implemented at the development stage, or what types of supporting documents Chinese regulators would accept to substantiate Providers’ claims of compliance.
  • Labeling of Training Data:  At the development stage, if a Provider labels training data, it must formulate “clear, specific and practical” labeling rules.  The Provider must also undertake a quality assessment of its data labeling and conduct sample verification to confirm the accuracy of the labeled content.  (Article 8).
  • Tagging of Generated Content:  Consistent with the requirements under the Provisions on the Management of Deep Synthesis in Internet Information Services, Providers must add tags to content generated by Generative AI Services.  (Article 12).
  • User Protection:  The Measures reflect several requirements for the Providers regarding user rights and protections, including:
  • Personal Information Protection:  Providers must not collect unnecessary personal information, must not retain users’ input information and usage records in a manner capable of identifying users, and must not provide users’ input information and usage records to others.  (Article 11).
  • Complaints:  Providers must establish a mechanism for receiving and handling complaints from users.  Additionally, requests for access, copies, correction, or deletion of personal information from users should be handled in a timely manner.  (Articles 11 and 15).
  • Contracting:  Providers must implement a service agreement with the entity deploying its Generative AI Services.  The service agreement must specify the rights and obligations of the parties.  (Article 9).  There is no further guidance in the Measures on what needs to be included in such an agreement.
  • Security Assessment and Filing:  While the Measures do not specifically identify any high-risk services, Generative AI Services “with the attributes of public opinion or the capacity for social mobilization” are subject to the requirement to carry out a security assessment and complete an algorithm filing.  (Article 17).  The precise scope of services subject to these requirements is not defined in the Measures, though information services that “provide channels for the public to express their opinions or are capable of mobilizing the public to engage in specific activities” could be the focus of regulators based on other Chinese regulations.  These services could include, for example, operating Internet forums, blogs, or chat rooms, or distributing information through public accounts, short videos, or webcasts.

Enforcement

The penalty provisions in the Measures are in line with existing Chinese laws such as the Cybersecurity Law, Data Security Law and Personal Information Protection Law.  In addition, Providers are required to cooperate with the “supervision and inspection” of regulators, including by “explaining the source, scale and types of training data; labeling rules and algorithmic mechanism;” and by providing necessary support and assistance to regulators.  (Article 19).

On June 26, 2023, the National Telecommunications and Information Administration (“NTIA”) announced how it has allocated funding from the $42.45 billion Broadband Equity, Access, and Deployment (“BEAD”) program to all U.S. States, the District of Columbia, and five territories to deploy affordable, reliable high-speed Internet service.  Marking the occasion in a White House ceremony, President Biden declared that this investment will “connect everyone in America to [affordable] high-speed Internet . . . by 2030.”

By way of background, the Infrastructure Investment and Jobs Act (“IIJA”) became law in 2021 and directed NTIA to oversee distribution of the single greatest public investment in broadband in U.S. history.  The cornerstone of that investment is the BEAD program, which we detailed here.  In 2022, NTIA released its Notice of Funding Opportunity (“NOFO”) for the BEAD program, marking the beginning of the program’s implementation, which we detailed here.

According to U.S. Secretary of Commerce Gina Raimondo, the announced investments will increase competitiveness and spur economic growth by “connecting people to the digital economy, manufacturing fiber-optic cable in America, or creating good paying jobs building Internet infrastructure in the states.”  The NTIA announcement states that BEAD funding will be used to “deploy or upgrade broadband networks to ensure that everyone has access to reliable, affordable, high-speed Internet service.”  After meeting deployment goals, any remaining funds “can be used to pursue eligible access-, adoption-, and equity-related uses.”

The BEAD program differs from past federal broadband investments in that it will be administered by the States, D.C., and the five territories (each referred to as an “Eligible Entity”), with each jurisdiction running its own competitive process to determine the specific projects to be funded.  Under the IIJA, each Eligible Entity will have until the end of this year to submit an “initial proposal,” a detailed roadmap explaining how it intends to run its grant program in a manner consistent with the requirements of the IIJA and NTIA’s NOFO.  After approval of its initial proposal, an Eligible Entity can request access to at least 20 percent of its allocated funds.

Continue Reading Biden Administration Presses Forward with $42.5 Billion Broadband Program

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here).  In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security.  The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

On June 20, 2023, the Federal Communications Commission (“FCC”) released a Notice of Proposed Rulemaking (“NPRM”) to require cable operators and direct broadcast satellite (“DBS”) providers to display an “all-in” price for their video programming services in their billing and marketing materials.  The White House issued a press release that same day expressing its support for the proposed new rules, noting that the proposal is consistent with the Administration’s efforts “to crack down on junk fees in order to increase transparency.” 

Continue Reading FCC Proposes “All-In” Pricing Rules for Cable/Satellite TV

Late yesterday, the EU institutions reached political agreement on the European Data Act (see the European Commission’s press release here and the Council’s press release here).  The proposal for a Data Act was first tabled by the European Commission in February 2022 as a key piece of the European Strategy for Data (see our previous blog post here).  The Data Act will sit alongside the EU’s General Data Protection Regulation (“GDPR”), Data Governance Act, Digital Services Act, and Digital Markets Act.

Continue Reading Political Agreement Reached on the European Data Act

The Federal Communications Commission and National Science Foundation announced this week that they will co-host a workshop on July 13, 2023, entitled “The Opportunities and Challenges of Artificial Intelligence for Communications Networks and Consumers.”

Per the press release, the workshop will cover a number of issues, including “AI’s transformative potential to optimize network traffic; improve spectrum policy and facilitate sharing; and enhance resiliency through self-healing networks” as well as “how AI will affect the fight against illegal robocalls and robotexts; efforts to foster digital equity and combat discrimination; and initiatives to bring greater transparency and affordability to broadband access.”

Last week, the Federal Communications Commission (“FCC”) released a Report and Order, Notice of Proposed Rulemaking, and Order that seeks “to ensure that video conferencing is accessible to all.”  The action establishes that video conferencing services, including popular platforms used by millions of Americans every day for work, school, healthcare, and more, fall within the definition of “interoperable video conferencing service” set forth in the Twenty-First Century Communications and Video Accessibility Act of 2010 (“CVAA”).  It also seeks comment on performance standards for interoperable video conferencing services and proposes to amend the FCC’s telecommunications relay services (“TRS”) rules to facilitate the use of video relay services (“VRS”) in video conferences.  Finally, the FCC granted a partial waiver of the VRS privacy screen rule to allow VRS users participating in a video conference to turn off their cameras when not presenting.  The item garnered unanimous support from the Commission.

Continue Reading FCC Updates Rules to “Ensure that Video Conferencing is Accessible to All”