Data

Updated August 8, 2023.  Originally posted May 1, 2023.

Last week, comment deadlines were announced for a Federal Communications Commission (“FCC”) Order and Notice of Proposed Rulemaking (“NPRM”) that could have significant compliance implications for all holders of international Section 214 authority (i.e., authorization to provide telecommunications services from points in the U.S. to points abroad).  The rule changes on which the FCC seeks comment are far-reaching and, if adopted as written, could impose significant future compliance burdens, both on entities holding international Section 214 authority and on the parties holding ownership interests in those entities.  Comments on these rule changes are due Thursday, August 31, with reply comments due October 2.

Continue Reading Comments Due August 31 on FCC’s Proposal to Step Up Review of Foreign Ownership in Telecom Carriers and Establish Cybersecurity Requirements

On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code and translations.  According to the Committee, it launched the inquiry to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models.”

This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here).  UK regulators have already begun implementing this approach.  For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection and on generative AI tools that process personal data (see our blogs here and here for more details). 

Continue Reading UK House of Lords Announces Inquiry into Large Language Models

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, transparency, explainability, and security. The group of regulators also calls on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

Late yesterday, the EU institutions reached political agreement on the European Data Act (see the European Commission’s press release here and the Council’s press release here).  The proposal for a Data Act was first tabled by the European Commission in February 2022 as a key piece of the European Strategy for Data (see our previous blogpost here). The Data Act will sit alongside the EU’s General Data Protection Regulation (“GDPR”), Data Governance Act, Digital Services Act, and the Digital Markets Act.

Continue Reading Political Agreement Reached on the European Data Act

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here) which it complemented with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit in 2022. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.

Continue Reading UK ICO Updates Guidance on Artificial Intelligence and Data Protection

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.  

Continue Reading UK Government Adopts a “Pro-Innovation” Approach to AI Regulation

On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) (official Chinese version available here) for public consultation.  The deadline for submitting comments is May 10, 2023.

Continue Reading China Proposes Draft Measures to Regulate Generative AI

This quarterly update summarizes key legislative and regulatory developments in the first quarter of 2023 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Continue Reading U.S. AI, IoT, CAV, and Privacy & Cybersecurity Legislative & Regulatory Update – First Quarter 2023

U.S. federal agencies and working groups issued a number of publications in January 2023 related to the development and use of artificial intelligence (“AI”) systems.  These updates join proposals in Congress to pass legislation related to AI.  Specifically, in January 2023, the Department of Defense (“DoD”) updated Department of Defense Directive 3000.09; the National Artificial Intelligence Research Resource (“NAIRR”) Task Force released its Final Report on AI; and the National Institute of Standards and Technology (“NIST”) released its AI Risk Management Framework, each discussed below.

Continue Reading Roundup of January 2023 Artificial Intelligence Developments

On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (the “Framework”) guidance document, alongside a companion AI RMF Playbook that suggests ways to navigate and use the Framework.  The goal of the Framework is to provide a resource to organizations “designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.”  NIST aims for the Framework to offer a practical resource that can be adapted as AI technologies continue to develop.  The release of the Framework follows previous drafts and opportunities for public comment: an initial draft of the Framework was released in March 2022 and a second draft was released in August 2022, prior to the official launch of version 1.0 of the Framework (NIST AI 100-1).

Continue Reading NIST Releases New Artificial Intelligence Risk Management Framework