On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

G7 Data Protection and Privacy Authorities

The Statement follows meetings of G7 countries in April and May 2023, where global leaders pledged to advance “international discussions on inclusive artificial intelligence (AI) governance and interoperability” (see our blog here for further details).

The Statement identifies a number of areas in which data protection regulators consider that generative AI tools may raise risks, including (among other topics):

  • Legal authority for the processing of personal information, particularly that of children, including in relation to the datasets used to train, validate and test generative AI models;
  • Security safeguards to protect against threats and attacks, such as those that seek to extract or reproduce personal information originally in the datasets used to train an AI model;
  • Transparency measures to promote openness and explainability in the operation of generative AI tools, especially in cases where such tools are used to make or assist in decision-making about individuals;
  • Accountability measures to ensure appropriate levels of responsibility among actors in the AI supply chain; and
  • Limiting the collection of personal data to only that which is necessary to fulfil the specified task.

Regulators also adopted an action plan setting out how they will collaborate over the 12 months until next year’s G7 meeting in Italy. During that period, regulators will hold further discussions on how to address the perceived privacy challenges of generative AI in working groups on emerging technologies and enforcement cooperation. As part of the action plan, the regulators also pledged to increase dialogue and cross-border enforcement cooperation amongst G7 authorities and the broader data protection community.

UK ICO

Following its publication of updated Guidance on AI and data protection in April 2023 (see our blog here for an overview), the ICO set out a list of eight questions that, according to the ICO, businesses developing or using generative AI that processes personal data “need to ask” themselves. In its blog post, the ICO emphasizes that existing data protection law applies to processing personal data that comes from publicly accessible sources, and commits to acting where businesses are “not following the law, and considering the impact on individuals”.

The ICO’s questions cover similar topics as the G7 Statement, including (among others):

  • Ensuring transparency – the ICO notes that businesses “must make information about the processing publicly accessible unless an exemption applies. If it does not take disproportionate effort, you must communicate this information directly to the individuals the data relates to.”
  • Mitigating security risks – in addition to mitigating the risk of personal data breaches, the ICO states that businesses “should consider and mitigate risks of model inversion and membership inference, data poisoning and other forms of adversarial attacks.”
  • Limiting unnecessary processing – the ICO advises that businesses “must collect only the data that is adequate to fulfil your stated purpose. The data should be relevant and limited to what is necessary.”
  • Preparing a Data Protection Impact Assessment (DPIA) – the ICO notes that businesses must assess and mitigate any data protection risks posed by generative AI tools via the DPIA process before starting to process personal data.

The ICO’s guidance forms part of the UK’s wider approach to AI regulation, which requires existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here). In recent months, UK regulators across a range of sectors have issued statements on generative AI. For example, in June 2023, Ofcom, the UK’s communications regulator, published a guidance note on “What generative AI means for the communications sector”, and, in May 2023, the Competition and Markets Authority (CMA) launched an inquiry into foundation models, including generative AI (see our blog post here for further details).

***

Covington regularly advises the world’s top technology companies on their most challenging regulatory, compliance, and public policy issues in the EU, UK and other major markets. We are monitoring the EU and UK’s developments very closely and will be updating this site regularly – please watch this space for further updates.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices, his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Mark Young

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” and having “great insight into the regulators.”

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, Internet-enabled devices, etc.).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.