On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

G7 Data Protection and Privacy Authorities

The Statement follows meetings of G7 countries in April and May 2023, where global leaders pledged to advance “international discussions on inclusive artificial intelligence (AI) governance and interoperability” (see our blog here for further details).

The Statement identifies a number of areas in which data protection regulators consider that generative AI tools may raise risks, including (among other topics):

  • Legal authority for the processing of personal information, particularly that of children, including in relation to the datasets used to train, validate and test generative AI models;
  • Security safeguards to protect against threats and attacks, such as those that seek to extract or reproduce personal information originally in the datasets used to train an AI model;
  • Transparency measures to promote openness and explainability in the operation of generative AI tools, especially in cases where such tools are used to make or assist in decision-making about individuals;
  • Accountability measures to ensure appropriate levels of responsibility among actors in the AI supply chain; and
  • Limiting the collection of personal data to only that which is necessary to fulfil the specified task.

Regulators also adopted an action plan setting out how they will collaborate over the 12 months until next year’s G7 meeting in Italy. During that period, regulators will hold further discussions on how to address the perceived privacy challenges of generative AI in working groups on emerging technologies and enforcement cooperation. As part of the action plan, the regulators also pledged to increase dialogue and cross-border enforcement cooperation amongst G7 authorities and the broader data protection community.

UK ICO

Following its publication of updated Guidance on AI and data protection in April 2023 (see our blog here for an overview), the ICO set out a list of eight questions that, according to the ICO, businesses developing or using generative AI that processes personal data “need to ask” themselves. In its blog post, the ICO emphasizes that existing data protection law applies to processing personal data that comes from publicly accessible sources, and commits to acting where businesses are “not following the law, and considering the impact on individuals”.

The ICO’s questions cover topics similar to those addressed in the G7 Statement, including (among others):

  • Ensuring transparency – the ICO notes that businesses “must make information about the processing publicly accessible unless an exemption applies. If it does not take disproportionate effort, you must communicate this information directly to the individuals the data relates to.”
  • Mitigating security risks – in addition to mitigating the risk of personal data breaches, the ICO states that businesses “should consider and mitigate risks of model inversion and membership inference, data poisoning and other forms of adversarial attacks.”
  • Limiting unnecessary processing – the ICO advises that businesses “must collect only the data that is adequate to fulfil your stated purpose. The data should be relevant and limited to what is necessary.”
  • Preparing a Data Protection Impact Assessment (DPIA) – the ICO notes that businesses must assess and mitigate any data protection risks posed by generative AI tools via the DPIA process before starting to process personal data.

The ICO’s guidance forms part of the UK’s wider approach to AI regulation, which requires existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here). In recent months, UK regulators across a range of sectors have issued statements on generative AI. For example, in June 2023, Ofcom, the UK’s communications regulator, published a guidance note on “What generative AI means for the communications sector”, and, in May 2023, the Competition and Markets Authority (CMA) launched an inquiry into foundation models, including generative AI (see our blog post here for further details).

***

Covington regularly advises the world’s top technology companies on their most challenging regulatory, compliance, and public policy issues in the EU, UK and other major markets. We are monitoring the EU and UK’s developments very closely and will be updating this site regularly – please watch this space for further updates.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Mark Young

Mark Young is an experienced tech regulatory lawyer and a vice-chair of Covington’s Data Privacy and Cybersecurity Practice Group. He advises major global companies on their most challenging data privacy compliance matters and investigations. Mark also leads on EMEA cybersecurity matters at the firm. In these contexts, he has worked closely with some of the world’s leading technology and life sciences companies and other multinationals.

Mark has been recognized for several years in Chambers UK as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” “provides thoughtful, strategic guidance and is a pleasure to work with;” and has “great insight into the regulators.” According to the most recent edition (2024), “He’s extremely technologically sophisticated and advises on true issues of first impression, particularly in the field of AI.”

Drawing on over 15 years of experience, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology, e.g., AI, biometric data, and connected devices.
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • Counseling ad networks (demand and supply side), retailers, and other adtech companies on data privacy compliance relating to programmatic advertising, and providing strategic advice on complaints and claims in a range of jurisdictions.
  • Advising life sciences companies on industry-specific data privacy issues, including:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • engagement with healthcare professionals and marketing programs.
  • Advising on international conflict of law issues relating to white collar investigations and data privacy compliance (collecting data from employees and others, international transfers, etc.).
  • Advising various clients on the EU NIS2 Directive and UK NIS regulations and other cybersecurity-related regulations, particularly (i) cloud computing service providers, online marketplaces, social media networks, and other digital infrastructure and service providers, and (ii) medical device and pharma companies, and other manufacturers.
  • Helping a broad range of organizations prepare for and respond to cybersecurity incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, supply chain incidents, and state-sponsored attacks. Mark’s incident response expertise includes:
    • supervising technical investigations and providing updates to company boards and leaders;
    • advising on PR and related legal risks following an incident;
    • engaging with law enforcement and government agencies; and
    • advising on notification obligations and other legal risks, and representing clients before regulators around the world.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of UK and EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.