On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  It builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).

The Guidance, which provides advice and recommendations on best practice in applying core GDPR principles to AI, will be of particular relevance to organisations that develop AI and/or machine-learning systems, or integrate them into their public-facing products and services.  The ICO suggests that organisations should adopt a risk-based approach when evaluating AI systems.  The key takeaway is a familiar one: identifying and mitigating data protection risks at an early stage (i.e., the design stage) is likely to yield the best compliance results.

The Guidance has four parts, each dealing with the application of fundamental data protection principles to AI systems:

Part 1 – Accountability and Governance Implications

This section covers: (i) the use of data protection impact assessments (DPIAs) to identify and control the risks that AI systems may pose, (ii) understanding the relationship and distinction between controllers and processors in the AI context, and (iii) managing competing interests when assessing AI-related risks (for example, reconciling the use of sufficient AI training data with the principle of data minimisation).

The ICO’s recommendations include (among others):

  • Organisations should carry out DPIAs where appropriate. DPIAs are also a useful tool for documenting compliance with GDPR requirements, particularly those relating to accountability and “data protection by design”.
  • Organisations should ensure that the roles of the different parties in the AI supply chain are clearly mapped at the outset. Existing ICO guidance applies, and may help to identify controller/processor relationships. The Guidance also gives specific examples for stakeholders in the AI ecosystem.
  • If an AI system involves trade-offs between different risks, organisations should clearly document their assessments of competing interests to an auditable standard. Organisations should also document the methodology for identifying and assessing any trade-offs they have made.

Part 2 – Lawfulness, Fairness and Transparency

This section covers: (i) application of the lawfulness, fairness and transparency principles to AI systems, and (ii) how to identify appropriate purposes and legal bases in the AI context.

The ICO’s recommendations include (among others):

  • Organisations should clearly document (i) the source of any input data, (ii) whether the outputs of the AI system are “statistically informed guesses” as opposed to facts, and (iii) any inaccurate input data or statistical flaw in the AI system that might affect the quality of the output from the AI system (a machine-readable record of this kind is sketched after this list).
  • Because the purposes and risks of processing associated with each phase often differ, organisations should consider separate legal bases for processing personal data at each stage of the AI development and deployment process. The Guidance also includes detailed recommendations for which legal bases should be used in certain situations.
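To illustrate the documentation point above, the sketch below shows one way an organisation might record the provenance of input data and flag an output as a statistical estimate rather than a fact. The schema and field names are our own illustrative assumptions; the Guidance calls for this information to be documented but does not prescribe any particular format.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class OutputRecord:
    """Hypothetical audit record for one AI system output.

    The Guidance asks organisations to document data sources, the
    statistical nature of outputs, and known flaws; this schema is
    an illustrative assumption, not an ICO-mandated format.
    """
    input_source: str               # provenance of the input data
    output_value: float             # what the system produced
    is_statistical_estimate: bool   # an "informed guess" rather than a fact
    known_flaws: list               # inaccuracies or statistical flaws
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: log a single (hypothetical) model output for audit purposes.
record = OutputRecord(
    input_source="crm_export_2020_07",
    output_value=0.82,
    is_statistical_estimate=True,
    known_flaws=["training data under-represents applicants aged 18-21"],
)
print(json.dumps(asdict(record), indent=2))
```

Keeping a record of this kind alongside each output would make it straightforward to evidence, on audit, both where the data came from and how the organisation characterised the reliability of the result.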

Part 3 – Assessing Security and Data Minimisation

This section covers: (i) data security issues common to AI, (ii) types of privacy attacks to which AI systems are susceptible, and (iii) compliance with the principle of data minimisation.

The ICO’s recommendations include (among others):

  • Organisations should implement effective risk management practices, including by effectively tracking and managing training data, and ensuring “pipeline” security by separating the AI development environment from the rest of the organisation’s IT system.
  • Organisations should consider applying privacy-enhancing techniques (e.g., perturbation, federated learning, and the use of synthetic data) to training data to minimise the risk of the data being traced back to individuals (a minimal illustration of perturbation follows below).
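As a minimal illustration of the perturbation technique mentioned above, the sketch below adds zero-mean Laplace noise to numeric training features. The noise scale is an illustrative assumption; a real deployment would calibrate it (for example, against a formal differential-privacy budget) and would likely combine it with the other techniques the ICO lists.

```python
import numpy as np

def perturb_features(X, scale=0.5, seed=None):
    """Add zero-mean Laplace noise to every numeric feature.

    The noise masks individual records while roughly preserving the
    aggregate statistics a model learns from. `scale` governs the
    privacy/utility trade-off; the value used here is illustrative,
    not a calibrated differential-privacy guarantee.
    """
    rng = np.random.default_rng(seed)
    return X + rng.laplace(loc=0.0, scale=scale, size=X.shape)

# Toy training matrix (e.g., age and tenure in years). In practice the
# scale would be tuned per feature and the raw data access-controlled.
X_train = np.array([[35.0, 4.0],
                    [29.0, 2.5],
                    [51.0, 11.0]])
X_private = perturb_features(X_train, scale=0.5, seed=42)
```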

Part 4 – Ensuring Data Subject Rights

This section covers: (i) fulfilling data subject rights in the context of data input and output of AI systems, and (ii) data subject rights in the context of automated decision-making.

The ICO’s recommendations include (among others):

  • Organisations should ensure that systems are in place to effectively respond to and comply with data subject rights requests. Organisations should avoid categorising data subject requests as “manifestly unfounded or excessive” simply because fulfilment of such requests is more challenging in the AI context.
  • Organisations should design AI systems to facilitate effective human review, and provide sufficient training to staff to ensure they can critically assess the outputs of, and understand the limitations of, the AI system.

The ICO will continue to develop the Guidance, along with tools “that promote privacy by design to those developing and using AI”. This would appear to include a forthcoming “toolkit” to “provide further practical support to organisations auditing the compliance of their own AI systems”. The ICO encourages organisations to provide feedback on the Guidance to make sure that it remains relevant and consistent with emerging developments. In the Guidance, the ICO also indicates that it is planning separately to revise its Cloud Computing Guidance in 2021.

The Guidance comes a few weeks after the European Commission’s High-Level Expert Group on AI published its “Assessment List for Trustworthy Artificial Intelligence,” designed to help companies identify the risks of AI systems they develop, deploy or procure, as well as appropriate mitigation measures (the subject of a Covington blog available here).

The team at Covington will continue to monitor developments in this space.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues under laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.