On February 16, 2024, the UK Information Commissioner's Office (ICO) published guidance on content moderation and data protection. The guidance complements the Online Safety Act (OSA), the UK's legislation designed to ensure digital platforms mitigate illegal and harmful content. The ICO underlines that if an organisation carries out content moderation that involves personal information, "[it] must comply with data protection law." The guidance highlights particular elements of data protection compliance that organisations should keep in mind, including establishing a lawful basis for and being transparent about content moderation, and complying with rules on automated decision-making. We summarize the key points below.

According to the guidance, the processing of personal information for content moderation purposes must be:

  • Lawful. Organisations must identify a lawful basis before using personal information in their content moderation systems. The guidance states the bases most likely to be relevant to content moderation are "legal obligation" (i.e., the need to comply with statutory obligations such as those in the OSA) and legitimate interests. Where content moderation involves special category data, the organisation must identify a condition for processing such information under Article 9 of the UK GDPR. Where the processing involves criminal offence information (and where the processing is not under the control of an official authority), the organisation must identify a condition for processing set out in Schedule 1 of the Data Protection Act 2018.
  • Fair. Processing of personal data in the context of content moderation must be conducted in a manner that people would reasonably expect, "perform accurately", and "produce unbiased, consistent outputs." The guidance recommends organisations conduct checks of their content moderation systems for accuracy and bias.
  • Transparent. Organisations must provide various information, including: why they are using personal information; the lawful basis for processing; the type of personal information used; the decisions taken with the information and how this may impact use of the service; whether the organisation is keeping the personal information and for how long; whether the information is shared with other organisations; and how users may exercise their data protection rights. The guidance notes that the OSA also requires providers of regulated user-to-user services to disclose information about proactive technology used for complying with illegal content safety duties or the safety duties for the protection of children.
  • Purpose-limited. Organisations must be clear about why they are using personal information for content moderation (for example, to comply with the OSA) and what they intend to do with the information.
  • Subject to data minimization principles. The guidance notes that content moderation is “highly contextual” and may require information beyond the content that is the subject of the moderation, such as previous posts or records of interaction with the service.  Whilst an organisation engaging in content moderation may take into account such information, organisations must be able to demonstrate that doing so is necessary to achieve their purpose, and that no less intrusive option is available.
  • Accurate. Organisations must “take all reasonable steps” to ensure personal information used and generated through content moderation is correct and not misleading.  Where an organisation determines a user has violated its policies, it must ensure any record on the user’s account is accurate and up-to-date.
  • Secure. Organisations must put in place technical and organisational measures to ensure an appropriate level of data security.  In the event that an organisation uses third-party moderators to engage in content moderation the organisation “must choose one that provides sufficient guarantees about its security measures.”

Organisations that use automated decision-making in content moderation must also determine whether that process involves "solely automated decisions that have legal or similarly significant effects on people" for the purposes of Article 22 of the UK GDPR. The guidance provides several examples specific to content moderation systems to aid organisations in making this determination. Should the processing qualify as automated decision-making under Article 22, the guidance underscores that such processing may only take place if it is authorised by domestic law, necessary for a contract, or based on a person's explicit consent.

Finally, the guidance states that personal information obtained in the course of content moderation must not be kept for longer than necessary.  According to the guidance, this includes keeping information “‘just in case’ it might be relevant to the future”. The guidance recommends that organisations have clear contractual agreements with third parties limiting the information that they maintain to what is necessary.

*                      *                      *

The Covington team regularly advises on privacy and content-moderation laws in the UK and the EU. The nexus between data privacy and content moderation is an evolving area, and we will continue to monitor developments.  We are happy to answer any questions you may have on this topic.

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” and having “great insight into the regulators.”

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, Internet-enabled devices, etc.).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.