On February 16, 2024, the UK Information Commissioner's Office (ICO) published guidance on content moderation and data protection. The guidance complements the Online Safety Act (OSA), the UK's legislation designed to ensure digital platforms mitigate illegal and harmful content. The ICO underlines that if an organisation carries out content moderation that involves personal information, "[it] must comply with data protection law." The guidance highlights particular elements of data protection compliance that organisations should keep in mind, including establishing a lawful basis for moderation, being transparent with users about it, and complying with rules on automated decision-making. We summarize the key points below.

According to the guidance, the processing of personal information for content moderation purposes must be:

  • Lawful. Organisations must identify a lawful basis before using personal information in their content moderation systems. The guidance states that the bases most likely to be relevant to content moderation are "legal obligation" (i.e., the need to comply with statutory obligations, such as those in the OSA) and "legitimate interests." Where content moderation involves special category data, the organisation must identify a condition for processing such information under Article 9 of the UK GDPR. Where the processing involves criminal offence data (and is not under the control of an official authority), the organisation must identify a condition for processing set out in Schedule 1 of the Data Protection Act 2018.
  • Fair. The processing of personal data in the context of content moderation must be conducted in a manner that people would reasonably expect, and moderation systems must "perform accurately" and "produce unbiased, consistent outputs." The guidance recommends that organisations check their content moderation systems for accuracy and bias (see the sketch following this list).
  • Transparent. Organisations must provide users with certain information, including: why they are using personal information; the lawful basis for processing; the types of personal information used; the decisions taken with the information and how these may affect use of the service; whether the organisation keeps the personal information and for how long; whether the information is shared with other organisations; and how users may exercise their data protection rights. The guidance notes that the OSA also requires providers of regulated user-to-user services to disclose information about proactive technology used to comply with the illegal content safety duties or the safety duties for the protection of children.
  • Purpose-limited. Organisations must be clear about why they are using personal information for content moderation (for example, to comply with the OSA) and what they intend to do with the information.
  • Subject to data minimization principles. The guidance notes that content moderation is "highly contextual" and may require information beyond the content that is the subject of the moderation, such as previous posts or records of interaction with the service. Whilst an organisation engaging in content moderation may take such information into account, it must be able to demonstrate that doing so is necessary to achieve its purpose and that no less intrusive option is available.
  • Accurate. Organisations must “take all reasonable steps” to ensure personal information used and generated through content moderation is correct and not misleading.  Where an organisation determines a user has violated its policies, it must ensure any record on the user’s account is accurate and up-to-date.
  • Secure. Organisations must put in place technical and organisational measures to ensure an appropriate level of data security. If an organisation uses a third-party provider for content moderation, it "must choose one that provides sufficient guarantees about its security measures."
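
To make the fairness checks concrete: one simple approach is to periodically compare a sample of automated moderation decisions against human "ground truth" labels, and to compare action rates across user groups as a rough consistency signal. The sketch below is a minimal, hypothetical illustration; the sample schema (`model_label`, `human_label`, `user_group`) and the use of per-group action rates are our assumptions, not prescriptions from the ICO guidance.

```python
from collections import defaultdict

def accuracy_and_group_rates(samples: list[dict]) -> tuple[float, dict[str, float]]:
    """Overall accuracy of moderation decisions against human labels,
    plus per-group action rates as a crude consistency/bias signal.

    Each sample is assumed (hypothetically) to look like:
      {"model_label": "violation" or "ok",
       "human_label": "violation" or "ok",
       "user_group": "<some cohort>"}
    """
    correct = 0
    per_group = defaultdict(lambda: [0, 0])  # group -> [actioned, total]
    for s in samples:
        correct += s["model_label"] == s["human_label"]
        per_group[s["user_group"]][0] += s["model_label"] == "violation"
        per_group[s["user_group"]][1] += 1
    rates = {g: actioned / total for g, (actioned, total) in per_group.items()}
    return correct / len(samples), rates
```

Large gaps in per-group action rates do not by themselves establish bias, but they indicate where a closer review of the system's outputs may be warranted.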

Organisations that use automated decision-making in content moderation must also determine whether that process involves "solely automated decisions that have legal or similarly significant effects on people" for the purposes of Article 22 of the UK GDPR. The guidance provides several examples specific to content moderation systems to aid organisations in making this determination. Should the processing fall within Article 22, the guidance underscores that it may only take place if it is authorised by domestic law, necessary for a contract, or based on a person's explicit consent.
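
In engineering terms, one common safeguard where none of the Article 22 conditions applies is to ensure that decisions with significant effects are not solely automated, for example by routing them to a human reviewer before they take effect. The following is a minimal sketch of such routing, assuming hypothetical action names; which actions actually carry "legal or similarly significant effects" is a legal judgment that must be made case by case.

```python
from dataclasses import dataclass

# Hypothetical action names. Which actions have "legal or similarly
# significant effects" is a legal assessment, assumed here for illustration.
SIGNIFICANT_ACTIONS = {"suspend_account", "ban_account"}

@dataclass
class ModerationDecision:
    user_id: str
    action: str              # e.g. "remove_post", "suspend_account"
    solely_automated: bool   # True if there is no meaningful human involvement

def apply_decision(decision: ModerationDecision) -> str:
    """Route decisions that could fall within Article 22 UK GDPR to a
    human reviewer, so the final decision is not solely automated."""
    if decision.solely_automated and decision.action in SIGNIFICANT_ACTIONS:
        return "queued_for_human_review"
    return "applied_automatically"

# Example: a single post removal is applied automatically, while an
# account suspension is queued for human review first.
print(apply_decision(ModerationDecision("u1", "remove_post", True)))
print(apply_decision(ModerationDecision("u1", "suspend_account", True)))
```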

Finally, the guidance states that personal information obtained in the course of content moderation must not be kept for longer than necessary.  According to the guidance, this includes keeping information “‘just in case’ it might be relevant to the future”. The guidance recommends that organisations have clear contractual agreements with third parties limiting the information that they maintain to what is necessary.
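
Retention limits of this kind are often enforced in practice with a scheduled purge keyed to record type. The sketch below assumes a hypothetical record schema and placeholder retention periods; actual periods must be justified by the organisation's own purposes, not copied from this example.

```python
from datetime import datetime, timedelta, timezone

# Placeholder retention periods per record type (illustrative only).
RETENTION_PERIODS = {
    "content_flag": timedelta(days=90),
    "violation_record": timedelta(days=365),
}

def purge_expired(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Keep only moderation records still within their retention period.
    Records with no defined period are dropped rather than kept
    "just in case". Each record is assumed to carry a "type" key and a
    timezone-aware "created_at" datetime."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if r["type"] in RETENTION_PERIODS
        and now - r["created_at"] <= RETENTION_PERIODS[r["type"]]
    ]
```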

*                      *                      *

The Covington team regularly advises on privacy and content-moderation laws in the UK and the EU. The nexus between data privacy and content moderation is an evolving area, and we will continue to monitor developments.  We are happy to answer any questions you may have on this topic.

Mark Young

Mark Young is an experienced tech regulatory lawyer and a vice-chair of Covington’s Data Privacy and Cybersecurity Practice Group. He advises major global companies on their most challenging data privacy compliance matters and investigations. Mark also leads on EMEA cybersecurity matters at the firm. In these contexts, he has worked closely with some of the world’s leading technology and life sciences companies and other multinationals.

Mark has been recognized for several years in Chambers UK as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” “provides thoughtful, strategic guidance and is a pleasure to work with;” and has “great insight into the regulators.” According to the most recent edition (2024), “He’s extremely technologically sophisticated and advises on true issues of first impression, particularly in the field of AI.”

Drawing on over 15 years of experience, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology, e.g., AI, biometric data, and connected devices.
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • Counseling ad networks (demand and supply side), retailers, and other adtech companies on data privacy compliance relating to programmatic advertising, and providing strategic advice on complaints and claims in a range of jurisdictions.
  • Advising life sciences companies on industry-specific data privacy issues, including:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • engagement with healthcare professionals and marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance (collecting data from employees and others, international transfers, etc.).
  • Advising various clients on the EU NIS2 Directive and UK NIS regulations and other cybersecurity-related regulations, particularly (i) cloud computing service providers, online marketplaces, social media networks, and other digital infrastructure and service providers, and (ii) medical device and pharma companies, and other manufacturers.
  • Helping a broad range of organizations prepare for and respond to cybersecurity incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, supply chain incidents, and state-sponsored attacks. Mark’s incident response expertise includes:
    • supervising technical investigations and providing updates to company boards and leaders;
    • advising on PR and related legal risks following an incident;
    • engaging with law enforcement and government agencies; and
    • advising on notification obligations and other legal risks, and representing clients before regulators around the world.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of UK and EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.
Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the cross-section of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations, and has counseled multi-national companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

  • coordinating responses to investigations into the handling of personal information under the GDPR,
  • counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces,
  • advising a major technology company on the legality of hacking defense tactics,
  • advising a content company on compliance obligations under the DSA, including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro-bono practice representing journalists with various news-gathering needs.