Stacy Young

Stacy Young is an associate in the London office. She advises technology and life sciences companies across a range of privacy and regulatory issues spanning AI, clinical trials, data protection and cybersecurity.

On August 27, 2025, the imageboard website 4chan Community Support LLC (“4chan”) and discussion forum Lolcow, LLC (dba “Kiwi Farms”) (together, the “Plaintiffs”) filed a claim in the U.S. District Court for the District of Columbia (the “Court”) asking the Court to declare, in effect, that the UK’s Online Safety Act 2023 (“OSA”) is unenforceable against the Plaintiffs. The claim was filed against Ofcom, the UK communications regulator tasked with implementing and enforcing the OSA.

The Plaintiffs allege that the enforcement of the OSA against American companies is unconstitutional and that Ofcom’s actions to enforce the OSA are “intended to deliberately undermine the First Amendment and American competitiveness” (para. 113). As part of their claim, the Plaintiffs seek two permanent injunctions: one prohibiting Ofcom from enforcing the OSA against the Plaintiffs, and the other prohibiting Ofcom from issuing any further orders or demands to the Plaintiffs without “proper service” under the U.S.-UK Mutual Legal Assistance Treaty.

Continue Reading 4chan and Kiwi Farms ask federal US court to declare unenforceability of the Online Safety Act

On July 10, 2025, the AI Office published the final version of the Code of Practice for General-Purpose AI Models (the “Code”).  The Code is a voluntary compliance tool designed to help companies comply with the AI Act obligations for providers of general-purpose AI (“GPAI”) models.  The AI Office and the AI Board will now assess the Code and may approve it via an adequacy decision.  Once approved, the European Commission is expected to formally adopt the Code via an implementing act.

The Code details how providers of GPAI models may comply with their obligations under the AI Act.  It comprises three chapters, each covering a different aspect of AI Act compliance: (i) transparency, (ii) copyright, and (iii) safety and security.  The first two chapters apply to all providers of GPAI models, while the third addresses obligations for providers of GPAI models with systemic risk.  By adhering to the Code, signatories agree to implement their AI practices in accordance with the commitments contained in the Code.

Continue Reading AI Office Publishes Final Version of the Code of Practice for General-Purpose AI Models

On June 5, 2025, the UK’s Information Commissioner’s Office (“ICO”) launched its new AI and biometrics strategy. The strategy aims to increase the ICO’s scrutiny of AI and biometric technologies, focusing on three priority situations, namely where: the stakes are high; there is clear public concern about the technology; and regulatory clarity can have immediate impact.

The ICO identified three areas of focus in its strategy:

  1. Transparency and explainability, i.e., when and how the technologies affect people;
  2. Bias and discrimination, particularly where the technologies have been trained on “flawed, incomplete or unrepresentative information”; and
  3. Rights and redress, i.e., making sure that systems are accurate, appropriate safeguards are in place to protect people’s rights, and that there are ways to challenge and correct outcomes that result in harm.

Continue Reading The ICO’s AI and biometrics strategy

On June 2, 2025, the Global Cross-Border Privacy Rules (“CBPR”) Forum officially launched the Global CBPR and Privacy Recognition for Processors (“PRP”) certifications.  Building on the existing Asia-Pacific Economic Cooperation (“APEC”) CBPR framework, the Global CBPR and PRP systems aim to extend privacy certifications beyond the APEC region.  They will allow controllers and processors to voluntarily undergo certification of their privacy and data governance measures under a framework that is recognized by many data protection authorities around the world.  The Global CBPR and PRP certifications are also expected to be recognized in multiple jurisdictions as a legitimizing mechanism for cross-border data transfers.

Continue Reading Global CBPR and PRP Certifications Launched: A New International Data Transfer Mechanism

On November 6, 2024, the UK Information Commissioner’s Office (ICO) released its AI Tools in recruitment audit outcomes report (“Report”). The Report documents the ICO’s findings from a series of consensual audit engagements conducted with AI tool developers and providers. The goal of this process was to assess compliance with data protection law, identify any risks or room for improvement, and provide recommendations for AI providers and recruiters. The audits covered sourcing, screening, and selection processes in recruitment, but did not include AI tools used to process biometric data, or generative AI. This work follows the publication of the Responsible AI in Recruitment guide by the Department for Science, Innovation, and Technology (DSIT) in March 2024.

Continue Reading ICO Audit on AI Recruitment Tools

On July 30, 2024, the European Commission announced the launch of a consultation on trustworthy general-purpose artificial intelligence (“GPAI”) models and an invitation to stakeholders to express their interest in participating in the drawing up of the first GPAI Code of Practice (the “Code”) under the newly passed EU AI Act (see our previous blog here). Once finalized, GPAI model providers will be able to voluntarily rely on the Code to demonstrate their compliance with certain obligations in the AI Act.

Continue Reading European Commission Launches Consultation and Call for Expression of Interest on GPAI Code of Practice

Last month, the European Commission published a draft Implementing Regulation (“IR”) under the EU’s revised Network and Information Systems Directive (“NIS2”). The draft IR applies to entities in the digital infrastructure sector, ICT service management, and digital service providers (e.g., cloud computing providers, online marketplaces, and online social networks). It sets out further detail on (i) the specific cybersecurity risk-management measures those entities must implement; and (ii) when an incident affecting those entities is considered “significant”. Once finalized, it will apply from October 18, 2024.

Many companies may be taken aback by the granular nature of some of the technical measures listed and the criteria to determine if an incident is significant and reportable – especially coming so close to the October deadline for Member States to start applying their national transpositions of NIS2.

The IR is open for feedback via the Commission’s Have Your Say portal until July 25.

Continue Reading NIS2: Commission Publishes Long-Awaited Draft Implementing Regulation On Technical And Methodological Requirements And Significant Incidents

On April 3, 2024, the UK Information Commissioner’s Office (“ICO”) published its 2024-2025 Children’s code strategy (the “Strategy”), which sets out its priorities for protecting children’s personal information online. This builds on the Children’s code of practice (“Children’s Code”), which the ICO introduced in 2021 to ensure that all online services which process children’s data are designed in a manner that is safe for children.

Continue Reading ICO sets outs 2024-2025 priorities to protect children online

On December 5, 2023, the Spanish presidency of the Council of the EU issued a declaration to strengthen collaboration with Member States and the European Commission to develop a leading quantum technology ecosystem in Europe.

The declaration acknowledges the revolutionary potential of quantum computing, which uses quantum mechanics principles and quantum bits known as “qubits” to solve complex mathematical problems exponentially faster than classical computers.

The declaration was launched with eight Member State signatories (Denmark, Finland, Germany, Greece, Hungary, Italy, Slovenia, and Sweden), and invites other Member States to sign. By doing so, they agree to recognize the “strategic importance of quantum technologies for the scientific and industrial competitiveness of the EU” and commit to collaborating to make Europe the “‘quantum valley’ of the world, the leading region globally for quantum excellence and innovation.”

Continue Reading Quantum Computing: Action in the EU and Potential Impacts

On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code, and translations.  According to the Committee, it launched the inquiry to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models.”

This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here).  UK regulators have already begun implementing this approach.  For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection and on generative AI tools that process personal data (see our blogs here and here for more details).

Continue Reading UK House of Lords Announces Inquiry into Large Language Models