Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the intersection of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations and has counseled multinational companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance with the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

coordinating responses to investigations into the handling of personal information under the GDPR;
counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces;
advising a major technology company on the legality of hacking defense tactics; and
advising a content company on compliance obligations under the Digital Services Act (“DSA”), including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro bono practice representing journalists with various news-gathering needs.

On July 10, 2025, the AI Office published the final version of the Code of Practice for General-Purpose AI Models (the “Code”).  The Code is a voluntary compliance tool designed to help companies comply with the AI Act obligations for providers of general-purpose AI (“GPAI”) models.  The AI Office and the AI Board will now assess the Code and may approve it via an adequacy decision.  Once approved, the European Commission is expected to formally adopt the Code via an implementing act.

The Code details how providers of GPAI models may comply with their obligations under the AI Act.  It comprises three chapters, each covering a different aspect of AI Act compliance: (i) transparency, (ii) copyright, and (iii) safety and security.  The first two chapters apply to all providers of GPAI models, while the third addresses obligations for providers of GPAI models with systemic risk.  By adhering to the Code, signatories agree to implement their AI practices in accordance with the commitments contained in the Code.

Continue Reading: AI Office Publishes Final Version of the Code of Practice for General-Purpose AI Models

On 14 July 2025, the European Commission published its final guidelines on the protection of minors under the Digital Services Act (“DSA”) (the “Guidelines”). The Guidelines are intended to provide guidance to providers of online platforms that are “accessible to minors” on meeting their obligations to “put in place appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service” (DSA, Art. 28(1)).

The European Commission published a draft version of the guidelines for consultation on 13 May 2025 (the “Draft Guidelines”) (see our blog post here). The final Guidelines include some amendments to the Draft Guidelines based on feedback received during the consultation, clarifying and further building out the recommended measures.

Although the Guidelines are non-binding, the Commission has made clear that it intends to use the Guidelines as a “significant and meaningful” benchmark when assessing in-scope providers’ compliance with Article 28(1) DSA.

Continue Reading: European Commission Makes New Announcements on the Protection of Minors Under the Digital Services Act

On June 5, 2025, the UK’s Information Commissioner’s Office (“ICO”) launched its new AI and biometrics strategy. Through the strategy, the ICO aims to increase its scrutiny of AI and biometric technologies, focusing on three priority situations, namely where: the stakes are high; there is clear public concern about the technology; and regulatory clarity can provide immediate impact.

The ICO identified three areas of focus in its strategy:

  1. Transparency and explainability, i.e., when and how the technologies affect people;
  2. Bias and discrimination, particularly where the technologies have been trained on “flawed, incomplete or unrepresentative information”; and
  3. Rights and redress, i.e., making sure that systems are accurate, appropriate safeguards are in place to protect people’s rights, and that there are ways to challenge and correct outcomes that result in harm.

Continue Reading: The ICO’s AI and biometrics strategy

The European Commission has opened a consultation to gather feedback on forthcoming guidelines “on implementing the AI Act’s rules on high-risk AI systems”.  (For more on the definition of a high-risk AI system, see our blog post here.)  The consultation is open until July 18, 2025, following which the Commission will publish a summary of the consultation results through the AI Office.

For context, the AI Act contemplates two categories of “high-risk” AI systems:

  1. Products—or safety components of products—covered by the EU product safety legislation identified in Annex I, where the product or safety component is subject to a third-party conformity assessment (Art. 6(1)); and
  2. Certain systems that fall within eight categories of use cases identified in Annex III, namely, (1) biometrics; (2) critical infrastructure; (3) education and vocational training; (4) employment, workers’ management and access to self-employment; (5) access to and enjoyment of essential private services and essential public services and benefits; (6) law enforcement; (7) migration, asylum and border control management; and (8) administration of justice and democratic processes (Art. 6(2)). Only certain use cases within each category are considered high-risk—not the entire category itself. In addition, with one exception, the AI systems must be “intended to be used” for the particular use case, e.g., “AI systems intended to be used for emotion recognition”—a use case within biometrics (category one) (id., emphasis added).

Continue Reading: The European Commission opens public consultation on high-risk AI systems

EU lawmakers are reportedly considering a delay to the enforcement of certain provisions of the EU Artificial Intelligence Act (AI Act). While the AI Act formally entered into force on 1 August 2024, its obligations apply on a rolling basis. Requirements related to AI literacy and the prohibition of specific AI practices have applied since 2 February 2025. Additional obligations are scheduled to come into effect on 2 August 2025 (general-purpose AI (GPAI) model obligations), 2 August 2026 (transparency obligations and obligations on Annex III high-risk AI systems), and 2 August 2027 (obligations on Annex I high-risk AI systems). Whether these future obligations will be enforced on schedule is now uncertain.

Continue Reading: European Commission hints at delaying the AI Act

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog posts on the guidelines on prohibited AI practices here, and on the AI literacy requirements under the AI Act here.

Continue Reading: European Commission Guidelines on the Definition of an “AI System”

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on prohibited AI practices (the “Guidelines”). Please see our blog posts on the guidelines on the definition of AI systems here, and on the AI literacy requirements under the AI Act here.

Continue Reading: European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act

On April 3, 2025, the Budapest District Court made a request for a preliminary ruling to the Court of Justice of the European Union (“CJEU”) concerning the application of EU copyright rules to outputs generated by large language model (LLM)-based chatbots, specifically Google’s Gemini (formerly Bard), in response to a user prompt. The case (C-250/25) involves a dispute between Like Company, a Hungarian news publisher, and Google Ireland Ltd.

Continue Reading: CJEU Receives Questions on Copyright Rules Applying to AI Chatbot

The European Commission first published a proposal for an AI Liability Directive (“AILD”) in September 2022 as part of a broader set of initiatives, including proposals for a new Product Liability Directive (“new PLD”) and the EU AI Act (see our blog posts here, here and here).

The AILD was intended to introduce uniform rules for certain aspects of non-contractual civil claims relating to AI by introducing disclosure requirements and rebuttable presumptions.

However, unlike the new PLD and the EU AI Act, both of which have been adopted and entered into force, the AILD has stalled, encountering resistance during the legislative process.

Continue Reading: The Future of the AI Liability Directive

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom has also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.

Continue Reading: Ofcom Explains How the UK Online Safety Act Will Apply to Generative AI