
Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the intersection of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations, and has counseled multinational companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

coordinating responses to investigations into the handling of personal information under the GDPR,
counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces,
advising a major technology company on the legality of hacking defense tactics,
advising a content company on compliance obligations under the DSA, including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro bono practice representing journalists with various newsgathering needs.

The European Commission first published a proposal for an AI Liability Directive (“AILD”) in September 2022 as part of a broader set of initiatives, including proposals for a new Product Liability Directive (“new PLD”) and the EU AI Act (see our blog posts here, here and here).

The AILD was intended to introduce uniform rules for certain aspects of non-contractual civil claims relating to AI, by introducing disclosure requirements and rebuttable presumptions.

However, unlike the new PLD and EU AI Act, which have both been adopted and have entered into force, the AILD has encountered stagnation and resistance during the legislative process.

Continue Reading The Future of the AI Liability Directive

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.

Continue Reading Ofcom Explains How the UK Online Safety Act Will Apply to Generative AI

On February 20, 2025, the European Commission’s AI Office held a webinar explaining the AI literacy obligation under Article 4 of the EU’s AI Act.  This obligation started to apply on February 2, 2025.  At this webinar, the Commission highlighted the recently published repository of AI literacy practices.  This repository compiles the practices that some AI Pact companies have adopted to ensure a sufficient level of AI literacy in their workforce.

Continue Reading European Commission Provides Guidance on AI Literacy Requirement under the EU AI Act

The Commission and the European Board for Digital Services have announced the integration of the revised voluntary Code of conduct on countering illegal hate speech online + (“Code of Conduct+”) into the framework of the Digital Services Act (“DSA”). Article 45 of the DSA states that, where significant systemic risks emerge under Article 34(1) (concerning the obligation on very large online platforms (“VLOPs”) and very large online search engines (“VLOSEs”) to identify, analyse, and assess systemic risks), and concern several VLOPs or VLOSEs, the Commission may invite VLOPs and VLOSEs to participate in the drawing up of codes of conduct, including commitments to take risk mitigation measures and to report on those measures and their outcomes. The Code of Conduct+ was adopted in this context. VLOPs and VLOSEs’ adherence to the Code of Conduct+ may be considered as a risk mitigation measure under Article 35 DSA, but participation in and implementation of the Code of Conduct+ “should not in itself presume compliance with [the DSA]” (Recital 104).

The Code of Conduct+—which builds on the Commission’s original Code of Conduct on countering illegal hate speech online, published in 2016—seeks to strengthen how Signatories address content defined by EU and national laws as illegal hate speech. Adhering to the Code of Conduct+’s commitments will be part of the annual independent audit of VLOPs and VLOSEs required by the DSA (Art. 37(1)(b)), but smaller companies are free to sign up to the Code as well.

Continue Reading Introduction of the Revised Code of Conduct+ and the Digital Services Act

On November 4, 2024, the European Commission (“Commission”) adopted the implementing regulation on transparency reporting under the Digital Services Act (“DSA”). The implementing regulation is intended to harmonise the format and reporting time periods of the transparency reports required by the DSA.

Transparency reporting is required under Articles 15, 24 and …

Continue Reading European Commission Adopts Implementing Regulation on DSA Transparency Reporting Obligations

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices, and sets out regulations on “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (“GPAI”) models.

The AI Act’s regulations will take effect in different stages.  Rules regarding prohibited practices will apply as of 2 February 2025; obligations on GPAI models will apply as of 2 August 2025; and both transparency obligations and obligations on high-risk AI systems will apply as of 2 August 2026.  That said, there are exceptions for high-risk AI systems and GPAI models already placed on the market:

Continue Reading EU Artificial Intelligence Act Published

Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 votes against, and 49 abstentions, the vote is a culmination of an effort that began in April 2021, when the EU Commission first published its proposal for the Act.

Here’s what lies ahead:

Continue Reading EU Parliament Adopts AI Act

On February 16, 2024, the UK Information Commissioner’s Office (ICO) introduced specific guidance on content moderation and data protection. The guidance complements the Online Safety Act (OSA)—the UK’s legislation designed to ensure digital platforms mitigate illegal and harmful content.  The ICO underlines that if an organisation carries out content moderation that involves personal information, “[it] must comply with data protection law.” The guidance highlights particular elements of data protection compliance that organisations should keep in mind, including in relation to establishing a legal basis and being transparent when moderating content, and complying with rules on automated decision-making. We summarize the key points below.

Continue Reading ICO Releases Guidance on Content Moderation and Data Protection

On February 13, 2024, the European Parliament’s Committee on Internal Market and Consumer Protection and its Committee on Civil Liberties, Justice and Home Affairs (the “Parliament Committees”) voted overwhelmingly to adopt the EU’s proposed AI Act. This follows a vote to approve the text earlier this month by the Council of Ministers’ Permanent Representatives Committee (“Coreper”). This brings the Act closer to final; the last step in the legislative process is a vote by the full European Parliament, currently scheduled to take place in April 2024.

The compromise text approved by Coreper and the Parliament Committees includes a number of significant changes as compared to earlier drafts. In this blog post, we set out some key takeaways.

Continue Reading EU AI Act: Key Takeaways from the Compromise Text

In 2021, countries in EMEA continued to focus on the legal constructs around artificial intelligence (“AI”), and the momentum continues in 2022. The EU has been particularly active in AI—from its proposed horizontal AI regulation to recent enforcement and guidance—and will continue to be active going into 2022. Similarly, the UK follows closely behind with …
Continue Reading EMEA AI Legislative and Regulatory Roundup 2021 and Forecast 2022