Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the intersection of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations and has counseled multinational companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

coordinating responses to investigations into the handling of personal information under the GDPR;
counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces;
advising a major technology company on the legality of hacking defense tactics; and
advising a content company on compliance obligations under the Digital Services Act (DSA), including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and requests under the Freedom of Information Act (FOIA). She maintains an active pro bono practice representing journalists with various news-gathering needs.

Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 against, and 49 abstentions, the vote is the culmination of an effort that began in April 2021, when the European Commission first published its proposal for the Act.

Here’s what lies ahead.

On February 16, 2024, the UK Information Commissioner’s Office (ICO) introduced specific guidance on content moderation and data protection. The guidance complements the Online Safety Act (OSA)—the UK’s legislation designed to ensure digital platforms mitigate illegal and harmful content. The ICO underlines that if an organisation carries out content moderation that involves personal information, “[it] must comply with data protection law.” The guidance highlights particular elements of data protection compliance that organisations should keep in mind, including in relation to establishing a legal basis and being transparent when moderating content, and complying with rules on automated decision-making. We summarize the key points below.

On February 13, 2024, the European Parliament’s Committee on Internal Market and Consumer Protection and its Committee on Civil Liberties, Justice and Home Affairs (the “Parliament Committees”) voted overwhelmingly to adopt the EU’s proposed AI Act. This follows a vote to approve the text earlier this month by the Council of Ministers’ Permanent Representatives Committee (“Coreper”). This brings the Act closer to final; the last step in the legislative process is a vote by the full European Parliament, currently scheduled to take place in April 2024.

The compromise text approved by Coreper and the Parliament Committees includes a number of significant changes as compared to earlier drafts. In this blog post, we set out some key takeaways.

In 2021, countries in EMEA continued to focus on the legal constructs around artificial intelligence (“AI”), and the momentum continues in 2022. The EU has been particularly active in AI—from its proposed horizontal AI regulation to recent enforcement and guidance—and will continue to be active going into 2022. The UK follows closely behind.