Dumitha Gunawardene

Dumitha Gunawardene is an associate in the Commercial Litigation Practice Group. His practice covers a broad range of complex commercial and contractual disputes and international commercial arbitrations. Dumitha has represented clients in the English High Court as well as in arbitrations under ICC, LCIA and DIAC Rules.

On August 27, 2025, 4chan Community Support LLC (“4chan”), operator of the imageboard website 4chan, and Lolcow, LLC (d/b/a “Kiwi Farms”), operator of the discussion forum Kiwi Farms (together, the “Plaintiffs”), filed a claim in the U.S. District Court for the District of Columbia (the “Court”) asking the Court to declare, in effect, that the UK’s Online Safety Act 2023 (“OSA”) is unenforceable against the Plaintiffs. The claim was filed against Ofcom, the UK’s communications services regulator tasked with regulating and enforcing the OSA.

The Plaintiffs allege that the enforcement of the OSA against American companies is unconstitutional and that Ofcom’s actions to enforce the OSA are “intended to deliberately undermine the First Amendment and American competitiveness” (para. 113). As part of their claim, the Plaintiffs seek two permanent injunctions: one prohibiting Ofcom from enforcing the OSA against the Plaintiffs, and the other prohibiting Ofcom from issuing any further orders or demands to the Plaintiffs without “proper service” under the U.S.-UK Mutual Legal Assistance Treaty.

On July 10, 2025, the AI Office published the final version of the Code of Practice for General-Purpose AI Models (the “Code”).  The Code is a voluntary compliance tool designed to help companies comply with the AI Act obligations for providers of general-purpose AI (“GPAI”) models.  The AI Office and the AI Board will now assess the Code and may approve it via an adequacy decision.  Once approved, the European Commission is expected to formally adopt the Code via an implementing act.

The Code details how providers of GPAI models may comply with their obligations under the AI Act.  It comprises three chapters, each covering different aspects of AI Act compliance: (i) transparency, (ii) copyright, and (iii) safety and security.  The first two chapters apply to all providers of GPAI models, while the third addresses obligations for providers of GPAI models with systemic risk.  By adhering to the Code, signatories agree to implement their AI practices in accordance with the commitments contained in the Code.

The European Commission has opened a consultation to gather feedback on forthcoming guidelines “on implementing the AI Act’s rules on high-risk AI systems”.  (For more on the definition of a high-risk AI system, see our blog post here.)  The consultation is open until July 18, 2025, following which the Commission will publish a summary of the consultation results through the AI Office.

For context, the AI Act contemplates two categories of “high-risk” AI systems:

  1. Products—or safety components of products—covered by the EU product safety legislation identified in Annex I, where the product or safety component is subject to a third-party conformity assessment (Art. 6(1)); and
  2. Certain systems that fall within eight categories of use cases identified in Annex III, namely, (1) biometrics; (2) critical infrastructure; (3) education and vocational training; (4) employment, workers’ management and access to self-employment; (5) access to and enjoyment of essential private services and essential public services and benefits; (6) law enforcement; (7) migration, asylum and border control management; and (8) administration of justice and democratic processes (Art. 6(2)). Only certain use cases within each category are considered high-risk—not the entire category itself. In addition, with one exception, the AI systems must be “intended to be used” for the particular use case, e.g., “AI systems intended to be used for emotion recognition”—a use case within biometrics (category one) (id., emphasis added).


EU lawmakers are reportedly considering a delay in the enforcement of certain provisions of the EU Artificial Intelligence Act (AI Act). While the AI Act formally entered into force on 1 August 2024, its obligations apply on a rolling basis. Requirements related to AI literacy and the prohibition of specific AI practices have been applicable since 2 February 2025. Additional obligations are scheduled to come into effect on 2 August 2025 (general-purpose AI (GPAI) model obligations), 2 August 2026 (transparency obligations and obligations on Annex III high-risk AI systems), and 2 August 2027 (obligations on Annex I high-risk AI systems). The timeline for enforcement of these future obligations now appears uncertain.