Online Safety

On August 27, 2025, the imageboard website 4chan Community Support LLC (“4chan”) and discussion forum Lolcow, LLC (dba “Kiwi Farms”) (together, the “Plaintiffs”) filed a claim in the U.S. District Court for the District of Columbia (“Court”) asking the Court to declare, in effect, that the UK’s Online Safety Act 2023 (“OSA”) is unenforceable against the Plaintiffs. The claim was filed against Ofcom, the UK’s communications regulator, which is tasked with enforcing the OSA.

The Plaintiffs allege that the enforcement of the OSA against American companies is unconstitutional and that Ofcom’s actions to enforce the OSA are “intended to deliberately undermine the First Amendment and American competitiveness” (para. 113). As part of their claim, the Plaintiffs seek two permanent injunctions: one prohibiting Ofcom from enforcing the OSA against the Plaintiffs, and the other prohibiting Ofcom from issuing any further orders or demands to the Plaintiffs without “proper service” under the U.S.-UK Mutual Legal Assistance Treaty.

On 14 July 2025, the European Commission published its final guidelines on the protection of minors under the Digital Services Act (“DSA”) (the “Guidelines”). The Guidelines are intended to provide guidance to providers of online platforms that are “accessible to minors” on meeting their obligations to “put in place appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service” (DSA, Art. 28(1)).

The European Commission published a draft version of the guidelines for consultation on 13 May 2025 (“Draft Guidelines”) (see our blog post here). The final Guidelines include some amendments to the Draft Guidelines on the basis of the feedback received during consultation, clarifying and building out further the recommended measures.

Although the Guidelines are non-binding, the Commission has made clear that it intends to use the Guidelines as a “significant and meaningful” benchmark when assessing in-scope providers’ compliance with Article 28(1) DSA.

Ofcom announced on 9 July 2025 that it has contacted certain providers of “user-to-user” and “search” services that are “likely to be accessed by children”, requesting that they submit records of their children’s risk assessments (“CRA”) by 7 August 2025 or face enforcement action.

As noted in our previous blog post here, in-scope providers have until 24 July 2025 to complete their first CRA—meaning that Ofcom is initiating enforcement on risk assessments early.

On 24 April 2025, Ofcom published a statement on the protection of children online (“Statement”). The Statement includes Ofcom’s final Children’s Risk Assessment Guidance (“Guidance”). Publication of the Guidance triggers the deadline for service providers regulated by the Online Safety Act 2023 (“OSA”) to complete their first “children’s risk assessment” (“CRA”)—specifically, 24 July 2025.  The Statement also confirms that the draft Protection of Children Codes of Practice for user-to-user and search services (“Codes”) have been laid before Parliament. Subject to completion of the Parliamentary process, providers must comply with the OSA’s “safety duties protecting children” from 25 July 2025.

Who do the Codes and Guidance apply to?

The Codes and Guidance apply to providers of “user-to-user” and “search” services that are “likely to be accessed by children”, which is determined based on a test set out in the OSA. In-scope providers were required to have completed an assessment—known as a “children’s access assessment”—by 16 April 2025 to determine if their services satisfy this test.

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”). In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA. More recently, Ofcom has also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.

The Commission and the European Board for Digital Services have announced the integration of the revised voluntary Code of conduct on countering illegal hate speech online + (“Code of Conduct+”) into the framework of the Digital Services Act (“DSA”). Under Article 45 of the DSA, where significant systemic risks within the meaning of Article 34(1) (which obliges very large online platforms (“VLOPs”) and very large online search engines (“VLOSEs”) to identify, analyse, and assess systemic risks) emerge and concern several VLOPs or VLOSEs, the Commission may invite VLOPs and VLOSEs to participate in the drawing up of codes of conduct, including commitments to take risk mitigation measures and to report on those measures and their outcomes. The Code of Conduct+ was adopted in this context. VLOPs’ and VLOSEs’ adherence to the Code of Conduct+ may be considered a risk mitigation measure under Article 35 DSA, but participation in and implementation of the Code of Conduct+ “should not in itself presume compliance with [the DSA]” (Recital 104).

The Code of Conduct+—which builds on the Commission’s original Code of Conduct on countering illegal hate speech online, published in 2016—seeks to strengthen how Signatories address content defined by EU and national laws as illegal hate speech. Adhering to the Code of Conduct+’s commitments will be part of the annual independent audit of VLOPs and VLOSEs required by the DSA (Art. 37(1)(b)), but smaller companies are free to sign up to the Code as well.

On November 4, 2024, the European Commission (“Commission”) adopted the implementing regulation on transparency reporting under the Digital Services Act (“DSA”). The implementing regulation is intended to harmonise the format and reporting time periods of the transparency reports required by the DSA.

Transparency reporting is required under Articles 15, 24, and 42 of the DSA.

On April 3, 2024, the UK Information Commissioner’s Office (“ICO”) published its 2024-2025 Children’s code strategy (the “Strategy”), which sets out its priorities for protecting children’s personal information online. The Strategy builds on the Children’s code of practice (“Children’s Code”), which the ICO introduced in 2021 to ensure that all online services that process children’s data are designed in a manner that is safe for children.

A New Orleans magician recently made headlines for using artificial intelligence (AI) to emulate President Biden’s voice without his consent in a misleading robocall to New Hampshire voters. This was not a magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity. As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers are increasingly taking steps to protect the public from the threat of deceptive AI-generated political content.

Media generated by AI to imitate an individual’s voice or likeness present significant challenges for regulators. As deepfakes become increasingly indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders have all called for action to mitigate the threats deepfakes can pose for elections.

On 20 February 2024, the Governments of the UK and Australia co-signed the UK-Australia Online Safety and Security Memorandum of Understanding (“MoU”). The MoU provides a framework for the two countries to jointly deliver concrete and coordinated online safety and security policy initiatives and outcomes to support their citizens, businesses, and economies.

The MoU comes shortly after the UK Information Commissioner’s Office (“ICO”) introduced its guidance on content moderation and data protection (see our previous blog here) to complement the UK’s Online Safety Act 2023, and after the commencement of the Australian online safety codes, which complement the Australian Online Safety Act 2021.

The scope of the MoU is broad, covering a range of policy areas, including: harmful online behaviour; age assurance; safety by design; online platforms; child safety; technology-facilitated gender-based violence; safety technology; online media and digital literacy; user privacy and freedom of expression; online child sexual exploitation and abuse; terrorist and violent extremist content; lawful access to data; encryption; misinformation and disinformation; and the impact of new, emerging and rapidly evolving technologies such as artificial intelligence (“AI”).