Online Safety

On 24 April 2025, Ofcom published a statement on the protection of children online (“Statement”). The Statement includes Ofcom’s final Children’s Risk Assessment Guidance (“Guidance”). Publication of the Guidance triggers the deadline for service providers regulated by the Online Safety Act 2023 (“OSA”) to complete their first “children’s risk assessment” (“CRA”), which falls on 24 July 2025. The Statement also confirms that the draft Protection of Children Codes of Practice for user-to-user and search services (“Codes”) have been laid before Parliament. Subject to completion of the Parliamentary process, providers must comply with the OSA’s “safety duties protecting children” from 25 July 2025.

Who do the Codes and Guidance apply to?

The Codes and Guidance apply to providers of “user-to-user” and “search” services that are “likely to be accessed by children”, as determined by a test set out in the OSA. In-scope providers were required to have completed an assessment, known as a “children’s access assessment”, by 16 April 2025 to determine whether their services satisfy this test.

Continue Reading Ofcom publishes statement on the protection of children online

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”). In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA. More recently, Ofcom has also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.

Continue Reading Ofcom Explains How the UK Online Safety Act Will Apply to Generative AI

The Commission and the European Board for Digital Services have announced the integration of the revised voluntary Code of conduct on countering illegal hate speech online + (“Code of Conduct+”) into the framework of the Digital Services Act (“DSA”). Article 45 of the DSA provides that, where significant systemic risks within the meaning of Article 34(1) emerge and concern several very large online platforms (“VLOPs”) or very large online search engines (“VLOSEs”), the Commission may invite those VLOPs and VLOSEs to participate in drawing up codes of conduct, including commitments to take risk mitigation measures and to report on those measures and their outcomes. (Article 34(1) obliges VLOPs and VLOSEs to identify, analyse, and assess the systemic risks stemming from their services.) The Code of Conduct+ was adopted in this context. VLOPs’ and VLOSEs’ adherence to the Code of Conduct+ may be considered a risk mitigation measure under Article 35 DSA, but participation in and implementation of the Code of Conduct+ “should not in itself presume compliance with [the DSA]” (Recital 104).

The Code of Conduct+, which builds on the Commission’s original Code of Conduct on countering illegal hate speech online, published in 2016, seeks to strengthen how Signatories address content defined by EU and national laws as illegal hate speech. Adherence to the Code of Conduct+’s commitments will form part of the annual independent audit of VLOPs and VLOSEs required by the DSA (Article 37(1)(b)), although smaller companies are also free to sign up to the Code.

Continue Reading Introduction of the Revised Code of Conduct+ and the Digital Services Act

On November 4, 2024, the European Commission (“Commission”) adopted the implementing regulation on transparency reporting under the Digital Services Act (“DSA”). The implementing regulation is intended to harmonise the format and reporting time periods of the transparency reports required by the DSA.

Transparency reporting is required under Articles 15, 24 and 42 of the DSA.

Continue Reading European Commission Adopts Implementing Regulation on DSA Transparency Reporting Obligations

On April 3, 2024, the UK Information Commissioner’s Office (“ICO”) published its 2024-2025 Children’s code strategy (the “Strategy”), which sets out its priorities for protecting children’s personal information online. The Strategy builds on the Children’s code of practice (“Children’s Code”), which the ICO introduced in 2021 to ensure that all online services that process children’s data are designed in a manner that is safe for children.

Continue Reading ICO sets out 2024-2025 priorities to protect children online

A New Orleans magician recently made headlines for using artificial intelligence (AI) to emulate President Biden’s voice without his consent in a misleading robocall to New Hampshire voters. This was not a magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity. As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers are increasingly taking steps to protect the public from the threat of deceptive AI-generated political content.

Media generated by AI to imitate an individual’s voice or likeness presents significant challenges for regulators. As deepfakes become increasingly indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders have all called for action to mitigate the threats deepfakes can pose to elections.

Continue Reading As States Lead Efforts to Address Deepfakes in Political Ads, Federal Lawmakers Seek Nationwide Policies

On 20 February 2024, the Governments of the UK and Australia co-signed the UK-Australia Online Safety and Security Memorandum of Understanding (“MoU”). The MoU provides a framework for the two countries to jointly deliver concrete and coordinated online safety and security policy initiatives and outcomes to support their citizens, businesses and economies.

The MoU comes shortly after the UK Information Commissioner’s Office (“ICO”) introduced its guidance on content moderation and data protection (see our previous blog here) to complement the UK’s Online Safety Act 2023, and shortly after the commencement of the Australian online safety codes, which complement the Australian Online Safety Act 2021.

The scope of the MoU is broad, covering a range of policy areas, including: harmful online behaviour; age assurance; safety by design; online platforms; child safety; technology-facilitated gender-based violence; safety technology; online media and digital literacy; user privacy and freedom of expression; online child sexual exploitation and abuse; terrorist and violent extremist content; lawful access to data; encryption; misinformation and disinformation; and the impact of new, emerging and rapidly evolving technologies such as artificial intelligence (“AI”).

Continue Reading UK and Australia Agree Enhanced Cross-Border Cooperation in Online Safety and Security

On February 16, 2024, the UK Information Commissioner’s Office (“ICO”) introduced specific guidance on content moderation and data protection. The guidance complements the Online Safety Act (“OSA”), the UK’s legislation designed to ensure digital platforms mitigate illegal and harmful content. The ICO underlines that if an organisation carries out content moderation that involves personal information, “[it] must comply with data protection law.” The guidance highlights particular elements of data protection compliance that organisations should keep in mind, including establishing a legal basis for, and being transparent about, content moderation, and complying with rules on automated decision-making. We summarize the key points below.

Continue Reading ICO Releases Guidance on Content Moderation and Data Protection

On 26 October 2023, the UK’s Online Safety Bill received Royal Assent, becoming the Online Safety Act (“OSA”). The OSA imposes various obligations on tech companies to prevent the uploading of, and rapidly remove, illegal user content—such as terrorist content, revenge pornography, and child sexual exploitation material—from their services, and also to take steps to reduce the risk that users will encounter such material (please see our previous blog post on the Online Safety Bill).

Continue Reading UK Online Safety Bill Receives Royal Assent

On 9 October 2023, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) and Committee on Legal Affairs (JURI) agreed revised wording to amend the European Commission’s (the “EC”) proposed new Product Liability Directive (the “Directive”). The text was approved by 33 votes in favour to 2 against. If adopted, the Directive will replace the existing, almost 40-year-old Directive 85/374/EEC on Liability for Defective Products, which imposes a form of strict liability on product manufacturers for harm caused by their defective products.

Continue Reading EU Legislative Update on the New Product Liability Directive