European Commission

On 24 June 2025, the European Commission published its “roadmap” for ensuring lawful and effective access to data by law enforcement (“Roadmap”). The Roadmap forms a key part of the Commission’s internal security strategy, which was announced in April, and follows on from the November 2024 recommendations of the High-Level Group on Access to Data for Effective Law Enforcement.

Of most immediate relevance to electronic communications service (“ECS”) providers, the Commission intends to propose new data retention requirements, is considering changes to better enable cross-border live interception of communications, and will support the development of tools enabling law enforcement authorities (“LEAs”) to access encrypted data. We describe these proposals, and other elements of the Roadmap, in more detail below.

Continue Reading European Commission publishes its plan to enable more effective law enforcement access to data

The European Commission has opened a consultation to gather feedback on forthcoming guidelines “on implementing the AI Act’s rules on high-risk AI systems”. (For more on the definition of a high-risk AI system, see our blog post here.) The consultation is open until July 18, 2025, following which the Commission will publish a summary of the consultation results through the AI Office.

For context, the AI Act contemplates two categories of “high-risk” AI systems:

  1. Products—or safety components of products—covered by the EU product safety legislation identified in Annex I, where the product or safety component is subject to a third-party conformity assessment (Art. 6(1)); and
  2. Certain systems that fall within eight categories of use cases identified in Annex III, namely, (1) biometrics; (2) critical infrastructure; (3) education and vocational training; (4) employment, workers’ management and access to self-employment; (5) access to and enjoyment of essential private services and essential public services and benefits; (6) law enforcement; (7) migration, asylum and border control management; and (8) administration of justice and democratic processes (Art. 6(2)). Only certain use cases within each category are considered high-risk—not the entire category itself. In addition, with one exception, the AI systems must be “intended to be used” for the particular use case, e.g., “AI systems intended to be used for emotion recognition”—a use case within biometrics (category one) (id., emphasis added).

Continue Reading The European Commission opens public consultation on high-risk AI systems

EU lawmakers are reportedly considering a delay in the enforcement of certain provisions of the EU Artificial Intelligence Act (AI Act). While the AI Act formally entered into force on 1 August 2024, its obligations apply on a rolling basis. Requirements related to AI literacy and the prohibition of specific AI practices have been applicable since 2 February 2025. Additional obligations are scheduled to come into effect on 2 August 2025 (general-purpose AI (GPAI) model obligations), 2 August 2026 (transparency obligations and obligations on Annex III high-risk AI systems), and 2 August 2027 (obligations on Annex I high-risk AI systems). The timeline and certainty of regulatory enforcement of these future obligations now appear uncertain.

Continue Reading European Commission hints at delaying the AI Act

On 28 June 2025, the European Accessibility Act (“EAA”)—a 2019 directive—will begin applying to covered products and services. The EAA imposes various obligations on, among others, technology and online service providers, requiring them to ensure that the products and services they offer in the EU are accessible to consumers with disabilities. According to its recitals, the goal of the EAA is to increase the availability of accessible products and services in the EU and improve the accessibility of information provided to consumers about those products and services.

Continue Reading European Accessibility Act: June 2025 deadline has arrived

In a new post on the Inside Privacy blog, our colleagues discuss key consumer protection considerations for companies deploying AI chatbots in the EU market.

Continue Reading Digital Fairness Act Series: Topic 2 – Transparency and Disclosure Obligations for AI Chatbots in Consumer Interactions

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blogs on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.

Continue Reading European Commission Guidelines on the Definition of an “AI System”

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on prohibited AI practices (“Guidelines”). Please see our blogs on the guidelines on the definition of AI systems here, and our blog on AI literacy requirements under the AI Act here.

Continue Reading European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act

On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”). The Q&A builds upon the Commission’s guidance on AI literacy provided in its webinar in February 2025, covered in our earlier blog here. Among other things, the Commission clarifies that the AI literacy obligation started to apply from February 2, 2025, but that the national market surveillance authorities tasked with supervising and enforcing the obligation will start doing so from August 3, 2026 onwards.

Continue Reading European Commission Publishes Q&A on AI Literacy

The European Commission first published a proposal for an AI Liability Directive (“AILD”) in September 2022 as part of a broader set of initiatives, including proposals for a new Product Liability Directive (“new PLD”) and the EU AI Act (see our blog posts here, here and here).

The AILD was intended to introduce uniform rules for certain aspects of non-contractual civil claims relating to AI, by introducing disclosure requirements and rebuttable presumptions.

However, unlike the new PLD and the EU AI Act, both of which have been adopted and entered into force, the AILD has stalled in the legislative process amid resistance.

Continue Reading The Future of the AI Liability Directive

The Commission and the European Board for Digital Services have announced the integration of the revised voluntary Code of conduct on countering illegal hate speech online + (“Code of Conduct+”) into the framework of the Digital Services Act (“DSA”). Article 45 of the DSA states that, where significant systemic risks emerge under Article 34(1) (concerning the obligation on very large online platforms (“VLOPs”) and very large online search engines (“VLOSEs”) to identify, analyse, and assess systemic risks), and concern several VLOPs or VLOSEs, the Commission may invite VLOPs and VLOSEs to participate in the drawing up of codes of conduct, including commitments to take risk mitigation measures and to report on those measures and their outcomes. The Code of Conduct+ was adopted in this context. VLOPs and VLOSEs’ adherence to the Code of Conduct+ may be considered as a risk mitigation measure under Article 35 DSA, but participation in and implementation of the Code of Conduct+ “should not in itself presume compliance with [the DSA]” (Recital 104).

The Code of Conduct+—which builds on the Commission’s original Code of Conduct on countering illegal hate speech online, published in 2016—seeks to strengthen how Signatories address content defined by EU and national laws as illegal hate speech. Adherence to the Code of Conduct+’s commitments will form part of the annual independent audit of VLOPs and VLOSEs required by the DSA (Art. 37(1)(b)), but smaller companies are free to sign up to the Code as well.

Continue Reading Introduction of the Revised Code of Conduct+ and the Digital Services Act