
Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his "level of expertise is second to none, but it's also equally paired with a keen understanding of our business and direction." It was noted that "he is very good at calibrating and helping to gauge risk."

Dan is qualified to practice law in the United States, the United Kingdom, Ireland, and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP's European Advisory Board, Privacy International, and the European security agency, ENISA.

On 8 October 2025, the European Commission published its Apply AI Strategy (the “Strategy”), a comprehensive policy framework aimed at accelerating the adoption and integration of artificial intelligence (“AI”) across strategic industrial sectors and the public sector in the EU.

The Strategy is structured around three pillars: (1) introducing sectoral flagships to boost AI use in key industrial sectors; (2) addressing cross-cutting challenges; and (3) establishing a single governance mechanism to provide sectoral stakeholders a way to participate in AI policymaking.

The Apply AI Strategy is accompanied by the AI in Science Strategy, and it will be complemented by the Data Union Strategy (which is anticipated later this year).

Continue Reading European Commission Publishes Apply AI Strategy to Accelerate Sectoral AI Adoption Across the EU

On September 23, 2025, the Italian law on artificial intelligence (hereinafter, the “Italian AI Law”) was signed into law after receiving final approval from the Italian Senate on September 17, 2025.

The law consists of varied provisions, including general principles and targeted sectoral rules in certain areas not covered by the EU AI Act.  The Italian AI Law will enter into force on October 10, 2025.

We provide below an overview of key aspects of the final text of the Italian AI Law.  For full detail, please see our previous blogpost here.

Continue Reading Italy Adopts Artificial Intelligence Law

On June 26, 2025, the European Parliament’s Committee on Employment and Social Affairs published a draft report (“Draft Report”) recommending that the Commission initiate the legislative process for an EU Directive on algorithmic management in the workplace.  The Draft Report defines algorithmic management as the use of automated systems, including those involving artificial intelligence, to monitor, assess, or make decisions affecting workers and solo self-employed persons.

This Draft Report follows a Commission study published in March 2025 (“Commission Study”), which found that while existing EU legislation, such as the GDPR, addresses some of the risks algorithmic management poses to workers, others remain unaddressed.  The Commission Study also notes, as a concern, that the AI Act does not establish specific rights for workers in the context of AI use.

The Draft Report annexes the proposed text for a new Directive on algorithmic management in the workplace (“Proposed Directive”).  The Draft Report has not yet been endorsed by the European Parliament.

Continue Reading European Parliament Committee Recommends Commission to Propose EU Directive on Algorithmic Management

On 14 July 2025, the European Commission published its final guidelines on the protection of minors under the Digital Services Act (“DSA”) (the “Guidelines”). The Guidelines are intended to provide guidance to providers of online platforms that are “accessible to minors” on meeting their obligations to “put in place appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service” (DSA, Art. 28(1)).

The European Commission published a draft version of the guidelines for consultation on 13 May 2025 (“Draft Guidelines”) (see our blog post here). The final Guidelines include some amendments to the Draft Guidelines on the basis of the feedback received during consultation, clarifying and building out further the recommended measures.

Although the Guidelines are non-binding, the Commission has made clear that it intends to use the Guidelines as a “significant and meaningful” benchmark when assessing in-scope providers’ compliance with Article 28(1) DSA.

Continue Reading European Commission Makes New Announcements on the Protection of Minors Under the Digital Services Act

There is an ongoing debate in Brussels about the circumstances under which AI-based safety components integrated into radio equipment are subject to the requirements for high-risk AI systems of the EU Artificial Intelligence Act 2024/1689 (the “AI Act”). The debate is particularly relevant because, if AI-based safety components are considered high-risk under the AI Act, they will be subject to a comprehensive set of regulatory requirements under the AI Act as of August 2, 2027. These requirements include risk management, data quality measures, transparency towards users, human oversight, as well as obligations relating to accuracy, robustness, and cybersecurity.

The discussion affects devices like smartphones with AI-driven emergency call features, smart home safety systems, smart home appliances and drones using AI for obstacle avoidance and emergency landing. In effect, many, if not all, of the AI-based safety components of internet-connected radio equipment could be subject to the AI Act’s requirements for high-risk AI systems.

Below we briefly outline the framework of the current debate.

Continue Reading When is a Safety Component of Radio Equipment a High-Risk AI System Under the EU Artificial Intelligence Act?

On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”).  The Q&A builds upon the Commission’s guidance on AI literacy provided in its webinar in February 2025, covered in our earlier blog here.  Among other things, the Commission clarifies that the AI literacy obligation started to apply from February 2, 2025, but that the national market surveillance authorities tasked with supervising and enforcing the obligation will start doing so from August 3, 2026 onwards.

Continue Reading European Commission Publishes Q&A on AI Literacy

The “market” for AI contracting terms continues to evolve, and whilst there is no standardised approach (much will depend on the use cases, technical features, and commercial terms), a number of attempts have been made to put forward contracting models.  One of the latest comes from the EU’s Community of Practice on Public Procurement of AI, which published an updated version of its non-binding EU AI Model Contractual Clauses (“MCC-AI”) on March 5, 2025.  The MCC-AI are template contractual clauses intended for use by public organizations that procure AI systems developed by external suppliers.  An initial draft was published in September 2023.  This latest version has been updated to align with the EU AI Act, which entered into force on August 1, 2024, but whose provisions apply gradually in a staggered manner.  Two templates are available: one for public procurement of “high-risk” AI systems, and another for non-high-risk AI systems.  A commentary providing guidance on how to use the MCC-AI is also available.

Continue Reading EU’s Community of Practice Publishes Updated AI Model Contractual Clauses

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.

Continue Reading Ofcom Explains How the UK Online Safety Act Will Apply to Generative AI

On February 20, 2025, the European Commission’s AI Office held a webinar explaining the AI literacy obligation under Article 4 of the EU’s AI Act.  This obligation started to apply on February 2, 2025.  At this webinar, the Commission highlighted the recently published repository of AI literacy practices.  This repository compiles the practices that some AI Pact companies have adopted to ensure a sufficient level of AI literacy in their workforce.

Continue Reading European Commission Provides Guidance on AI Literacy Requirement under the EU AI Act

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure, and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).*  Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis.  This reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase the efforts they have made towards ensuring responsible AI practices, in a way that is standardized and comparable with other companies.

Continue Reading OECD Launches Voluntary Reporting Framework on AI Risk Management Practices