Data Privacy

On March 2, 2026, the UK Department for Science, Innovation and Technology (“DSIT”) launched its consultation, titled “Growing up in the online world: a national conversation”. The consultation is open until 26 May 2026, after which the government will publish a summary of responses and its proposed approach. DSIT has indicated that it intends to move quickly on the consultation’s findings, drawing on newly granted powers that allow for accelerated implementation of online safety measures.

The consultation seeks views on a wide range of potential measures to strengthen children’s safety and wellbeing online, including more robust age‑assurance mechanisms, a statutory minimum age for social media, raising the UK’s age of digital consent, restrictions on certain features (such as livestreaming and disappearing messages), and new obligations for AI chatbots and generative‑AI services.

DSIT’s proposals could significantly expand regulatory expectations beyond the Online Safety Act 2023 (“OSA”)—including potential age‑based access limits (including differing safeguards as between teens and younger children), feature‑level restrictions, and enhanced duties for AI‑enabled services. Early engagement will be important to ensure that the government takes account of the views of affected service providers and understands the operational and technical implications of the measures proposed.

Continue Reading UK Government Launches Consultation on Children’s Online Experiences, Including New Obligations for AI

AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?

In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.

The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.

Continue Reading ICO Shares Early Views on Agentic AI & Data Protection

In a new post on Inside Privacy, our colleagues discuss the California Attorney General’s announcement of a $530,000 settlement with Sling TV over alleged violations of the California Consumer Privacy Act (CCPA) and Unfair Competition Law. This is the first enforcement action arising from the California Department of Justice’s (“DOJ”) investigative sweep of streaming

Continue Reading California Attorney General Announces $530,000 CCPA Settlement with Sling TV

The California Civil Rights Council and the California Privacy Protection Agency have recently passed regulations that impose requirements on employers who use “automated-decision systems” or “automated decisionmaking technology,” respectively, in employment decisions or certain HR processes. On the legislative side, the California Legislature passed SB 7, which would impose additional obligations on employers who

Continue Reading Navigating California’s New and Emerging AI Employment Regulations

In a new post on the Covington Inside Privacy blog, our colleagues provide an overview of the Federal Trade Commission’s (“FTC”) $45 million settlement with online lead generator MediaAlpha, Inc. and its subsidiary QuoteLab, LLC (collectively, “MediaAlpha”), resolving allegations that the companies, among other things, tricked consumers into sharing sensitive personal information under the guise

Continue Reading FTC Takes Aim at Online Lead Generator

On June 5, 2025, the UK’s Information Commissioner’s Office (“ICO”) launched its new AI and biometrics strategy. The strategy aims to increase the ICO’s scrutiny of AI and biometric technologies, focusing on three priority situations: where the stakes are high; where there is clear public concern about the technology; and where regulatory clarity can have an immediate impact.

The ICO identified three areas of focus in its strategy:

  1. Transparency and explainability, i.e., when and how the technologies affect people;
  2. Bias and discrimination, particularly where the technologies have been trained on “flawed, incomplete or unrepresentative information”; and
  3. Rights and redress, i.e., making sure that systems are accurate, appropriate safeguards are in place to protect people’s rights, and that there are ways to challenge and correct outcomes that result in harm.

Continue Reading The ICO’s AI and biometrics strategy

On June 22, 2025, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law.  The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado AI Act.  Unlike the

Continue Reading Texas Enacts AI Consumer Protection Law

This year, state lawmakers have introduced over a dozen bills to regulate “surveillance,” “personalized,” or “dynamic” pricing.  Although many of these proposals have failed as 2025 state legislative sessions come to a close, lawmakers in New York, California, and a handful of other states are moving forward with a range of different approaches.  These proposals

Continue Reading State Legislatures Advance Surveillance Pricing Regulations

On June 2, 2025, the Global Cross-Border Privacy Rules (“CBPR”) Forum officially launched the Global CBPR and Privacy Recognition for Processors (“PRP”) certifications.  Building on the existing Asia-Pacific Economic Cooperation (“APEC”) CBPR framework, the Global CBPR and PRP systems aim to extend privacy certifications beyond the APEC region.  They will allow controllers and processors to voluntarily undergo certification for their privacy and data governance measures under a framework that is recognized by many data protection authorities around the world.  The Global CBPR and PRP certifications are also expected to be recognized in multiple jurisdictions as a legitimizing mechanism for cross-border data transfers.

Continue Reading Global CBPR and PRP Certifications Launched: A New International Data Transfer Mechanism