AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?
In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it offers meaningful insight into the ICO’s emerging view of agentic AI and how it may apply data protection obligations in this context, insight that may foreshadow the regulator’s direction of travel.
The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.