ICO tech futures: Agentic AI
Foreword
In this Tech Futures report on agentic AI, we set out our understanding of the emerging technology, including its potential uses and expected technical developments. We share our early thoughts on the data protection implications that organisations will have to consider as they explore deploying agentic AI, including both risks and opportunities. We also set out four possible scenarios to explore the uncertainty around how organisations might adopt agentic AI and how its capabilities might develop over the next two to five years.
Executive summary
Agentic artificial intelligence (AI) is evolving at pace, attracting intense scrutiny from innovators, technology adopters and regulators worldwide. As organisations consider deploying agentic AI, understanding its capabilities and the associated risks is essential.
Agentic AI combines the capabilities of generative AI with additional tools and new ways of interacting with the world. This increases the ability of AI systems to work with contextual information, operate using natural human language and automate more open-ended tasks. Agentic AI systems are being developed for use in research, coding, planning and transactions. Their potential applications span commerce, government, the workplace, cybersecurity, medicine and the consumer space. Many believe that agentic capabilities can form the foundation for powerful personal assistants.
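To make this concrete, the following is a minimal, illustrative sketch of an agent loop: a generative model proposes tool calls towards an open-ended goal, a runtime executes them and the results are fed back as context for the next step. The `call_model` stub and the calendar tool are hypothetical placeholders for illustration, not any specific product's design.

```python
# Minimal illustrative agent loop: a generative model proposes tool calls,
# the runtime executes them, and results are appended to the working context.

def call_model(goal: str, history: list[dict]) -> dict:
    """Placeholder for a generative model call. A real system would send the
    goal and history to a model and parse its proposed next action."""
    if not history:
        return {"tool": "search_calendar", "args": {"query": "free slots"}}
    return {"tool": "finish", "args": {"summary": "Meeting slot proposed."}}

# Tools give the model "new ways of interacting with the world".
TOOLS = {
    "search_calendar": lambda query: ["Tue 10:00", "Wed 14:00"],
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[dict] = []
    for _ in range(max_steps):  # a bounded loop is itself a basic control
        action = call_model(goal, history)
        if action["tool"] == "finish":
            return action["args"]["summary"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"action": action, "result": result})
    return "Stopped: step limit reached."

print(run_agent("Find a time for a 30-minute meeting next week."))
```

In a real deployment, the model, the tools made available and the stopping conditions are all design choices, and each has data protection consequences.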
While agentic AI offers some new technological capabilities, the technology is still at an early stage, with many use cases unproven or in development. At the ICO, we are building a well-informed evidence base about:
- where the technology is now; and
- how to exercise caution about the proven abilities of agentic AI, while identifying and managing data protection issues and risks and supporting privacy-led innovation.
Even as agentic AI increases the potential for automation, organisations remain responsible for the data protection compliance of the agentic AI they develop, deploy or integrate into their systems and processes.
Agentic AI shares many issues with generative AI, which we have already explored in our consultation series on generative AI. Data protection risks that are novel to agentic AI include:
- issues around determining controller and processor responsibilities through the agentic AI supply chain;
- rapid automation of increasingly complex tasks, resulting in a greater volume of automated decision-making;
- purposes for agentic processing of personal information being set too broadly to allow for open-ended tasks and general-purpose agents;
- agentic systems processing personal information beyond what is necessary to achieve instructions or aims;
- potential unintended use or inference of special category data;
- increased complexity impacting transparency and the ease with which people can exercise their information rights;
- new threats to cyber security resulting from the nature of agentic AI; and
- concentration of personal information to facilitate personal assistant agents.
One of our key findings from this initial work is that the specific design and architecture of agentic systems affect how data protection law applies and how people exercise their data protection rights. Choices such as which data and tools a system can access, and which governance and control measures are put in place, really matter, as the illustrative sketch after the list below suggests.
Poorly implemented agentic systems will increase the risks of data protection harms. For example, this could include systems that:
- have no clear purposes;
- are connected to databases not needed for their tasks; or
- have no measures in place to secure access, monitor or stop activity, or control the further sharing of information.
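By way of illustration only, the sketch below shows how some of these design choices might be expressed as explicit, checkable controls: a documented purpose, allow-lists for tools and data sources, a hard step limit and an action log. The names used (`AgentPolicy`, `authorise`) are hypothetical and do not represent a standard or recommended schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    purpose: str                    # documented, specific purpose for processing
    allowed_tools: set[str]         # only the tools needed for the task
    allowed_data_sources: set[str]  # no connections to databases the task does not need
    max_steps: int = 20             # hard stop on runaway activity
    log_actions: bool = True        # audit trail to support oversight

@dataclass
class ActionLog:
    entries: list = field(default_factory=list)

def authorise(policy: AgentPolicy, tool: str, data_source: str,
              step: int, log: ActionLog) -> bool:
    """Check a proposed agent action against the policy before it runs."""
    allowed = (
        tool in policy.allowed_tools
        and data_source in policy.allowed_data_sources
        and step < policy.max_steps
    )
    if policy.log_actions:
        log.entries.append({"step": step, "tool": tool,
                            "data_source": data_source, "allowed": allowed})
    return allowed

policy = AgentPolicy(
    purpose="Schedule internal meetings for the HR team",
    allowed_tools={"search_calendar", "send_invite"},
    allowed_data_sources={"staff_calendar"},
)
log = ActionLog()
print(authorise(policy, "search_calendar", "staff_calendar", 0, log))  # True
print(authorise(policy, "query_payroll", "payroll_db", 1, log))        # False
```

A system with no clear purpose, broad database access and no such checks would exhibit exactly the weaknesses listed above.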
The importance of design and architecture also means that there are good opportunities for privacy by design and privacy-friendly innovation in agentic AI, and organisations should use them for responsible deployment. We are already seeing some features and tools intended to address privacy issues.
We have identified innovation opportunities with agentic AI that have the potential to support data protection and information rights and contribute to privacy-positive outcomes. Potential areas include:
- data protection compliant agents;
- agentic controls;
- privacy management agents;
- information governance agents; and
- ways to benchmark and evaluate agentic systems.
Due to the pace at which agentic AI is developing and the speed at which developers are experimenting, we are trying new approaches with this report. One is using scenarios of four different potential futures to explore the uncertainty about how agentic AI might be adopted and how its capabilities might develop over the next two to five years.
The ICO’s role
Our aim at the Information Commissioner’s Office (ICO) is to ensure that innovation in agentic AI develops in ways that protect people’s information rights, while providing clarity and support for organisations. Our next steps on agentic AI include the following:
- Hosting workshops with industry to gather further information on agentic AI, including on agentic capabilities and adoption, and how industry is mitigating data protection and privacy risks.
- Updating guidance on automated decision-making and profiling, in light of the Data (Use and Access) Act, starting with public consultations in 2026.
- Working with partner regulators through the Digital Regulation Cooperation Forum (DRCF) to understand the cross-regulatory implications of agentic AI and invite innovators to participate in the Thematic Innovation Hub on agentic AI.
- Continuing our work with international partners through the G7 Data Protection Authorities Emerging Technologies Working Group.
- Inviting stakeholders working on agentic AI applications to access our innovation support services. We encourage organisations developing innovative products and services that use personal information and agentic AI in the public interest to explore our Regulatory Sandbox.
We would like to encourage and support data protection–focused opportunities in agentic AI. We will address innovation opportunities proactively as agentic AI matures and our role in regulating it develops.
We will keep our approach under review as technologies, markets and risks evolve.