Innovation opportunities – What innovation might the ICO want to see in agentic AI?

We would like to identify, encourage and support opportunities for innovation within agentic AI that support data protection and information rights.

Taking advantage of these opportunities may require related developments in the capabilities and reliability of agentic AI, as well as appropriate testing and risk assessment. We see this as part of moving us closer to the high-capability scenarios described earlier in the report. Each of these contexts requires thinking about how to develop innovations safely and responsibly.

We will address innovation opportunities proactively as agentic AI matures and our role in regulating it develops. As part of this, we will:

  • invite developers to use our Innovation Advice service. This is a free, fast and direct service for organisations doing new or innovative things with personal information. It can give advice to help solve the data protection issues holding up the progress of new products, services or business models; and
  • continue to invite innovators to work with our Regulatory Sandbox to engineer data protection into these technologies from the outset, focusing on the most innovative propositions.

Additionally, we have identified several areas where innovation in agentic AI could contribute towards privacy-positive outcomes. These are discussed below.

Data protection compliant agents

Agent governance systems typically focus on aligning agent norms with ‘human values’. Some of these systems draw on legal, moral or ethical bases. There are fundamental questions in responsible AI about:

  • which values get built into AI systems;
  • how these should change over time or across jurisdictions; and
  • how they are assessed.

Agentic systems must never take actions that break the law. Data protection compliance is a legal requirement and a minimum threshold that all developers and deployers must meet.

Part of the novelty of agentic systems is the use of agentic AI for planning tasks. As well as agentic systems being legally compliant, we are interested in approaches to:

  • ensuring that they generate outputs (eg plans or proposed solutions to tasks) that comply with data protection law; and
  • embedding legal obligations (eg data protection by design and by default) in agentic AI at a fundamental level.

Such approaches to responsible data protection compliance could be a market differentiator for AI agents and agentic systems that can demonstrate compliance.

Agentic controls

The governance of agentic systems will require a range of tools to manage intellectual property, cybersecurity or public relations. Such tools will likely also play a role in supporting data protection. They could include monitoring, auditing, explainability, permission structures, authentication and data access protocols. Valuable areas of research would include:

  • the extent and types of personal information storage and processing needed to deploy and govern agentic systems;
  • methods for privacy-enhancing agentic governance;
  • privacy-enhancing communication between AI agents; and
  • mechanisms for redress and restitution, particularly in agent ecosystems – ways to understand how something has gone wrong and how to put it right.

Layered or tiered protections, which cascade from fundamental protections implemented by suppliers, could incorporate principles such as privacy by design and default, or security-first design. Organisations could build organisation-specific controls on top of these by setting out which information sources the agents could access, and with what permissions. Protections implemented by individual users could give them granular control over what information is shared with the agentic system.
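
To illustrate how layered protections might compose, here is a minimal sketch in which a request to access an information source must pass a cascade of tiers: a supplier baseline, an organisation policy and user-level consent. All names, tiers and rules are hypothetical assumptions for illustration, not a specification.

```python
from dataclasses import dataclass

# Hypothetical illustration of layered protections: an access request must
# pass every tier, cascading from supplier defaults down to user consent.

@dataclass
class AccessRequest:
    source: str   # eg "calendar"
    purpose: str  # eg "scheduling"

@dataclass
class PolicyTier:
    name: str
    allowed_sources: set

    def permits(self, request: AccessRequest) -> bool:
        return request.source in self.allowed_sources

def check_access(request: AccessRequest, tiers: list) -> bool:
    # Most restrictive wins: every tier must permit the request.
    for tier in tiers:
        if not tier.permits(request):
            print(f"denied at tier: {tier.name}")
            return False
    return True

tiers = [
    PolicyTier("supplier default", {"calendar", "email", "documents"}),
    PolicyTier("organisation policy", {"calendar", "documents"}),
    PolicyTier("user consent", {"calendar"}),
]

print(check_access(AccessRequest("calendar", "scheduling"), tiers))  # True
print(check_access(AccessRequest("email", "summarising"), tiers))    # False
```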

Privacy and personal information management agents

Some people find actively managing their personal information and protecting their privacy difficult. It can be complicated, made harder by dark patterns, and can go wrong in ways that are not easily apparent to the person involved. Our work on damaging website design practices and research on cookies show that this often leads to outcomes that diverge from people’s desired privacy choices.

We are keen to explore the possibility of developing AI agents that take on some of this burden, for example, by interpreting website privacy policies or cookie statements and comparing them to their users’ preferences. In survey research we commissioned, two in five UK adults admitted to never reading cookie policies or settings. In this context, an agent designed to be a more vigilant guardian of personal information than people are themselves would be an impactful innovation.
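
As a sketch of this idea, the snippet below assumes a hypothetical upstream step has already parsed a cookie banner into purpose categories, and simply compares those categories against stored user preferences, refusing anything the user has not opted into. The category names and decision rule are illustrative assumptions.

```python
# Illustrative user preferences; category names are made up for this sketch.
USER_PREFERENCES = {
    "strictly_necessary": True,   # required for the site to function
    "analytics": False,
    "advertising": False,
    "personalisation": False,
}

def decide_consent(requested_purposes: list) -> dict:
    """Return a consent decision per requested purpose, defaulting to
    refusing anything the user has not explicitly opted into."""
    return {p: USER_PREFERENCES.get(p, False) for p in requested_purposes}

banner_purposes = ["strictly_necessary", "analytics", "advertising"]
print(decide_consent(banner_purposes))
# {'strictly_necessary': True, 'analytics': False, 'advertising': False}
```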

Local agents and trusted computing

Agentic AI may be part of the solution to keeping transactions and tasks confidential. For example, a correctly configured agent might act as a proxy for a real person who wants to keep their identity confidential.

We anticipate innovation opportunities for agents to conduct research tasks where information is held locally (eg on a device) and report back their findings (perhaps a yes or no decision for an application) to their user. For example, a person’s agent may be able to process several contract offers and pick an appropriate one, without providing personal information to the potential providers. These applications would require the combination of agentic approaches with privacy-enhancing technologies.
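
A minimal sketch of the contract-offer example follows, assuming offers arrive as simple records and the person’s circumstances stay on the device; only the chosen offer’s identifier is reported back. Field names and the selection rule are illustrative, and a real deployment would combine this with privacy-enhancing technologies for any outbound communication.

```python
from typing import Optional

# Personal circumstances held locally; this never leaves the device.
LOCAL_PROFILE = {"monthly_usage_gb": 40, "budget": 25.0}

offers = [
    {"id": "A", "price": 20.0, "data_gb": 30},
    {"id": "B", "price": 24.0, "data_gb": 50},
    {"id": "C", "price": 30.0, "data_gb": 100},
]

def pick_offer(offers: list) -> Optional[str]:
    """Evaluate offers against the local profile; only the chosen id is
    reported back, not the information used to make the decision."""
    eligible = [o for o in offers
                if o["price"] <= LOCAL_PROFILE["budget"]
                and o["data_gb"] >= LOCAL_PROFILE["monthly_usage_gb"]]
    if not eligible:
        return None
    return min(eligible, key=lambda o: o["price"])["id"]

print(pick_offer(offers))  # "B" is the only output shared externally
```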

Freedom of information and data protection agents

Compiling responses to freedom of information (FOI) requests can be time-consuming and complex for organisations. As agents develop, they may become more skilled at processing large amounts of information in public records. This could allow organisations to identify relevant information more quickly. Agents could also:

  • help triage incoming requests by categorising them and flagging them to relevant parts of the organisation (a simple routing sketch follows this list); and
  • monitor the organisation’s response times and help it comply with deadlines.
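
The routing sketch below illustrates the triage idea with keyword matching; the categories and keywords are made up, and a real system would need a far more capable classifier plus human review of the routing.

```python
# Illustrative keyword-based routing for incoming FOI requests.
ROUTING_RULES = {
    "finance": ["budget", "expenditure", "procurement"],
    "hr": ["staff", "recruitment", "salary"],
    "planning": ["development", "planning application"],
}

def triage(request_text: str) -> str:
    """Route a request to the first matching team, else flag for manual triage."""
    text = request_text.lower()
    for team, keywords in ROUTING_RULES.items():
        if any(k in text for k in keywords):
            return team
    return "general"

print(triage("Please provide all procurement records for 2024"))  # finance
```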

Agents could play a similar role in responding to requests related to data protection rights. They could help organisations search for and identify relevant information, supporting responses to requests from people seeking copies of their information, or requesting correction or deletion. Developers of such technologies would need to consider how to mitigate hallucinations. Deploying organisations would need processes to ensure that they do not provide people with hallucinated or incorrect information.

Benchmarks and evaluations for agents

Benchmarks are standardised tests, built around defined evaluation criteria, that measure or compare the performance of an AI system across specific tasks or capabilities. AI developers use benchmarks during model development to refine and compare their models. Developers also compete to score highly on key public benchmarks.

Third parties can use benchmarks when selecting between different AI tools. Benchmarks can measure an AI system’s ability to solve maths problems, orchestrate complex tasks or recognise images. For AI agents, benchmarks involve more complex tasks that assess reasoning, the ability to handle different forms of information and the ability to use tools. Agent benchmarks are often designed to simulate real-world tasks.
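
A minimal benchmark-harness sketch follows: run an agent over a fixed task set and report the fraction it solves. The task format, toy agent and exact-match scoring rule are illustrative assumptions rather than any established benchmark.

```python
from typing import Callable

# Illustrative task set; real agent benchmarks use far richer tasks.
TASKS = [
    {"prompt": "2 + 2", "expected": "4"},
    {"prompt": "capital of France", "expected": "Paris"},
]

def run_benchmark(agent: Callable[[str], str], tasks: list) -> float:
    """Score an agent as the proportion of tasks answered exactly correctly."""
    correct = sum(agent(t["prompt"]) == t["expected"] for t in tasks)
    return correct / len(tasks)

def toy_agent(prompt: str) -> str:
    # A stand-in agent that only knows one answer.
    return {"2 + 2": "4"}.get(prompt, "unknown")

print(run_benchmark(toy_agent, TASKS))  # 0.5
```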

We would welcome innovation in methods for the practical evaluation of the compliance of agentic AI systems with data protection legislation. This would ideally include both:

  • compliance of the agentic system itself (eg how the organisation set it up, what training information it uses or what personal information it processes); and
  • compatibility of any actions that the system advises or takes with data protection law (a simple pre-execution check of this kind is sketched below).
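
As a sketch of the second point, the snippet below checks an agent’s proposed action against simple illustrative rules before execution. The action schema and rules are assumptions for illustration and do not represent what data protection law actually requires.

```python
def violates_rules(action: dict) -> list:
    """Return a list of problems with a proposed action; empty means it may
    proceed. Rules here are toy examples, not legal requirements."""
    problems = []
    if action.get("shares_personal_data") and not action.get("lawful_basis"):
        problems.append("personal data shared without a recorded lawful basis")
    if action.get("retention_days", 0) > action.get("retention_limit_days", 365):
        problems.append("retention exceeds the configured limit")
    return problems

proposed = {"shares_personal_data": True, "lawful_basis": None}
print(violates_rules(proposed))
# ['personal data shared without a recorded lawful basis']
```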

More broadly, research on which tasks agents can and cannot perform well would enable organisations to make informed choices and risk assessments when deploying agents.