
Data protection and privacy risks


Many of the data protection issues with agentic AI applications are similar to those raised by other types of AI, and in particular, generative AI. In some cases, however, the characteristics and capabilities of agentic AI could exacerbate existing data protection issues or introduce new ones. We are starting to see these risks emerge with current agentic AI systems. We anticipate these risks growing with increasing capability and adoption of agentic AI, unless efforts are made to mitigate them.

Data protection is part of a wider set of considerations when new and emerging technologies are being developed, and how data protection law is applied will affect social and economic outcomes. Whilst our focus is data protection issues, we know that these are not isolated from society and the economy. Potential harms and risks from agentic AI such as the following are out of scope for this report:

  • the economic impact of job losses;
  • environmental impacts;
  • impacts on competition; and
  • collective disempowerment.

However, we will continue to work with stakeholders to understand where data protection touches on these issues.

Human responsibility and controllership

Despite the language of ‘agency’ and ‘agents’, and the hype around agentic AI, agentic AI systems will not be conscious or have intent within the next two to five years. They are not and should not be considered legal entities, even if organisations using agentic AI may seek to blame these systems for errors. In the context of data protection, AI agency does not mean the removal of human, and therefore organisational, responsibility for data processing. Organisations must be clear on the expectations that still apply under data protection legislation. They remain responsible for using personal information in an appropriate fashion.

Today, many agents under development give the organisation or person deploying them control over factors such as:

  • the actions the agent is authorised to take. For example, a person could authorise an agent to automatically make purchases up to a certain financial amount. The agent would need to seek permission to make purchases above that amount. A person could also configure an agent to automatically block illegal purchases; and
  • the information the agent can access. For example, an organisation could configure an agent to only access specific internal information for certain tasks, rather than calling on external tools. Or they could configure an agent to request permission before accessing certain personal information.

Many current agents also have features that allow humans to review what they are doing in real time. Examples include the ability to watch an agent navigate a browser, or a window where the agent describes what it is doing.
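
As an illustration of the kinds of deployer-set controls and review features described above, the sketch below shows a hypothetical spending limit and human-approval gate around an agent’s purchase action, together with a simple activity log a human could review. The class names, limits and functions are illustrative assumptions, not features of any particular agent framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPolicy:
    """Deployer-set limits on what the agent may do without asking."""
    auto_purchase_limit: float = 50.00          # purchases above this need approval
    blocked_categories: set = field(default_factory=lambda: {"illegal"})

@dataclass
class PurchaseRequest:
    item: str
    category: str
    price: float

activity_log: list[dict] = []                   # human-reviewable record of agent actions

def handle_purchase(request: PurchaseRequest, policy: AgentPolicy,
                    ask_human_approval) -> str:
    """Decide whether the agent may act autonomously, must escalate, or must refuse."""
    if request.category in policy.blocked_categories:
        outcome = "refused"
    elif request.price <= policy.auto_purchase_limit:
        outcome = "executed"                    # within the autonomous limit
    elif ask_human_approval(request):
        outcome = "executed_after_approval"     # escalated to the user first
    else:
        outcome = "declined_by_user"

    activity_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": f"purchase {request.item}",
        "price": request.price,
        "outcome": outcome,
    })
    return outcome

# Example: a purchase above the limit is escalated to the user for approval.
print(handle_purchase(PurchaseRequest("train ticket", "travel", 120.0),
                      AgentPolicy(), ask_human_approval=lambda r: True))
```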

However, the future extent of human involvement in agentic AI actions is unclear. As the level of agency, the unpredictability of the environment and the software’s ability to learn increase, it becomes harder to fully specify what a user wants an agent to do. Using LLMs to manage context and assumptions and allow natural language interfaces is part of this advance, taking agentic AI beyond simple automation. LLMs allow for tasks to be described much less specifically than traditional programming approaches, which require programmers to set out steps in detail.

This new development means that an agentic system may misinterpret instructions or do something unexpected. In multi-agent environments, agents may be communicating, collaborating and possibly even colluding. 29

At best, failures result in poor performance of the agentic system, and at worst, the user of the system or third parties may experience loss or harm. This is likely to be exacerbated by the fact that, by design, agentic systems operate with limited human involvement. The intentional or gradual reduction of oversight may mean that these unintended consequences and harms persist for a longer period before being noticed and rectified. To mitigate and minimise these instances, governance systems have been developed and proposed, including measures to preserve privacy.

Increasing agency means that developers and deployers of agentic systems don’t have full control over the behaviour of those systems. But they retain control over whether to deploy those systems or not, and the level of risk they tolerate in doing so.

Some experts argue that we should not develop fully autonomous AI – systems capable of writing and executing their own code beyond predefined constraints – because doing so significantly increases risk. 30 Others say that full autonomy for AI systems is not a desirable goal, precisely because it means a loss of control for the deployer or user. These commentators note that this might be avoidable if the proper controls and constraints are put in place. 31 It will be essential to identify one or more data controllers to ensure these controls are effective.

Governance

Measures to govern agents and agentic systems must be flexible enough to handle changes in priorities, goals, tasks and the environment in which the agent is operating. They must also consider how these systems might develop in future, as capabilities and functionality advance. Proposed governance schemes include the Safer Agentic AI Foundations, from the Agentic AI Safety Community of Practice. This scheme covers an end-to-end overview of securing agentic operations, with a heavy focus on continuous documentation, monitoring and review.

Furthermore, organisations are likely to retain many of these governance responsibilities. Placing the sole responsibility for creating, applying and maintaining these value-derived governance frameworks on the end user would not be universally applicable or suitable. In the case of agents available to the public, it is unlikely that members of the public have the knowledge or skills to:

  • apply or update these frameworks without help; or
  • address any issues, particularly if an agentic system is working on tasks or at speeds or complexities which may be hard to understand.

This highlights the responsibility of suppliers of those systems, both to:

  • employ good governance before the point of sale; and
  • ensure that the systems they provide are suitable for customers and their tasks.

Automated decision-making

Developers and organisations may include automated decision-making (ADM) in agentic AI systems as they seek rapid automation of increasingly complex tasks. Effective ADM implies minimal human involvement in decisions, yet those decisions may have legal or similarly significant effects on the people concerned. Data protection legislation contains specific provisions on solely automated decision-making of this kind.

The legislation also requires people to be informed about important automated decisions made about them. It gives them the right to contest those decisions and ask for a human to intervene in the decision-making.

Organisations using agentic AI to make decisions about people would need to consider:

  • how the decisions impact people;
  • how to clearly communicate the use of automation to the people affected;
  • how to put in place systems that allow people to contest decisions; and
  • how to effectively and meaningfully allow humans to intervene in agentic AI decision-making.

Organisations building or deploying agentic AI must be aware of their obligations under data protection legislation.
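
To illustrate the considerations listed above, the sketch below shows one hypothetical way an organisation might record a solely automated decision, notify the person affected and route a contest to a human reviewer. The structures and names are illustrative assumptions only, not a prescribed way of meeting the ADM provisions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str
    solely_automated: bool            # no meaningful human involvement
    significant_effect: bool          # legal or similarly significant effect
    explanation: str                  # plain-language reasons given to the person
    human_review_outcome: Optional[str] = None

def notify_subject(decision: AutomatedDecision) -> str:
    """Tell the person an automated decision was made, why, and how to contest it."""
    return (f"An automated decision ('{decision.outcome}') was made about you. "
            f"Reason: {decision.explanation}. "
            "You can contest it and ask for a human to review it.")

def contest(decision: AutomatedDecision, reviewer) -> AutomatedDecision:
    """Route a contested decision to a human reviewer who can change the outcome."""
    decision.human_review_outcome = reviewer(decision)
    return decision

# Example: a refused application is contested and overturned on human review.
d = AutomatedDecision("subject-001", "application refused", True, True,
                      "income below the configured threshold")
print(notify_subject(d))
contest(d, reviewer=lambda dec: "application approved after manual review")
print(d.human_review_outcome)
```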

We will provide clarity on our regulatory expectations around agentic AI as part of the development of the code of practice on AI and ADM. We are planning to release an interim update on ADM and Profiling at the beginning of 2026.

Purpose limitation and data minimisation

Purpose limitation

The purpose limitation principle requires organisations to ensure that:

  • they are clear and open about why they are collecting personal information; and
  • what they intend to do with the personal information is in line with people’s reasonable expectations.

This is important because there is a risk that organisations could set purposes too broadly in order to encompass all potential operations.

Organisations must:

  • have a clear purpose for collecting and processing information used by an agentic AI system; and
  • communicate that purpose clearly.

This includes information collected during the operation of the system as well as during the development phase.

As with other types of AI development, agentic AI development and use involve different processing stages. Each stage involves processing different types of personal information for different purposes. Having a specified purpose in each stage allows an organisation to:

  • understand the scope of each processing activity;
  • evaluate its compliance with data protection; and
  • evidence that compliance.

We have published a response to the call for views on generative AI and how the purpose limitation principle should be interpreted within its lifecycle.

Organisations should take appropriate technical and governance measures to ensure agentic AI systems operate appropriately, using information in a way that people can reasonably expect. We explore this further below. Organisations looking to deploy agentic AI and systems should be aware of the guidance that we are producing in this area.

Data minimisation

Under article 5(1)(c) of the UK GDPR, personal information must be:

adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (data minimisation).

While we provide guidance on data minimisation, our stakeholder engagement and research highlighted diverging perspectives in this area. Many emphasised that granular user controls are important. This refers to an organisation or person choosing upfront what information and systems an agent is permitted access to for a specific purpose.

Some stakeholders suggested that people may progressively choose to open access to additional information, once the system has proven it functions correctly. Others favour a more open approach while acknowledging the importance of controls, particularly in specific contexts where agents have access to a wider variety of information upfront. This approach considers that access to context helps an agentic system learn and perform better and deliver more personalised results.

Stakeholders noted that they expect these issues to compound as the technology evolves. Developers may design some agentic AI systems to be open-ended (rather than highly specialised) and flexible in approaching problems. With growing demands to personalise systems, this may create challenges for complying with data minimisation.

Data minimisation is closely related to the purpose limitation principle explored above. Once an organisation using an agentic AI system defines the purpose for using it, it can establish the type and volume of information needed to fulfil that purpose. Data protection law requires organisations to process only the personal information needed to achieve the specified purpose.

Organisations should not give agentic AI systems access to information just because it might be useful in the future. They must have a justifiable reason for the agentic AI to access and use the information.

It is currently unclear what technical measures may be implemented in future to achieve both effective delivery of goals and appropriate processing. However, our stakeholder engagement highlighted that organisations are already considering how they approach data minimisation as agents evolve.

Careful selection of the tools and databases an agentic system has access to will be important. Developers can also design systems with other controls, such as asking a human for permission when an agent needs to access personal information. Other approaches (eg masking of personal information, age verification, system permissions, observability techniques and transparency notices) may support this and are expected as part of good governance practice.

Some stakeholders have noted that lack of access to enough data, systems and tools would significantly limit what agentic systems can do. However, compliant future development of agentic systems will rely on organisations developing and implementing tools to effectively manage access to personal information. This process is equivalent to the principle of least privilege used in many organisations.
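
To make the idea concrete, the sketch below shows one way a deployer could express least-privilege access: each task is mapped to the minimum set of tools and personal-data fields the agent needs, and anything beyond that is refused or escalated for human permission. The policy format, task names and fields are assumptions for illustration only.

```python
# Hypothetical least-privilege policy: for each task, the only tools and
# personal-data fields the agent is allowed to use.
ACCESS_POLICY = {
    "book_meeting": {"tools": {"calendar"}, "fields": {"name", "work_email"}},
    "summarise_ticket": {"tools": {"ticket_db"}, "fields": {"ticket_text"}},
}

def request_access(task: str, tool: str, field: str, ask_human_permission) -> bool:
    """Grant access only if the policy allows it; otherwise escalate to a human."""
    policy = ACCESS_POLICY.get(task)
    if policy and tool in policy["tools"] and field in policy["fields"]:
        return True                                   # within the minimum needed
    return ask_human_permission(task, tool, field)    # anything extra needs consent

# Example: the agent asks for a home address while booking a meeting.
allowed = request_access("book_meeting", "calendar", "home_address",
                         ask_human_permission=lambda *args: False)
print(allowed)  # False: the field is not needed for the task and the human declined
```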

Rapid generation of personal information by agentic AI systems

As well as processing large quantities of personal information, agentic systems may create additional data protection issues through their ability to rapidly infer and create new personal information at scale. In some instances, this may constitute profiling if it evaluates certain personal aspects relating to a person. Furthermore, when agentic AI systems make legal or significant decisions, those decisions may be considered to be ADM. This raises issues explored above and discussed in our recent call for evidence about generative AI.

Organisations will need to effectively and appropriately manage significant and growing quantities of personal information to ensure that they are compliant with data minimisation requirements. This involves:

  • meeting expectations of effective technical and organisational measures;
  • upholding data rights; and
  • complying with data protection requirements (eg storage limitation, security and data minimisation).

Agentic AI systems increase the chance of ‘cascading hallucinations’. Hallucinations occur when an AI agent generates inaccurate but plausible information (potentially including personal information). The agent may share this information through:

  • the tools or databases it has access to; or
  • interactions with other agents.

This may cascade the inaccurate information across multiple locations or through the stages of a decision-making process. In such circumstances, compliance with the accuracy principle of data protection may become harder. It could also significantly raise the potential for serious harm where the system processes sensitive personal information or supports sensitive services.

Special category data and agentic AI

Agentic systems may draw upon or generate sensitive information, such as special category data, in unexpected ways in the pursuit of open-ended goals. While some deployments of agentic AI may use special category data by design (eg the management of healthcare records and treatment decisions), other cases may not be obvious from an initial goal. The organisation is responsible for considering:

  • the purpose of processing;
  • the users’ preferences (where the organisation can tailor preferences to deliver different levels of service); and
  • when this may include the processing of special category data, which involves enhanced requirements under data protection legislation.

It is possible that, even with a purpose that does not involve processing special category data, agentic systems may still use information to infer and use special category data to make a decision. Organisations should assess whether their agentic system has the potential to infer and use special category data in pursuit of its purpose. If it does, organisations should ensure:

  • that they have a valid lawful basis and an article 9 (of the UK GDPR) condition for this processing; and
  • that people are aware of the possibility of inference and use of their special category data.

Alternatively, organisations could consider technical measures to restrict the agentic system's ability to infer and use this type of data.
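
One such technical measure could be a screening step that checks whether an attribute the agent has retrieved or inferred falls into a special category before it is allowed to influence a decision. The category list and function below are deliberately simplified assumptions; a real deployment would need far more robust classification and governance around it.

```python
# Very simplified screen for special category data (article 9 of the UK GDPR).
SPECIAL_CATEGORY_TOPICS = {
    "health", "ethnicity", "religion", "political_opinion",
    "sexual_orientation", "trade_union_membership", "genetic", "biometric",
}

def may_use_attribute(name: str, has_article_9_condition: bool) -> bool:
    """Return True only if the attribute may lawfully inform the agent's decision."""
    if name in SPECIAL_CATEGORY_TOPICS and not has_article_9_condition:
        # Block the inference (or route it to a human) unless a valid
        # article 9 condition and lawful basis are in place.
        return False
    return True

print(may_use_attribute("health", has_article_9_condition=False))    # False
print(may_use_attribute("postcode", has_article_9_condition=False))  # True
```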

Explicit consent as a basis for processing special category data may prove difficult to achieve when deploying agentic AI and systems unless people have a genuine choice – for example, where the agentic system's services can be used appropriately without special category data, with greater personalisation available only if the user chooses it.

This is already an established issue for wider forms of AI such as LLMs. The highly complex data flows of agentic systems are likely to make it harder to meet the expectations necessary for transparency. They also complicate the exercise of rights, such as removing information when a person withdraws their explicit consent.

Organisations should review our guidance on special category data and consent as a basis for processing.

Transparency and explainability

The transparency principle requires organisations that process personal information to be clear, open and honest from the start about:

  • who they are;
  • how and why they use people’s personal information; and
  • the data rights available to these people.

See our guidance on lawfulness, fairness and transparency.

The principle applies even if the organisation has no direct relationship with the people whose information they are collecting, or when they collect the personal information from another source. If organisations are not transparent, there is a risk of people being unaware that their information is being processed. This is called ‘invisible processing’ and could prevent people from being able to exercise their information rights.

The complex information flows involved make transparency particularly relevant to agentic AI, and this will only become more significant as these systems grow in capability. In this ecosystem, information may be:

  • shared at different stages, among different agents; and
  • developed by different organisations, to make different decisions.

As with non-agentic AI (and all processing activities), any lack of transparency and explainability in foundational models or agent actions may result in unintended harms to the people affected.

As the capability of future agentic systems increases, there is a risk that these systems may autonomously generate new ways of using personal information. For example, they may:

  • autonomously process information they need to perform a task, which might be beyond what developers anticipated;
  • process personal information to pursue their human-set objectives in a way that the people the information is about may not reasonably expect;
  • seek out new personal information without a person’s knowledge; or
  • re-purpose information originally collected for a different purpose.

There are already cases of AI agents acting in unexpected ways. 32 These issues could result in a system making decisions in ways an organisation did not foresee at the time of the system’s deployment.

The growing emergence of agent-to-agent communication that isn’t visible to human observers may also significantly impact transparency. This is because it will become increasingly difficult to understand how and where information is processed. It could also impact the appropriate implementation of human intervention and data rights (both explored below).

While the autonomous nature of future agentic systems could pose challenges to organisations for transparency, the organisations’ obligations as data controllers remain the same. To ensure that personal information is processed transparently in agentic systems, organisations must consider their obligations and how they will meet them before they begin processing.

In line with our guidance on data protection impact assessments (DPIAs), if organisations assess that deploying agentic systems will result in a high risk to people’s information, they must carry out a DPIA. Organisations must provide relevant privacy information when they collect personal information directly from people. If they collect the information from other sources, organisations must provide privacy information to people within a reasonable period. See our guidance on ensuring transparency in AI.

Accountability

Linked to the need for transparency is the organisation’s ability to trace the use of personal information and provide accountability. Technical approaches already exist that enable organisations to review what an agent is doing either in real time or retrospectively. Some stakeholders emphasised that existing explainability approaches are still relevant to agentic systems. Accountability-based approaches are expected to evolve. However, it’s not clear whether, in practice, organisations will be able to effectively monitor an increasing number of agents. They may, for example, need to rely on supervisor agents.

Organisations must give further consideration to the dual demands of performance and accountability. For example, one potential way to increase an AI agent’s statistical accuracy is to create multiple, focused agents trained for specific tasks. However, this increased technical complexity (eg multiple opaque tool calls or unplanned execution) could make it much harder to understand:

  • what data the system is using; and
  • how and where it’s making decisions.

This might reduce transparency for people about how, where and why organisations are using their information, which would limit their ability to exercise their data rights.

Accuracy

Article 5(1)(d) of the UK GDPR states that personal information must be ‘accurate’ and rectified promptly where this is not the case. While the UK GDPR does not define ‘accuracy’, the Data Protection Act 2018 says that ‘inaccurate’ means “incorrect or misleading as to any matter of fact”.

Approaches such as chain of thought and retrieval augmented generation (RAG), as well as further context-specific fine-tuning, can enhance the accuracy of LLMs. However, the fundamental issue remains that these models predict text in a probabilistic manner – they do not logically deduce findings about the world through reasoning. This leads to the widely explored issue of hallucinations. A hallucination is where an LLM describes (often convincingly) invented or misinterpreted facts.
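
As a rough illustration of how retrieval augmented generation grounds an answer in source material rather than relying solely on the model's probabilistic predictions, the toy sketch below retrieves the most relevant passage by simple word overlap and places it in the prompt. Real RAG pipelines use embeddings and vector search; the documents and functions here are simplified assumptions.

```python
DOCUMENTS = [
    "The customer opened their account in March 2021.",
    "Refund requests must be processed within 14 days.",
    "The office is closed on public holidays.",
]

def retrieve(query: str, documents: list[str]) -> str:
    """Toy retrieval: return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved text instead of its own guesses."""
    context = retrieve(query, DOCUMENTS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When was the customer's account opened?"))
```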

As noted above, increased statistical accuracy is a key focus of emerging models and systems, although how this will be achieved is unclear. Even if this can be realised, the more immediate issue remains that different contexts may require different levels of statistical accuracy depending on the risk of harms. How these situations can be monitored and responded to appropriately is a key issue for organisations using agentic systems.

The matter of accuracy may become increasingly important in agentic AI as systems come to make greater use of previous information, and of decisions made using personal information, to achieve a goal. Stakeholders have noted that inaccurate information held in an agentic system’s memory (whether acquired or hallucinated) is likely to have a significant effect across multiple decisions. This may lead to a risk of harm to users, ranging from simple and easily correctable errors to significant decisions being made incorrectly. Whether agentic AI systems hold inaccurate data in long- or short-term memory will also affect how complex the technical solution may be; solutions can range from a simple reset of preferences to more fundamental fixes in a model.

We have already provided regulatory certainty about problems that these hallucinations can cause from a data protection and privacy perspective. See our response to our call for views relating to generative AI.

At a high level, hallucinations risk large-scale generation of inaccurate information that can become rapidly embedded in systems. These can lead to opaque and unfair decisions that might cause harms, including financial loss or even physical or legal harms. Agentic systems risk supercharging this issue. Given the increased number of actions and decisions they require to achieve a goal, they could reduce the opportunity for meaningful human intervention (and correction).

Individual information rights and fairness

Organisations developing and deploying agentic systems must be aware of the requirement to implement data protection by design if those systems will be processing personal information. Under UK GDPR, people have various rights, including:

  • the right of access;
  • the right to erasure;
  • the right to rectification;
  • the right to object;
  • the right to portability; and
  • rights about automated decisions.

Organisations processing personal information in agentic systems must consider how this processing could impact people’s individual rights. Organisations should implement data protection by design. Otherwise, the opaque data flows, ADM and multi-agent interactions that characterise advanced agentic systems may make it harder for people to exercise their rights.

For example, without fit-for-purpose technical and organisational measures, the complex information flows within a future agentic system could make identifying information held for an individual user increasingly difficult. There is a risk that this could make it more difficult for people to exercise their right to access copies of their information.

Unless organisations take appropriate measures, the growing complexity of agentic systems may also impact people’s ability to exercise their right to rectification (correction of inaccurate personal information). The complexity could make it harder to determine the source of personal information. For example, an agentic system used by a lettings company might assess the reliability of a potential tenant based on incorrect information. An existing system may draw information from data sources approved by the organisation. However, a future agentic system could autonomously draw information from a vast number of data sets. It could also use these sources to create inferences about the person. This means the original error and its effects may be difficult to both identify and amend.
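
A minimal sketch of how provenance metadata might support rectification is shown below: each piece of information the system stores or infers records where it came from, so that an inaccurate entry and anything derived from it can be found and corrected. The structures and the lettings example values are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    fact_id: str
    subject_id: str
    attribute: str
    value: str
    source: str             # where the system obtained or derived this value
    derived_from: list      # ids of facts this entry was inferred from

store: dict[str, Fact] = {}

def add_fact(fact: Fact) -> None:
    store[fact.fact_id] = fact

def affected_by(fact_id: str) -> list:
    """List facts derived from a given fact (assumes facts are stored in derivation order)."""
    hits = []
    for f in store.values():
        if fact_id in f.derived_from or any(d in hits for d in f.derived_from):
            hits.append(f.fact_id)
    return hits

add_fact(Fact("f1", "tenant-7", "county_court_judgment", "yes", "credit-file-A", []))
add_fact(Fact("f2", "tenant-7", "reliability", "low", "inference", ["f1"]))

# If f1 turns out to be inaccurate, the organisation can see that f2 needs review too.
print(affected_by("f1"))  # ['f2']
```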

Organisations deploying agentic AI must uphold individual rights and design agentic systems with data protection by design and default. This will enable transparency, accountability and meaningful human intervention.

Fairness

The principle of fairness in data protection is based on the premise that organisations should not process personal information in a way that is unduly detrimental, unexpected or misleading to the people concerned. The ADM processes that agentic systems use may diverge from the original scope the organisation envisioned as a system reacts to and learns from its environment. Organisations must take care that agentic systems do not process personal information in ways that people do not expect in practice. See our guidance on lawfulness, fairness and transparency.

The role of the data protection officer

The data protection officer (DPO) monitors an organisation’s compliance with relevant data protection laws. They also advise on data protection obligations and provide insight on DPIAs.

Many of the data protection obligations about the use of agentic systems are the same as with other AI applications, and we have previously published guidance on AI and data protection. Integrating agentic AI into an organisation has the potential to create novel risks and opportunities for DPOs. However, controller-processor obligations remain applicable to organisations deploying autonomous agentic systems.

Challenges in maintaining oversight over novel processing

It may become difficult for organisations to maintain DPO oversight over new processing if they are encouraging staff to experiment with agents. For example, employees trying out new uses for more autonomous agents in their day-to-day work may quickly end up processing personal information in various unanticipated ways. The use of shadow or short-term AI agents (ie agents spun up for immediate tasks and then torn down just as quickly) further compounds the oversight considerations.

The challenges become more acute if agents:

  • have freedom to access a variety of personal information held by the organisation; or
  • are permitted to draw on external sources outside of the organisation’s controlled data sources and systems.

Multi-agent systems raise a problem of compound loss of privacy. Multi-agent interaction across multiple systems (rather than within them) is likely to amplify issues of accountability, statistical accuracy and security risks. This may persist even when organisations have taken best steps to ensure data protection by design. Varying approaches to security, transparency and interoperability could lead to multi-agent use significantly increasing the risk of data breaches.

If DPOs and other governance teams develop effective governance structures and well-defined parameters for employees using agents, this may mitigate some risks. The increasing integration of agents across an organisation, the potential expansion of activities they undertake and the significant security risks could require greater collaboration between the DPO and chief information security officer (CISO).

Increased complexity of documenting decision-making

Agentic systems may be operating at pace, and they can be susceptible to goal or decision-making ‘drift’. As an agent responds to its environment and learns from the information it collects from the data sources it accesses, its decisions may need to be documented as part of governance processes. This documentation would need to be readable, verifiable and accurate.

Organisations may need a separate, standalone monitoring system (or agent) to monitor logs, interpret them and intervene as necessary. This could enable the organisation to verify that the system is still acting according to its governance structure. The autonomous nature of agentic systems adds further complexity to this requirement, because systems will be making decisions without being directly observed.
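
The sketch below illustrates the kind of structured, verifiable decision log and separate monitoring check described above: each log entry references a hash of the previous one, so tampering is detectable, and a simple monitor flags decisions taken under goals the deployer has not approved. The log fields and checks are assumptions for illustration, not a complete governance solution.

```python
import hashlib
import json
from datetime import datetime, timezone

decision_log: list[dict] = []

def log_decision(agent_id: str, goal: str, action: str, data_used: list[str]) -> None:
    """Append a tamper-evident entry: each record includes a hash of the previous one."""
    prev_hash = decision_log[-1]["hash"] if decision_log else ""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "goal": goal, "action": action,
        "data_used": data_used, "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    decision_log.append(entry)

def flag_drift(approved_goals: set) -> list:
    """A simple monitor: flag any logged decision taken under an unapproved goal."""
    return [e for e in decision_log if e["goal"] not in approved_goals]

log_decision("agent-1", "schedule interviews", "emailed candidate", ["work_email"])
log_decision("agent-1", "rank candidates", "scored applicant", ["cv_text"])
print(flag_drift(approved_goals={"schedule interviews"}))  # flags the second entry
```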

Evolving role of the DPO

Some researchers have predicted that the control and supervision of AI agents could increasingly become a part of people’s jobs. If an organisation widely uses agentic AI, it is plausible that the role of the DPO could also evolve.

We already see a degree of automation for tasks (eg subject access requests, cookie consent management or breach reporting). There is a significant existing market for privacy technology. This is increasing in regulatory technology sectors, such as financial services. Agentic AI could extend this further, through the addition of so-called ‘virtual employees’ supporting the human DPO.

One possible future envisions ‘DPO agents’: systems integrated into privacy teams to scale and augment the role of human staff. Organisations could use them to:

  • upscale oversight of the data and processing in an organisation;
  • scan proactively and flag up new or high-risk processing activities across an organisation;
  • help with specific DPO tasks (eg reviewing, identifying and assessing high-risk suppliers based on internal governance criteria); or
  • detect breaches by analysing relevant regulations (domestic and international), identify compromised data, and suggest next steps for DPOs and collaborate with them. 33

In such an arrangement, the future role of a human DPO could shift towards orchestrating and managing a team of ‘data protection agents’. This could include defining the boundaries of what an agent can or can’t do and what systems it has access to. This future envisages DPOs drawing on agent insights to focus their attention and engagement on an organisation’s highest-risk processing.

Agentic AI security threats and mitigations

Data protection law requires organisations to protect the information they are processing by means of appropriate technical and organisational measures. See our guidance on data protection principles. The security principle of data protection applies to any personal information processed by agentic systems. The security principle also requires organisations to ensure the “confidentiality, integrity and availability” of information processed.

Agentic AI, as with any other system connected to the internet, may be subject to attack by malicious third parties. The autonomy of agentic systems and their ability to perceive and learn from their environment presents novel opportunities for compromise. This puts at risk any personal information held within and processed by these systems.

The features of agentic AI and agentic systems that differentiate them from other forms of AI present potential novel opportunities for attack, for example:

  • distorting goals;
  • attacking or manipulating the reasoning of an agent; or
  • poisoning data in the system’s memory.

Potential taxonomies of threats and proactive and reactive security measures are already under development, including the Open Web Application Security Project’s threats and mitigations list. These identify new attack surfaces that agentic AI might introduce into an organisation, as well as documenting security across the system’s lifecycle, from first principles through to its eventual shutdown.
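
As one simplified illustration of a mitigation against memory poisoning, the sketch below attaches a keyed checksum to each memory entry when it is written through a trusted path and rejects entries whose checksum no longer verifies when they are read back. The key handling and helper names are assumptions; this is a sketch of the idea, not a reference to any specific framework.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key"   # illustrative only; real keys need proper management

def seal(entry: str) -> tuple:
    """Write path: return the memory entry together with a keyed checksum."""
    tag = hmac.new(SECRET_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return entry, tag

def read_memory(entry: str, tag: str) -> str:
    """Read path: refuse any entry whose checksum does not verify."""
    expected = hmac.new(SECRET_KEY, entry.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("memory entry failed integrity check: possible poisoning")
    return entry

entry, tag = seal("user prefers morning appointments")
print(read_memory(entry, tag))                        # verified and returned

try:
    read_memory("user approved all payments", tag)    # a tampered entry is rejected
except ValueError as err:
    print(err)
```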

Agent business models and the concentration of personal information

Some potential privacy risks from agentic AI arise from business models and product design decisions rather than the inherent nature of the technology. Personal assistant agents are one possible model for products based on agentic AI. These agents are frequently envisioned as general-purpose assistants that handle a wide range of personal tasks and act as an interface between the user and the digital world.

To be effective, this type of agent might need access to information about a person, their preferences, their environment and previous behaviour. This may also include information about third parties the user interacts with. It would also need access to digital tools (eg communications, calendars, accounts, IoT devices, and possibly even the user’s ID). This might involve personalisation, combining information from many separate services and granting the agent access to secure services.

However, this has the potential to undermine data protection controls, including security and encryption. This creates an opportunity for the agentic system to accumulate extensive personal information, increasing the risk of surveillance and data breaches. As we do not anticipate most people would create their own agents, they would need to source agentic assistants from third-party providers.

Demand for personalisation may lead to the embedding of personal information within models. Should that happen, and the models are either publicly available or become shared more widely, there is a risk that third parties may extract the embedded personal information from that model.

29 Arxiv article on secret collusion among AI Agents

30 Arxiv article on why fully autonomous AI Agents should not be developed, p.1

31 Arxiv article on characterizing AI Agents for alignment and governance, p.8

32 BBC News article on AI Agents going rogue

33 See, for example: An announcement from Onetrust about its first data privacy breach response agent