Scenarios for the future of agentic AI

Scenario planning

In this section, we use scenario planning to explore the potential near future of agentic AI development.

Scenario planning is a foresight method that aims to identify and manage key uncertainties. This involves creating a range of plausible future scenarios that encompass that uncertainty but still support flexible planning. Between them, the scenarios provide indicators about the direction of the near-future development of the technology.

We began by identifying the drivers and trends shaping how agentic AI might develop, and used them to create scenarios describing possible futures. These scenarios allow us to examine some of the opportunities and risks agentic AI might present to data protection. The drivers and trends include technical considerations such as:

  • the cost of training AI models;
  • a drop in the cost of the compute power needed to run agentic systems (because of advances in hardware and improvements in AI algorithms); and
  • the availability of datasets to train AI.

We also considered social and economic factors, including:

  • the amount of investment into agentic AI;
  • an organisational fear of missing out or being left behind (driven by the hype cycle and marketing); and
  • the draw of potential organisational savings from reduced staff and labour costs.

During our research into trends driving adoption, we also saw indications that the adoption of agentic AI could follow a similar path to the proliferation of previous AI technologies (such as generative AI and LLMs). This could include its integration into other platforms (both online and on-device). You can find more detail on some of these trends in Annex II.

We selected two variables to create a two-by-two scenario grid: the capabilities of agentic AI and the extent of its adoption. Good variables for scenario planning should be uncertain (both high and low ends should be plausible). They should also have a significant bearing on agentic AI, its impact on information rights and its regulation.

The capabilities of AI agents. Researchers have identified a wide range of characteristics for evaluating agents and other tools. Capability is a general term we use to reflect a range of component characteristics, such as:

  • autonomy;
  • generalisability (ability to work across a range of different tasks);
  • ability to handle underspecified tasks;
  • goal complexity;
  • alignment;
  • reliability;
  • efficacy; and
  • the extent to which researchers have addressed the current limitations of LLMs.

We acknowledge that these capabilities may not advance at the same rate. For example, we might see great progress in the speed of agentic AI but no great advances in its reliability. For our purposes, a single variable that bundles together capabilities is sufficient.

Capability is an important variable because of uncertainty and the significant hype around agentic AI. Many stakeholders told us that agentic AI is inevitable, but also discussed significant technological barriers to implementation. Some mentioned that the controls in place and choices made by organisations or people implementing them can impact a future agentic system’s capabilities.

There are already diverging views on what an effective, capable agentic system might look like in practice, and on whether it should be built around specialised, tightly controlled agents or general-purpose ones. Capability is likely to make a large difference to the social and economic impact of agentic AI and to how it should be regulated. While the low end of capability is close to the status quo, the high end represents a step change in technology.

At the lower end of this scale, we position AI agents that are not much more sophisticated than current chatbots and ADM tools. They likely remain based on LLMs and share most of the limitations of those models. The ability to learn and interact effectively with the environment may also remain limited, minimising physical and multi-modal outputs. AI agents remain task-specific and prone to unpredictable failures.

At the higher end of this scale, we expect technological developments to expand the capabilities of AI agents in many areas. They are capable of handling more complex problems, acting with more autonomy and in a wider range of contexts. Even at the higher end, we do not assume that agentic AI will unlock radical advances in AI research and development, causing an exponential increase in capability.

High capability is not superintelligence. However, in high capability scenarios, some agentic systems will be able to write and edit their own code. There will be tasks to which agentic AI is better or worse suited, and many areas where using agentic AI is inappropriate.

With capability, we are talking about the capability of agentic systems as a whole, not just the raw potential or abilities of agentic AI models. As set out in our scenarios, we assume that in the high capability scenario, some advances have been made in managing and controlling agentic AI systems. This reflects existing efforts and the practical desire from organisations to have reliable technologies.

Adoption of agentic AI. This variable captures the extent to which society will use agentic AI.

A low adoption scenario would see agentic AI used in limited ways, perhaps in certain sectors or for specific tasks, but not commonplace. A high adoption scenario would see agentic AI used nearly everywhere, with agentic capability integrated into many other systems. Our scenarios assume that adoption will be uneven across use cases and sectors.

The current hype around agentic AI suggests that adoption is inevitable; however, not all new technologies achieve high adoption. There are various reasons for this (eg vulnerabilities, lack of demand or inability to find a successful business model). Technology adoption can also be slow, especially when meaningful adoption requires changes to businesses and organisations, or even to social norms and laws. At present, we have seen few examples of large-scale deployment of agents, but this could change. Even for generative AI without agentic scaffolding, adoption in safety-critical areas is low, with simpler models preferred.

Adoption is also significant for how we regulate. A low adoption scenario may mean that we rarely have to consider agentic AI. A high adoption scenario means it will frequently be a factor in most of our work. Rates of adoption of agentic AI and its concentration across sectors will therefore be an important metric for us to follow.

This combination of two variables gives the following high-level scenarios for the near- to mid-term future of agentic AI:

                Low agentic capability                            High agentic capability
High adoption   Just good enough to be everywhere (scenario two)  Ubiquitous agents (scenario four)
Low adoption    Scarce, simple agents (scenario one)               Agents in waiting (scenario three)

Table 1 (above): Four scenarios for the future of agentic AI

These four scenarios formed the basis of our analysis of privacy and data protection considerations as they might present in future developments. The issues identified within those scenarios will help inform future policy thinking. They are presented here as a framework for us and various stakeholders to consider the safe development of innovative agentic AI applications.

The following scenarios aim to explore possible developments and uses of personal information by agentic AI. While the scenarios include high-level commentary on aspects of relevant data protection compliance, you should not interpret this as confirmation that the relevant processing is either desirable or legally compliant. This report does not provide ICO guidance.

Scenario one: Scarce, simple agents (low adoption, low agentic capability)

In this scenario, we see low adoption of agentic AI because of its limited capability. Agentic capabilities remain close to the current state, and the challenges of LLMs persist. Agentic AI has not lived up to the marketing hype about its potential. Software providers may offer agentic AI, but people and organisations don’t systematically use it. We would expect to see the following:

  • Growing suspicion about agentic AI’s accuracy and reliability means it is not widely used commercially or in government. Its use is mostly in low-risk situations where the limitations do not have a serious impact. AI agents remain toys, experiments and demonstrations, used mainly by early-adopter enterprises and research labs. Most organisations do not adopt agentic AI or restructure their workflows around agents. There is very little use in high-stakes tasks or highly regulated domains.
  • Wider society has little exposure to agentic AI. Data protection impacts are not fundamentally different from those of generative AI. However, public awareness of how agentic AI works and where it is used is limited.
  • Without widespread use, there are low demands for standardisation, and AI agents are often incompatible. There is little infrastructure set up to support the use of AI agents (eg protocols for agent-to-agent communication, easy agent hosting platforms or data intended to be easily accessed by agents). Similarly, users may deploy experimental agentic AI with immature governance and security practices or little attention to data protection. This reinforces concerns about their use. There is a heavy use of human supervision (eg checking outputs or human approval being required for access to particular tools) as a safeguard against failures of agentic AI.
  • Where agents are used, they have limited access to additional tools and databases and limited ability to make decisions on their own. Human supervision, intervention and error correction are required to get practical and useful results from AI agents. Each deployment is relatively isolated, limiting the risks of large-scale failures or cascading problems.
  • There is still a small number of cases where agentic AI is used inappropriately, such as giving it tasks it cannot perform reliably. For example, ‘shadow AI’, where employees use agentic AI without permission, remains a problem. This is the case particularly where the person using it sees a benefit but not the wider risks. It leads to a small number of high-profile failures, compounded by the risk-taking approach of the developers that do use low-capability AI in high-risk situations.
  • Our interaction with agentic AI occurs mainly through public complaints about bad experiences and data breaches. These cases involve organisations using low-capability agentic AI poorly or for tasks it is not suitable for, but the volume is not excessive.
  • Analysts expect investment in agentic AI to fall, along with the hype around it. Technology companies continue internal research and development to improve capabilities, reliability and robustness. They continue to create developer tools and application programming interfaces (APIs) to encourage experimentation, but they do not aggressively push agentic AI on mainstream users. Reduced investment increases the cost of using agentic AI for the average person, further discouraging use. Open-source development continues for hobbyists and some specialised applications.

Scenario two: Just good enough to be everywhere (high adoption, low agentic capability)

In this scenario, we see high adoption and use of agentic AI technologies despite the limited capabilities of agentic AI. Many of the problems with LLMs remain. This scenario may be driven by one or more of the following:

  • Aggressive marketing and release of agentic AI.
  • Low public and business awareness of flaws and friction around the implementation of agents.
  • Assessments by users that the limited capability is ‘good enough’.
  • Users ignoring agents’ limitations under pressure to deploy them.

Many of the harms in this scenario come from failures of agentic AI or ill-considered deployment. We would expect to see the following:

  • Regular inappropriate or ill-advised use of agentic AI – agentic AI used where it is not technically suitable, including in high-risk areas (eg financial services, law or healthcare).
  • High- and low-impact failures of AI agents occur regularly. Misinterpreted tasks, superficial approaches to tasks and failures on edge cases also cause frequent inconvenience.
  • Low-capability agentic tools widely embedded in services (eg shopping applications, social media, banking portals, government services and education platforms). Conversational agents become a more common interface to these services. However, they are limited and often make mistakes.
  • With large-scale adoption and agents mediating transactions, large volumes of personal information flow through agents and infrastructure providers.
  • Processing of personal information by low-reliability agentic AI may lead to data protection compliance issues. These could include:
    • data breaches caused by sharing the wrong information;
    • collecting and processing personal information without a legal basis; or
    • creating security vulnerabilities.
  • Agentic AI becomes an everyday part of our regulatory activity; however, its low capabilities lead to a high demand for intervention. Over time, we become very familiar with complaints related to data processing by agents.
  • Frequent errors result from agents that hallucinate or hit the limits of their capability. There is a high requirement for human-in-the-loop oversight and constrained autonomy (task-by-task authorisation and frequent checkpoints). Supervision costs for checking and monitoring AI agents are high (although irresponsible or ill-informed users might skip these checks).
  • Agentic AI providers use the requests, tasks, reasoning traces and outputs from these early agents to train more sophisticated models.
  • The volume of AI agents puts pressure on online systems (automated access to websites, mass data scraping, frequent and excessive API calls), including online public services. There are chaotic and unpredictable interactions between different agents. These significantly reduce transparency, making it harder to uphold data rights.
  • ‘Correcting the agent’ becomes a routine part of work and life, and this extends to data protection tasks. There is a high demand for redress and fixes for mistakes. Public dissatisfaction with agentic AI grows.
  • Agentic AI decreases information security by introducing new types of unmitigated vulnerabilities to information systems.
  • Large-scale generation of inaccurate personal information by agentic AI systems occurs.
  • There is a proliferation of ‘fake’ agentic AI (other technology marketed as ‘agentic’) and low-paid human workers standing in for or covering for AI agents. Agents may look more transformative than they are in practice.
  • There is high demand for best practice in agentic AI deployment to minimise these harms.

Scenario three: Agents in waiting (low adoption, high agentic capability)

In this scenario, we see low adoption of agentic AI, even though agentic capability has increased and many of the limitations that currently apply to LLMs and prototype agents have been overcome. For example, agents include systems to mitigate LLM hallucinations and tools for meaningful governance and control of agent actions. Factors outside the technology itself could drive low uptake and use. These factors could include:

  • increasing costs;
  • time needed for changes in business models to fully take advantage of agent capabilities;
  • residual public distrust;
  • political barriers around access to agents or to computing infrastructure;
  • lack of governance or liability frameworks;
  • caution around sharing personal or confidential data; or
  • focus on specialised applications where agents work well.

Here, data protection and privacy harms largely come from misuse of effective agentic AI, rather than errors. For example, issues arise from unwarranted intrusion or loss of control of personal information, rather than processing of inaccurate personal information. In this scenario, we would expect to see the following:

  • A smaller number of agentic AI uses and a smaller demand for regulatory intervention. The number of agentic systems is also limited, reducing the impacts of complexity on transparency.
  • Organisations deploying agentic AI tend to be knowledgeable and familiar with the technology’s capabilities. This may mean the highest adoption in areas such as research labs and technology companies.
  • Promising pilots of agentic AI happen but are not rolled out more broadly. Pilots and research projects using agentic AI still involve high-sensitivity data and use agents for complex, novel and data-intensive tasks. Developers may prioritise proof-of-concept over privacy by design.
  • Organisations view the legal, compliance and reputational risks as outweighing any potential efficiency gains for wide deployment of agentic AI. This is particularly true in highly regulated sectors or high-stakes tasks.
  • Agentic AI implementations are bespoke and customised for specific uses and users. No two implementations are the same.
  • For the ICO, no two investigations or interventions around agentic AI are the same. When we have to investigate uses of agentic AI, these investigations are likely to be complex.
  • Low public familiarity with agentic AI and relatively low use in everyday life. People are more familiar with earlier, frequently faulty AI. Users are mostly specialists, and public awareness of the capabilities of agentic AI is low. However, people may still be exposed to back-end agentic AI through its specialist users.
  • Agentic AI failures are relatively rare. Failures with a public interest get high media attention, given the background scepticism about AI.
  • With the high capabilities of agents, employees may quietly and unofficially make use of agents without the permission, knowledge or support of their employers. This is a point of tension in these workplaces and causes data protection compliance errors and privacy harms.
  • Technical ecosystems and the surrounding infrastructure for agentic AI (integration standards, best practices, oversight mechanisms, tools) are limited.
  • Technology companies may increasingly shift effort from increasing raw capability of agentic AI to trust-building and risk mitigation efforts. Sophisticated agentic AI tools are used within these organisations, potentially giving them a strong advantage over competitors.

Scenario four: Ubiquitous agents (high adoption, high agentic capability)

In this scenario, we see high adoption of agentic AI that has increased in capability, approaching the marketing hype of its early years. This is not artificial general intelligence (AGI), and AI agents are not sentient. But they are powerful and capable tools, more reliable and robust than today’s. In this context, data protection harms arise from agents ‘working as intended’ but still having impacts on people (eg privacy-invasive agents), or from inexperienced or malicious users tasking agents to build software that violates data protection requirements. In this scenario, we would expect to see the following:

  • Large numbers of effective AI agents, widely deployed across many different parts of the economy and society. A wide choice of AI agents is available, with marketplaces for agents and ‘skills’ similar to app stores.
  • Agents regularly access and process personal information. Because adoption is widespread, agents mediate massive amounts of personal information (including special category data) and organisational information.
  • Agentic AI becomes an everyday part of our regulatory activity. Agentic capabilities integrate into many of the data processing activities we work with or oversee.
  • Similarly, our investigations become highly complex because of agentic AI. When mistakes occur, they are hard to spot among the volume of agentic activity.
  • Agent-to-agent communication, including the exchange of personal information, becomes common (eg your shopping agent negotiating with a retailer’s sales agent). This creates a complicated flow of personal information.
  • People lose privacy because agents can easily be instructed to search for and collate information about them from multiple sources. Generative and agentic AI tools make coding and building data processing systems easy and accessible to non-professionals.
  • New business models based around agentic AI emerge.
  • Other modes of user interfaces (such as voice) become more common. Interaction with software shifts from menus or dashboards to natural language. Productivity suites centre around asking an agent rather than manual workflows.
  • Agents technically capable of performing compliance and governance tasks emerge, including those associated with data protection or freedom of information.
  • Agent vendors are well-resourced and influential. Agent ecosystems emerge around the dominant players, potentially raising competition issues.