Introduction


Tech Futures is our technology foresight series. Each report explores an emerging technology, and we share our early thinking and understanding. The reports are not guidance or formal regulatory expectations but part of our process for responding to new technologies. In this report on agentic AI, we:

  • explain our understanding of agentic AI;
  • identify its potential developments and use cases;
  • highlight potential data protection issues that may emerge with increased adoption of agentic AI; and
  • present potential innovation opportunities that could support information rights.

We have developed the report based on desk research, stakeholder interviews and futures methodologies. Annex 1 provides more detail on the methodology.

Why agentic AI?

In June 2025, we published Preventing harm, promoting trust: our AI and biometrics strategy. In this strategy, we committed to:

  • work with industry to explore the data protection implications of agentic AI over the next two to five years; and
  • publish a Tech Futures report addressing issues such as accountability and redress over the longer term.

Our ambition is to encourage responsible development and use of agentic AI.

There is a long-term ambition in the field of AI for sophisticated AI assistants and automating complex activities. Recent technological advances open up the possibility of AI agents with increased capabilities.

Prominent technology companies are talking about their intention to develop or use AI agents. 1 Some consider that agentic AI offers the potential economic pay-off for investment in generative AI over recent years. 2

As such, there have been a range of recent announcements, including:

  • updates to some major LLMs, including those designed to enable more complex agents 3 or functions such as agentic web browsing; 4
  • release of dedicated tools for building agents using these models; 5
  • anticipated release of an experimental ‘agentic workspace’ that will enable developers to test using agents to complete tasks on their computer; 6 and
  • release of an agentic AI platform by a major customer relations platform provider. 7

Reports indicate that the spread and deployment of AI are having a large impact on the UK’s economy. AI companies generated over £14 billion in revenue in 2023. 8 The number of AI companies is growing, with a 2024 study showing a 17% increase from the previous year. This growth could continue as agentic AI continues to develop and organisations adopt it more widely. The UK Government has suggested that agentic AI could help to revolutionise the way people interact with public services. 9

Some predict that the spread of AI and the eventual deployment of agentic AI could have a bigger impact on the world economy and finance than the internet. Others are more cautious, expressing concern that some commentators exaggerate agentic AI’s potential and capabilities.

We consider that the technical advances, combined with the market attention, require us to understand potential developments in this area. We must be able to separate hype from real potential.

The widespread use of AI agents could raise challenges for privacy and data protection, including accountability, transparency, data minimisation and purpose limitation. If organisations fail to demonstrate that their AI agents comply with data protection law, they risk undermining public trust. Without that trust, people may be less willing to support or work with AI-powered services. This creates a barrier to responsible adoption across the UK economy.

There are also potential opportunities for innovations in agentic AI that could support data protection, privacy and information rights. For example, organisations can apply privacy by design when creating agentic systems, or agents could support or automate the exercise of information rights. As supporting responsible innovation is part of our contribution to economic growth, we would like to encourage these opportunities by drawing attention to them.

This report accompanies internal work we are doing to understand how agentic AI is developed and used in the real world, as we develop a statutory code of practice on AI and automated decision-making. It builds upon our previous Tech Horizons report on Personalised AI and our published guidance on AI and data protection. Additionally, we have published the outcomes of our consultation on generative AI, which sets out our analysis of how specific areas of data protection law apply to generative AI systems.

This report explores different regulatory scenarios so we can be ready for the various ways agents might be adopted.

What are agentic AI and AI agents?

Definitions of agentic AI and AI agents vary. Organisations use a variety of terms to market and promote these technologies, and these can differ from definitions in scientific literature. In this section, we set out how we understand agentic AI for the purposes and scope of this report.

In computing, an agent is software or a system that can carry out processes or tasks with varying levels of sophistication and automation. One relevant example is automatic stock management and ordering. Until recently, agents have typically been specialised and designed to perform specific tasks within pre-set limits.
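By way of contrast with the LLM-based systems discussed below, a traditional agent of this kind can be expressed as a handful of fixed rules. The following sketch is purely illustrative; the thresholds and product names are invented:

```python
# A traditional rule-based ordering agent: it acts without human
# intervention, but only within fixed, pre-set limits.

REORDER_THRESHOLD = 20   # reorder when stock falls below this level
REORDER_QUANTITY = 100   # always order the same fixed amount

def stock_agent(inventory: dict[str, int]) -> list[tuple[str, int]]:
    """Return a list of (product, quantity) purchase orders."""
    orders = []
    for product, stock in inventory.items():
        if stock < REORDER_THRESHOLD:
            orders.append((product, REORDER_QUANTITY))
    return orders

print(stock_agent({"printer paper": 12, "toner": 45}))
# -> [('printer paper', 100)]
```

Everything such an agent can do is anticipated in advance by its designer; it cannot generate new approaches to a problem.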

Recently, advances and new approaches in AI have increased the potential autonomy and range of tasks that people may give AI agents. This is leading to novel applications and tools with new capabilities.

Large language models (LLMs) are statistical models trained on vast amounts of language. Along with other types of foundation models, they are one of the technologies behind the generative AI tools released in recent years. When LLMs or foundation models are integrated (‘scaffolded’) with other tools, including databases, memory, computer operating systems and ways of interacting with the world, they create what industry is calling agentic AI. This specific form might also be called ‘agentic LLMs’.

An agentic system is any computing system that makes use of this agentic capability. The agentic nature of a foundation model can vary significantly depending on which tools it is scaffolded with.
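To make the idea of scaffolding concrete, the sketch below wraps a model behind a registry of callable tools. It is a minimal illustration only: call_llm is a hypothetical stand-in for any model API, and the tools shown are invented examples.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical model call; here it always returns a JSON tool request."""
    return json.dumps({"tool": "search_database", "args": {"query": prompt}})

# The 'scaffolding': external capabilities the model can invoke.
TOOLS = {
    "search_database": lambda query: f"records matching '{query}'",
    "read_memory": lambda key: f"stored note about '{key}'",
}

def agentic_step(user_request: str) -> str:
    """One scaffolded step: the model chooses a tool, the system executes it."""
    request = json.loads(call_llm(user_request))
    return TOOLS[request["tool"]](**request["args"])

print(agentic_step("overdue invoices"))
```

The degree of agency comes less from the model itself than from what it is scaffolded with: the same model given a payment API or operating system access has far greater reach than one limited to a database lookup.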

This approach has fundamental differences from previous agent systems. LLMs and other highly capable general-purpose AI models can enable agentic systems to:

  • work with contextual information;
  • take instructions and provide information in natural language;
  • use knowledge embedded in their training data;
  • handle various types of information; and, potentially,
  • perform iterative ‘reasoning’.

They can function in a wider range of circumstances and potentially allow for long-term planning, goal-oriented behaviour and adaptive decision-making.

Importantly, developers are designing modern AI agents that can create and execute context-specific plans in more variable environments, with less human direction. This means that, while traditional software typically follows a fixed way to solve problems, agentic AI might generate different ways to approach a problem or achieve a goal. With appropriate scaffolding, these systems could have the tools to put those plans into action and affect the real world. However, because they build on LLMs, some of the characteristic weaknesses of LLMs may be present, such as:

  • making up facts (‘hallucinations’);
  • providing confidently expressed but incorrect answers; or
  • expressing bias embedded in their training data.

The novel nature of agentic systems may result in unanticipated or unprecedented actions as they work towards completing their goals. Understanding the reasoning behind their actions may require an intensive investigation of logs and error conditions. This applies particularly where an AI may have hallucinated its way to a conclusion or presented wrong answers as fact.

Agentic AI systems are likely to display, to some extent, the following four capabilities:

  • Perception – being able to work with a wide range of potential inputs. This could include natural language and unstructured sources created for different purposes that were not designed to be machine-readable.
  • Planning or reasoning-like actions – for example, generating plans, dividing tasks into sub-tasks and checking for errors. 10
  • Action – including accessing tools, interacting with people or other agents and generating and running code.
  • Learning and memory – adaptive decision-making, incorporating error corrections into future plans, learning the preferences of their users and learning from feedback.
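A schematic control loop ties these four capabilities together. The sketch below is a deliberately simplified illustration: in a real system, each step would delegate to a model and external tools rather than to the placeholder logic shown here.

```python
def run_agent(goal: str, inputs: list[str], max_steps: int = 3) -> list[str]:
    """A toy perceive-plan-act-learn loop."""
    memory: list[str] = []   # learning and memory
    log: list[str] = []
    for _ in range(max_steps):
        # Perception: take in whatever input is available next.
        observation = inputs.pop(0) if inputs else "no new input"
        # Planning: divide the goal into a context-specific sub-task.
        plan = f"sub-task for '{goal}' given '{observation}'"
        # Action: execute the plan (here, simply record it).
        result = f"executed: {plan}"
        # Learning and memory: keep the outcome for future plans.
        memory.append(result)
        log.append(result)
    return log

for step in run_agent("compile sales report", ["Q3 figures", "regional data"]):
    print(step)
```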

Agentic AI systems may take different forms depending on their intended use. ‘AI assistants’ are often mentioned as a potential use of agentic AI. However, beyond using agentic AI as a standalone agent, organisations can also use it:

  • in background processes;
  • as part of widespread infrastructure;
  • in operating systems; or
  • as part of other software.

In addition, several agents can be combined to form ‘multi-agent systems’.

Researchers have started to evaluate agentic AI capabilities on a range of measures: 11

  • Autonomy – How independently the AI system can operate, or how little human input is needed either to oversee a task or make it work correctly.
  • Efficacy – The ability to interact with and impact the world.
  • Goal complexity – The complexity of goals that the agent can handle. These can range from simple goals with simple solutions to complex competing goals that require a series of subtasks to achieve. A component of this is long-term planning – the extent to which the agent can work on goals over longer periods of time.
  • Generality – The ability of an agent to work across different roles, contexts and types of tasks.
  • Underspecification – The extent to which an agent can achieve a goal without someone specifying how it should do so. 12
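Such measures lend themselves to a multi-axis profile rather than a single score. Below is a minimal sketch, assuming an invented 0–5 scale and example values; the class and threshold rule are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Score an agent on each dimension, 0 (low) to 5 (high). Invented scale."""
    autonomy: int
    efficacy: int
    goal_complexity: int
    generality: int
    underspecification: int

    def needs_close_oversight(self) -> bool:
        # Toy rule: highly autonomous, high-impact agents warrant closer review.
        return self.autonomy >= 4 and self.efficacy >= 4

diary_agent = AgentProfile(autonomy=2, efficacy=2, goal_complexity=1,
                           generality=1, underspecification=1)
print(diary_agent.needs_close_oversight())  # False
```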

In this report, our focus is on the emergence of AI systems that have limits but can:

  • autonomously pursue goals;
  • adapt to new situations and contexts; and
  • exhibit some reasoning-like capacities.

We are not concerned with agents that are far more limited and task-specific, and that cannot learn and implement new approaches. Agents such as these are already on the market and outside the scope of this report. For example, we have observed the term ‘AI agents’ used to describe chatbots that are based on LLMs but not integrated with external tools.

We are also not addressing the potential emergence of artificial general intelligence (AGI), which refers to hypothetical AI systems that can match or exceed human-level performance across all tasks, or any associated existential risks.

Potential use cases

Our research and engagement with stakeholders identified several situations where organisations already use AI agents. Current and near-future uses include:

Research – Researchers are using agentic systems to assemble reports or summaries based on information sources they have access to. Examples include producing sales reports and analysis or helping with scientific research.

Coding – Programmers are already using agents to support coding. Agents can develop new scripts or check the output of human coders in real time.

Planning, organising and executing transactions – Current and near-term emerging functions include basic planning tasks, such as:

  • booking a meeting while taking into account available meeting times;
  • planning and booking travel by searching the web; and
  • asking a user to pick from recommended purchase options.

We also found current examples of agents able to automate data entry or complete more advanced customer service tasks.

More advanced, near-term versions might:

  • manage your diary engagements more proactively; and
  • plan and book social activities from prompts you give them.

Further into the future, agents may interact autonomously with various third-party agents. This may lead to use cases based on marketplaces or ecosystems of multiple agents.

Potential use cases further along our timescale are less clear. However, some AI developers are encouraging and supporting experimentation with novel use cases in the workplace.

Some stakeholders suggested that, in future, agentic AI would be able to do ‘anything’ and people and organisations could widely deploy it throughout systems and workflows. Other stakeholders suggested that UK deployments in high-risk contexts where errors could incur legal liability (such as law, finance or medicine) may happen more slowly.

Stakeholders saw potential uses across many industries in the future as agent capabilities and user comfort increase. We could see wider integration with different tools, greater interoperability between agents and improvements in early use cases. Examples could include the following:

Agentic commerce

We are already seeing early agentic commerce systems. 13 For example, early systems use agents to present a customer with a pre-populated list of retailers for something they want to buy. The agents then search the web for options, taking into account budget and personal preferences set out in a prompt. Once the customer has made a selection, and with appropriate permissions, the agents could arrange payment and coordinate with the customer’s diary to arrange delivery.

Organisations are already integrating more advanced agents into customer service roles. For example, an agent may access multiple customer management systems to check whether a received complaint is valid. In future, agents could automate tasks (such as refunding transactions, filling out forms or booking appointments). Organisations could also use agents to automate promotions by generating hyper-personalised campaigns and interacting with customers as needed across multiple communication platforms. 14

It is possible that, in the long term, agentic commerce could become standard, with customers and companies delegating more transactions to their agents to handle directly. We may see customers’ personal agents anticipating shopping needs and making proactive purchases. These could be based on broad objectives, learned and defined preferences or behaviours, or knowledge of upcoming plans, rather than specific prompts. 15

For example, if someone were planning a large purchase, their shopping agent may connect with their bank account or financial agent to check if the purchase is within their budget for the month. The shopping agent may conduct market research and schedule large purchases to get the most benefit from sales and discounts or negotiate a price directly with a seller. The financial agent may then adjust future planned spending to allow for the purchase and provide details of how this affects other spending plans. This could potentially extend to agents seeking out tailored financing options to present to the shopper for agreement.

Workplace applications

Agentic systems could improve individual and organisational functions. This might include using AI assistance to create smaller, more agile teams – or even having some functions or specialisms handled entirely by an agent.

An example of such an ‘agentic workflow’ could be an agentic IT assistant. If an employee contacts the agent with an IT issue, the agent could ask clarifying questions, take steps to diagnose and resolve the issue, and learn from the result. 16 This approach could also apply to other office functions, such as recruitment. Early agents could automatically generate a job description and filter applicants, conduct chatbot-based first-round interviews, book additional interviews and provide feedback. 17

In insurance, agentic AI could automate data entry, review unstructured claim records, flag potential evidence of fraud and generate recommendations. 18 One stakeholder highlighted the potential for workplaces to create ‘digital twins’ of a person’s job. We are already seeing early examples of this. 19

Government services

The UK Government is exploring the possibility of deploying agents in some government services by 2027. Early experiments will focus on using agents to provide tailored employment guidance and support. The future intention is to experiment with agentic systems that could automate some government ‘life admin’. This could include, for instance, automatically updating addresses and electoral registration or signing up for a new GP when a person moves house. 20

Other examples could include helping social services users complete administrative tasks and access online services. This would give social services professionals more face-to-face time with service users. 21

Automated cybersecurity applications

People and organisations are likely to use agents in future to both secure systems and attack them. Increasingly autonomous agents could enable wide-scale attacks on systems with little human involvement. 22 In the near term, they could scale up human attacker capabilities. People could task agentic systems to probe, examine and create custom attacks on remote networks.

However, agentic systems also offer opportunities for more advanced automated defence systems by protecting networks from cyber threats. Agentic systems could complement existing methods of detecting malicious activity by:

  • proactively identifying vulnerabilities before they are exploited; or
  • acting reactively to safeguard networks.

In both circumstances, they could either alert a human or potentially take steps to secure the network.

Integrated personal assistants

The ultimate commercial vision for a personal assistant is a highly personalised agent that can integrate with multiple systems and help a person manage many aspects of their life. These agentic systems could be embedded in personal devices such as phones. They could also act as a next-generation interface, seamlessly operating across every device and accessed by various methods. We may also see more specialised personal assistants emerge, such as an automated financial assistant that could manage and improve a person’s day-to-day finances.

Medical sector

In the longer term, examples might include teams of specialised agents supporting medical diagnosis or helping to create treatment plans. In a care setting, agents may interact with caregivers from many different devices or interfaces to support them with a range of tasks.

Technical developments

Stakeholders emphasised that the pace of change means it is difficult to predict what agentic AI could look like beyond the next two years. Most stakeholders agreed, however, that we would probably see agents used for a wider range of purposes in a wider range of sectors within this timeframe.

There were conflicting views about how the capabilities of agents could evolve. Generally, stakeholders anticipated that agentic technologies will continue to improve as they are used and tested in the real world. However, they suggested the rate of improvement in LLM capabilities may slow or even stop in the short to medium term.

Multiple stakeholders believed that we could see the following technical developments:

  • Truly multimodal agents: Many current agentic systems focus on natural language and text-based inputs. For example, a written conversation with a customer service chatbot, or typing a prompt for a research agent. Increasingly, research is focusing on allowing inputs and responses in different ways (eg voice, images and touch).
  • Increasing agent autonomy, and multi-agent systems moving from research to real-world applications: Most stakeholders highlighted that they expect significant developments in multi-agent systems and agent-to-agent communication in future. We could see multiple agents deployed to complete separate tasks to meet an overarching goal. Examples might include:
    • research agents in a team working on different parts of a project;
    • agents cooperating with each other to complete a complex task; or
    • agents managing a complex Internet of Things (IoT) system, such as a building.

    We could also see personal agents communicating directly with each other, or with an organisation’s agents, on a person’s behalf. This would require improved interoperability between agents, which is an ongoing area of research. A toy sketch of this kind of agent-to-agent exchange appears after this list.

  • Agentic AI embedded into a wider range of software and devices: Stakeholders suggested that in future we could see agentic AI embedded into a wide range of emerging technologies. These might include augmented reality and IoT devices, connected and autonomous vehicles and, even further into the future, robotics.

    Some stakeholders highlighted developments in ‘agentic’ operating systems. They suggested that future agentic software could be accessible in many different forms and on many types of devices. They also mentioned that people may interact with future agents in their environment (eg by voice via IoT devices), rather than through a computer or phone screen.

    This is consistent with developments we explored in our connected transport and next-generation IoT Tech Horizons report chapters. In July 2025, we published and consulted on our draft IoT guidance.

  • Greater personalisation of agents: Stakeholders agreed that in the future, agents will increasingly use profiling and have a greater understanding of, and ability to adapt to, a person’s environment, preferences and behaviours. An agent could draw on its interpretation of a person’s prompts and interactions with the model, as well as information gathered by other connected technologies or applications. For example, a more personalised agent could interpret a person’s unique learning style and adapt its content and responses for educational tasks. It could also adapt content in gaming, social media or advertising, or (in the case of a proactive personal assistant) help someone manage aspects of their life.

    This is a development we have explored further in our Personalised AI Tech Horizons report chapter. We have also published profiling guidance.
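Returning to the multi-agent developments above: as a toy illustration of agent-to-agent communication, the sketch below shows a personal agent querying an organisation’s agent on a person’s behalf. The message format and both agent functions are invented; real interoperability standards for agents remain an open research area.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    intent: str
    payload: dict

def org_booking_agent(msg: Message) -> Message:
    """Organisation-side agent: answers availability queries."""
    availability = {"tuesday": True, "wednesday": False}   # toy data
    day = msg.payload["day"]
    return Message("org_agent", "availability_reply",
                   {"day": day, "available": availability.get(day, False)})

def personal_agent(day: str) -> str:
    """Personal agent: negotiates with the organisation's agent."""
    query = Message("personal_agent", "availability_query", {"day": day})
    reply = org_booking_agent(query)
    return f"{day}: {'booking confirmed' if reply.payload['available'] else 'no slots, try another day'}"

print(personal_agent("tuesday"))
```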

We could also see the following developments:

  • Emergence of ‘self-improving’ agents: Agents capable of rewriting their own prompts to improve performance, or of learning from information they find autonomously as well as the information initially used to train them. Self-improving agents could use this additional information to improve their own models and decision-making. 23 A simplified sketch of this pattern appears after this list.
  • Ongoing research and development in control mechanisms, safety and privacy-focused technical features: Some stakeholders suggested we could see ongoing research into improvements in control mechanisms, AI safety and privacy-focused measures. For example, this could include research into:
    • real-time ethical autonomy;
    • improving and implementing techniques to mitigate errors; and
    • improving transparency.
    Further strategies to validate outputs may emerge as the autonomy and complexity of agent workflows increase. Some measures may be privacy-focused – for example, future improvements in privacy-enhancing technologies (PETs) designed for agents. Others speculated that we may see ‘privacy-focused’ on-device agents emerge.
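The ‘self-improving’ pattern described above can be sketched as a simple generate-evaluate-rewrite loop. Both call_llm and score below are hypothetical placeholders rather than any particular vendor’s API:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model call."""
    return f"draft answer produced from: {prompt}"

def score(output: str) -> float:
    """Toy self-evaluation: a real agent might use a separate critic model."""
    return 0.9 if "improved" in output else 0.5

def self_improving_agent(task: str, rounds: int = 3) -> str:
    prompt = task
    output = call_llm(prompt)
    for _ in range(rounds):
        if score(output) >= 0.8:
            break
        # The agent rewrites its own prompt and tries again.
        prompt = f"improved instructions for: {task}"
        output = call_llm(prompt)
    return output

print(self_improving_agent("summarise the quarterly accounts"))
```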

In parallel to these developments are approaches that attempt to address some of the limitations of LLMs. 24 Some developers argue that these limitations restrict the development of truly agentic systems. 25 At a high level, agents may appear proactive, able to anticipate and able to learn. However, they remain embedded in approaches that can struggle to turn text-based input and output into action in the physical world.

  • Large action models (LAMs) offer the means to potentially deliver multi-modal outputs beyond audio, visual and text-based prompts. For example, this may mean the movement or action of a device or robot. LAMs may achieve this through systems such as:
    • action tokenisation (breaking physical actions into discrete sections); and
    • autoregressive action generation (where previous actions are used to predict future actions).
    Much like LLMs, some see LAMs as benefiting from training on very large data sets. As a result, LAMs raise data protection and privacy concerns similar to those raised by LLM-based agents. A toy illustration of these mechanisms appears after this list.
  • Small language/domain-specific models (SLMs and DSMs) are approaches that seek to address both the size of data sets and the risk of hallucinations within highly technical and specialised sectors (eg law and medicine). 26 These models use smaller, more specialist training sets that reflect technical language in areas with high demands for accuracy and precision of outputs. Their size may also allow SLMs to run on local devices, improving privacy-focused approaches.
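To make the two LAM mechanisms above concrete, the sketch below breaks a described manoeuvre into discrete action tokens and then predicts the next action from the sequence so far. The token vocabulary and rules are invented for illustration:

```python
def tokenise_action(description: str) -> list[str]:
    """Action tokenisation: map a described manoeuvre to discrete tokens."""
    mapping = {
        "forward": "MOVE_FORWARD",
        "left": "TURN_LEFT",
        "right": "TURN_RIGHT",
        "pick up": "GRIP",
        "drop": "RELEASE",
    }
    return [token for phrase, token in mapping.items() if phrase in description]

def predict_next(history: list[str]) -> str:
    """Autoregressive action generation: condition on previous actions."""
    return "RELEASE" if history and history[-1] == "GRIP" else "MOVE_FORWARD"

sequence = tokenise_action("move forward, turn left, then pick up the part")
print(sequence, "->", predict_next(sequence))
# ['MOVE_FORWARD', 'TURN_LEFT', 'GRIP'] -> RELEASE
```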

While these may come to play an increased role in certain sectors, ‘smaller’ models are still large in absolute terms, even if modest compared with frontier general-purpose models such as GPT-5. Furthermore, complex goals may require multiple agentic systems acting together, rather than a single system. This raises its own concerns. As a result, issues created by probabilistic approaches to accuracy remain important, as we explore later in the report.

To deliver improved services, the fundamental processing that underpins agentic approaches (whether LLMs, SLMs or LAMs) must improve in its interpretation of, and reaction to, context. Traditional methods of providing relatively scripted responses limit an agent’s ability to interact creatively with ambiguous and uncertain situations. Research into neuro-symbolic AI seeks to address this.

Neuro-symbolic AI is a broad subfield of AI that seeks to create systems that learn to interact with the world around them without the need for strict rules or interpretation. 27 Some researchers are developing models that aim to be more ‘human-like’ in their processes. They do this by combining:

  • neuro-connectionist approaches (which seek to mimic mental processes in humans); and
  • a focus on symbolic interpretation (assigning meaning to objects, processes and actions). 28

This approach emphasises the development of self-awareness in systems to allow for better planning, improved adaptability and greater transparency. In theory, this would produce agents that could respond far faster and more naturalistically to unplanned events occurring around them. There has been significant growth in interest in this area of research in recent years, but the technology has not yet been used in commercial systems.

While neuro-symbolic approaches offer gains in accuracy and independence, they are also likely to raise heightened concerns around:

  • a growing demand for contextual and environmental information to inform the development and decision-making of neuro-symbolic processes; and
  • a lack of transparency about why agents that interpret situations creatively make the decisions they do.

Both issues are examined in greater depth later in this report, in the data protection issues section.

We do not attempt to provide a judgement about whether LLMs and the systems they support can reason in the creation of their outputs, or if they produce outputs that predict what reasoning might look like. This report operates on the premise that agentic AI outputs can be presented as reasoned and may often be interpreted and used as such by organisations and people. Whether or not this use is appropriate, it raises considerations for privacy and data protection.

1 OpenAI article on introducing Operator agent; Anthropic article on developing a computer use model; Google article on Gemini AI

2 Financial Times article on OpenAI hopes for AI agents; MIT Technology Review on helpful AI agents; Axios article on uses of generative AI

3 Anthropic news release announcing Claude Opus 4.1 (August 2025); Anthropic news release introducing Claude Sonnet 4.5 (September 2025)

4 Google news release introducing the Gemini 2.5 Computer Use model (October 2025)

5 Anthropic news release on building agents with the Claude Agent SDK (September 2025); OpenAI news release introducing AgentKit (October 2025)

6 Microsoft Support article on experimental features; Windows Central article ‘Microsoft just revealed how Windows 11 is evolving into an agentic OS’ (November 2025)

7 Salesforce news release announcing “the Agentic Enterprise” (October 2025)

8 GOV.UK Artificial Intelligence sector study 2023

9 GOV.UK article on potential uses for AI agents

10 There is an ongoing debate about the extent to which this is ‘reasoning’, and we do not adopt a position on this debate.

11 Arxiv article on characterizing AI agents for alignment and governance; Arxiv article on why fully autonomous AI agents should not be developed

12 Arxiv article on how underspecification presents challenges for credibility in modern machine learning

13 OpenAI article on ‘Buy it in ChatGPT’; Forbes article on AI shopping agents

14 Meta AI tools for businesses marketing on social media; Salesforce article on agentic commerce

15 PwC article on the rise of agentic commerce on buying behaviour; Salesforce article on agentic commerce

16 IBM article on agentic workflows

17 Salesforce article on agentic AI; Recruiter article on AI agents

18 Simplifai article on AI and insurance

19 See, for example, Personal AI portfolio of AI personas

20 GOV.UK article on potential uses for AI agents

21 Opening Remarks by Minister Josephine Teo (Government of Singapore) at Google Cloud's AI Asia Event

22 There are early examples of this. For example, in November 2025, Anthropic released a statement about their discovery of a threat actor using agentic capabilities to execute a cyber attack: Anthropic article ‘Disrupting the first reported AI-orchestrated cyber espionage campaign’.

23 Arxiv article on Self-Adapting LLMs; Google DeepMind article on AlphaEvolve

24 IBM article on Agentic AI

25 Arxiv article on Large Action Models

26 IBM article on Small Language Models

27 Arxiv article on Neurosymbolic AI

28 University of Edinburgh article on Neurosymbolic AI