Annex I: Methodology
In this annex, we provide additional information about the methodology we followed to produce this report.
Futurecast
We used a bespoke version of the UN Global Pulse Futurecast to project plausible futures and to consider how the increased use of agentic systems might affect data protection for stakeholders. Participants at two internal workshops were assigned the roles of interested parties who would be affected by, or could shape, the development of agentic AI. As the future timeline developed, we generated events that would affect agentic AI.
We used the PESTLE framework as the basis for analysis and discussion. The framework encourages participants to identify sets of political, economic, social, technological, legal and environmental factors impacting the future. We repeated the simulation with different attendees to identify any patterns or recurring themes.
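To make the framework concrete, the minimal sketch below shows one way tagged timeline events from a workshop run could be grouped by PESTLE factor; the events and their tags are invented for illustration and are not taken from our workshops.

```python
from collections import defaultdict

# The six PESTLE factors used to structure discussion.
PESTLE = ["political", "economic", "social", "technological", "legal", "environmental"]

# Hypothetical timeline events from a workshop run, each tagged with a PESTLE factor.
events = [
    ("New rules on automated decision-making take effect", "legal"),
    ("Agentic assistants bundled with consumer devices", "technological"),
    ("Public attitudes shift on delegating tasks to agents", "social"),
]

# Group events by factor so recurring themes can be compared across workshop runs.
themes = defaultdict(list)
for description, factor in events:
    assert factor in PESTLE, f"unknown factor: {factor}"
    themes[factor].append(description)

for factor in PESTLE:
    print(f"{factor}: {themes[factor]}")
```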
Several recurring themes emerged, including issues around control, transparency and economic impact. Participants also raised considerations around regulation and how society might change as a result of the widespread use of agentic systems.
Stakeholder engagement
Following our desk research phase, we progressed to engaging with stakeholders whom we expected to:
- influence the use of agentic AI in future;
- help shape its technological development; or
- have an interest in the issues (including social, privacy or security) arising from its use.
We conducted interviews with academics, interest groups and industry. These interviews helped inform our thinking on developing trends, potential use cases and critical uncertainties. They also allowed us to validate some of the conclusions drawn from the research phase and consider practicalities that organisations and people are addressing in the real world.
We asked stakeholders questions focusing on:
- technical development, use cases and the state of the art;
- best and worst case outcomes for agentic AI;
- information rights issues;
- control and governance mechanisms and risk mitigation for agentic AI;
- development of the market for agentic AI; and
- crucial actions for developing agentic AI to achieve positive outcomes.
We collated and consolidated stakeholder answers as another input for this paper and for developing our futures scenarios.
Scenario planning
For this topic, we adopted a scenario planning methodology to manage the uncertainty around agentic AI. Scenario planning is a foresight methodology that supports organisational planning under uncertainty. The aim is to identify and describe a range of possible futures, and to support flexible plans that work across several of them. This allows teams to identify key uncertainties and create indicators that help decision makers understand which scenario they are heading towards.
Rather than trying to predict the future in the face of high uncertainty, the method creates multiple scenarios. This process helped us to be explicit about our assumptions. The method can scale up or down depending on available resources, but works best when it is iterative, expert-led and stakeholder-centred.
An early step is identifying potential variables for a set of scenarios. These should be truly variable, with both ends of the range plausible and clearly defined. They should also be significant: relevant to important aspects of the technology and the role of the regulator, and likely to have a considerable impact. In addition, the variables should be independent, avoiding strong correlations between them. When combined, the variables should result in four or so distinct, plausible and robust scenarios. Because some selection of variables is always necessary, the chosen set needs to produce scenarios that are genuinely useful to the decision maker.
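As a worked illustration of how two well-chosen variables combine into a scenario set, the sketch below crosses two hypothetical two-ended variables to produce a 2×2 grid of four scenarios; the variable names and extremes are assumptions for illustration, not the actual workshop inputs.

```python
from itertools import product

# Two illustrative variables, each reduced to two plausible extremes.
adoption = ["low adoption", "high adoption"]
capability = ["narrow capabilities", "general capabilities"]

# Crossing two independent, two-ended variables yields a 2x2 grid of four scenarios.
for quadrant, (a, c) in enumerate(product(adoption, capability), start=1):
    print(f"Scenario {quadrant}: {a}, {c}")
```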
Based on our research and stakeholder engagement, we compiled a longlist of potential variables that could be pivotal for the development of agentic AI. These included the following:
- Various ways of expressing or evaluating the capabilities of agentic AI:
  - Autonomy or agency of agents (from narrow, task-specific tools to fully autonomous decision-making)
  - Generality of agentic AI (from dedicated agents for single tasks to general agents potentially adaptable to many or any tasks)
  - Ability of agentic AI to handle underspecification
  - Transparency and explainability of agentic AI
  - Goal complexity that AI agents can handle
  - Controllability or alignment of agentic AI with values or laws
  - Reliability of agentic systems
  - Efficacy of agentic systems
  - Security of agentic systems
  - Extent of the integration of privacy-enhancing technologies into agents
  - Extent to which limitations of LLMs have been overcome
- Measures of the diffusion or adoption of agentic AI:
  - Rate or extent of adoption or uptake of agentic AI
  - Accessibility of agentic AI to business or the general public
  - Rate of technological advancement in agentic AI
- Dominant methods of use of agentic AI:
  - A single agent for everything, multiple agents, or integration of agentic AI into existing systems
  - From automated problem solvers to teamwork assistants
- Market concentration or diversity:
  - Open-source or proprietary agentic models and tools
  - Distributed or concentrated agentic innovation ecosystems
- Social or political variables:
  - Public trust in AI, or in agentic AI specifically
  - Global regulatory alignment or fragmentation
  - Incidence of AI-related crises or scandals
  - Existence or strength of ethical norms around agentic AI use
  - Presence or absence of governance structures and mechanisms
  - Extent of liability mechanisms
  - Levels of AI literacy
  - Levels of AI governance
Steps taken to build and validate scenarios
We prioritised this longlist based on three criteria (an illustrative scoring sketch follows this list):
- Impact – how significantly the variable could shape the future
- Uncertainty – how unpredictable the variable is
- Relevance – how closely it relates to our focal areas (in our case, information rights and our remit)
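One way to make this prioritisation concrete is a simple scoring matrix, as in the sketch below. The variables shown are drawn from our longlist, but the numeric scores and the unweighted-sum approach are illustrative assumptions rather than our actual workshop method.

```python
# Hypothetical 1-5 scores against the three prioritisation criteria.
longlist_scores = {
    "AI agent adoption":         {"impact": 5, "uncertainty": 4, "relevance": 5},
    "Capabilities of AI agents": {"impact": 5, "uncertainty": 5, "relevance": 4},
    "Market diversity":          {"impact": 4, "uncertainty": 4, "relevance": 4},
    "Levels of AI literacy":     {"impact": 3, "uncertainty": 2, "relevance": 3},
}

def priority(scores):
    # Unweighted sum; a real exercise might weight the criteria, or rely on
    # structured group discussion rather than numeric scoring.
    return scores["impact"] + scores["uncertainty"] + scores["relevance"]

# Rank the longlist by total score, highest first.
for variable, scores in sorted(longlist_scores.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{priority(scores):>2}  {variable}")
```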
From this evaluation, participants in our internal workshop reached consensus on three leading candidate variables:
- AI agent adoption
- Market diversity
- Capabilities of AI agents
We then developed a set of four scenarios for each pairwise combination of these three candidate variables. This allowed us to check for plausibility, avoid overly correlated variables and refine our understanding of the extent of each variable.
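The sketch below illustrates this step: with three two-ended variables there are three pairwise combinations, each yielding a candidate set of four scenarios. The extremes assigned to each variable are assumptions for illustration.

```python
from itertools import combinations, product

# The three shortlisted variables, each reduced to two assumed extremes.
variables = {
    "AI agent adoption": ("low", "high"),
    "Market diversity": ("concentrated", "diverse"),
    "Capabilities of AI agents": ("narrow", "general"),
}

# Each pair of variables defines one candidate set: three pairs, four scenarios each.
for (name_a, ends_a), (name_b, ends_b) in combinations(variables.items(), 2):
    print(f"Candidate set: {name_a} x {name_b}")
    for a, b in product(ends_a, ends_b):
        print(f"  {name_a} = {a}; {name_b} = {b}")
```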
We shared these mock-up scenario sets with key stakeholders internally and used them to frame questions to external stakeholders. We compared the sets against each other to assess the type and quality of the insights they generated.
From these candidates, we chose two variables: the extent of adoption of agentic AI and the general capabilities of agentic AI. We then returned to the findings from our research, stakeholder engagement and the Futurecast work to further populate and detail the four selected scenarios that we set out in this report.
To develop the scenarios in detail, we used a consistent set of indicators, making sure that each scenario addressed the same dimensions. This included asking the following questions, given the parameters of each scenario (an illustrative template follows the list):
- What are key players doing in these scenarios?
- What are the potential data protection and privacy harms, and are there particular ways that these harms would manifest?
- What are the likely areas for innovation in agentic AI, and what is driving this?
- Where are the sources of economic growth?
- What are the business models and distribution channels around agentic AI?
- How is agentic AI being marketed or reported on?
- Are the public using agentic AI or agents?
- What do agentic ecosystems look like?
- What tasks do organisations and people use agentic AI for (either successfully or unsuccessfully)?
- What do agentic user interfaces look like?
- What are the opportunities for privacy by design?
- What cases and complaints might the ICO be seeing?
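As a closing illustration, the sketch below shows one way such an indicator set could be held as a simple template, so that every scenario is checked against the same questions. The class, field names and abbreviated indicator list are hypothetical, not an actual artefact of our process.

```python
from dataclasses import dataclass, field

# Abbreviated, illustrative subset of the indicator questions above.
INDICATORS = [
    "key players",
    "data protection and privacy harms",
    "areas of innovation",
    "business models and distribution channels",
    "public use of agents",
    "opportunities for privacy by design",
    "likely cases and complaints",
]

@dataclass
class Scenario:
    name: str
    adoption: str    # e.g. "low" or "high"
    capability: str  # e.g. "narrow" or "general"
    answers: dict = field(default_factory=dict)

    def missing(self):
        # Indicator questions not yet answered for this scenario.
        return [q for q in INDICATORS if q not in self.answers]

s = Scenario(name="High adoption, general capability", adoption="high", capability="general")
s.answers["key players"] = "..."  # populated from research and stakeholder input
print(s.missing())
```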