Our plan of action

Over the next year (2025/26), we will take targeted action in the areas we have prioritised, ensuring that organisations can develop and deploy AI and biometric technologies with confidence and that people are safeguarded from harm.

Give organisations certainty on how they can use AI and automated decision-making (ADM) responsibly under data protection law

We will:

  • consult on an update to our ADM and profiling guidance by autumn 2025, reflecting proposed reforms in the Data (Use and Access) Bill; and
  • develop a statutory code of practice on AI and ADM, providing clear and practical guidance on transparency and explainability, bias and discrimination, and rights and redress, so organisations have certainty on how to deploy AI in ways that uphold people’s rights and build public confidence.

Ensure high standards of automated decision-making in central government, so that decisions affecting people are fair and accountable

We will:

  • learn from early adopters of ADM, such as the Department for Work and Pensions, and communicate our findings across central government to support the scaling of responsible use; and
  • set out regulatory expectations, securing assurance that departments are using ADM responsibly and with appropriate safeguards. 

Set clear expectations for the responsible use of automated decision-making in recruitment

We will:

  • scrutinise the use of ADM in recruitment by major employers and recruitment platforms, identifying risks related to transparency, discrimination and redress; and
  • publish findings and regulatory expectations, holding employers to account if they fail to respect people’s information rights.

Scrutinise foundation model developers to ensure they are protecting people’s information and preventing harm

We will:

  • secure assurances from developers that personal information used in model training is safeguarded, with appropriate controls to prevent misuse or reproduction of sensitive information, including child sexual abuse material; and
  • set clear regulatory expectations, where needed, to strengthen compliance (including for the use of special category data), and take action where unlawful model training creates risks or causes harm.

Support and ensure the proportionate and rights-respecting use of facial recognition technology (FRT) by the police

We will:

  • publish guidance clarifying how police forces can govern and use FRT in line with data protection law, with advice on organisational and technical measures to minimise risks;
  • audit police forces using FRT and publish our findings, securing assurance that deployments are well-governed and people’s rights are protected; and
  • provide expert advice to government on proposed changes to the law, ensuring any future use of FRT remains proportionate and publicly trusted. 

Anticipate and act on emerging AI risks

We will:

  • engage with industry to assess the data protection implications of agentic AI, publishing a Tech Futures report examining issues such as accountability and redress, before consulting on emerging data protection challenges; and
  • set a high bar for the lawful use of AI systems that infer subjective traits, intentions or emotions based on physical or behavioural characteristics, conducting ongoing surveillance of use cases and taking action where such systems cause harm or infringe people’s rights.