Preventing harm, promoting trust: our AI and biometrics strategy
Introduction
Artificial intelligence (AI)1 is fast becoming part of everyday life. It shapes how decisions are made, how services are delivered and, through biometric technologies, how people are identified. AI’s adaptability, scalability and ability to solve complex problems promise advances across science, public services and the economy.
Biometric technologies, powered by AI, can help organisations operate more efficiently and securely and support law enforcement in keeping our communities safe.
Realising these opportunities depends on people trusting that organisations are using these technologies responsibly and in compliance with the law. From a data protection perspective, people need to be able to trust that organisations using these technologies:
- are transparent about the personal information they use;
- use this personal information fairly; and
- take appropriate care, putting in place governance and technical measures to protect people from harm.
A lack of transparency about how organisations use personal information risks undermining public trust in AI and biometric technologies. Without that trust, people are less likely to support or engage with AI-powered services. This creates a barrier to responsible adoption across the UK economy.
Public concerns are especially strong in high-impact cases:
- In policing, 54% of adults have some concerns that facial recognition technology would impact civil liberties and infringe on people’s right to privacy.2
- In recruitment, 64% believe employers will rely too heavily on AI, and 61% are concerned it will perform worse than human decision-makers when assessing individual circumstances.3
- In public services, concern about the use of AI to determine welfare eligibility has risen from 44% in 2022/23 to 59% in 2024/25.4
Public concern is not limited to outcomes. It extends to how organisations use personal information to build AI systems in the first place. These perceptions risk hampering the uptake of AI and biometric technologies. In 2024, just 8% of UK organisations reported using AI decision-making tools when processing personal information, and 7% reported using facial or biometric recognition. Both were up only marginally from the previous year.5
Our objective is to empower organisations to use these complex and evolving AI and biometric technologies in line with data protection law, so that people are protected and have greater trust and confidence in how organisations use them.
We will use our regulatory guidance and tools to signal clear expectations and provide certainty on how data protection law applies. This will help organisations across the public and private sectors ensure their governance and use of personal information results in responsible innovation, prevents harm and promotes trust.
However, we will not hesitate to use our formal powers to safeguard people’s rights if organisations are using personal information recklessly or seeking to avoid their responsibilities. By intervening proportionately, we will create a fairer playing field for compliant organisations and ensure robust protections for people.
This strategy sets out how we will:
- set clear expectations for responsible AI through a statutory code of practice for organisations developing or deploying AI and automated decision-making, to enable innovation while safeguarding privacy;
- secure public confidence in generative AI foundation models by working with developers to ensure they use people’s information responsibly and lawfully in training these models;
- ensure that automated decision-making (ADM) systems are governed and used in a way that is fair to people, focusing on how they are used in recruitment and in public services; and
- ensure the fair and proportionate use of facial recognition technology (FRT), working with law enforcement to ensure that the technology is effective and people’s rights are protected.
As these technologies evolve, new risks are emerging. AI systems that are increasingly capable of acting autonomously – so-called agentic AI – raise questions about accountability and redress. Meanwhile, some systems make speculative inferences about people’s intentions or emotions based on their physical or behavioural characteristics. These developments demand careful scrutiny. We will remain responsive to new issues that emerge and be transparent when our focus needs to shift.
This strategy supports our ICO25 strategic enduring objectives to:
- promote responsible innovation and sustainable economic growth; and
- safeguard and empower people, particularly those who need extra support to protect themselves.
It also reinforces our ongoing commitment to supporting economic growth by addressing the risk that regulatory uncertainty becomes a barrier to organisations innovating with, and adopting, new technologies.
1 See Glossary for definition of this and other technical terms.
2 Understanding the UK public’s views and experiences of biometric technologies, ICO, 2025.
3 Understanding public attitudes to AI, Alan Turing Institute and Ada Lovelace Institute, 2023.
4 How do people feel about AI?, Ada Lovelace Institute and Alan Turing Institute, 2025.
5 The ICO’s Data Controller Study 2025 research will be published in the summer. For already published research see: Data controller study, ICO, 2024.