Information Commissioner: People must trust their information is protected in the age of AI
- Date 5 June 2025
- Type News
We are stepping up our supervision of AI and biometric technologies so people can trust that even the most innovative products and services are using their personal information responsibly.
Launched this morning (5 June), our new AI and biometrics strategy aims to ensure organisations are developing and deploying new technologies lawfully, supporting them to innovate and grow while protecting the public.
John Edwards, UK Information Commissioner, said:
“Our personal information powers the economy, bringing new opportunities for organisations to innovate with AI and biometric technologies. But to confidently engage with AI-powered products and services, people need to trust their personal information is in safe hands. It is our job as the regulator to scrutinise emerging technologies - agentic AI, for example - so we can make sure effective protections are in place, and personal information is used in ways that both drive innovation and earn people’s trust.”
New research on automated decision making (ADM) and biometric technologies reveals that the public expect to understand exactly when and how AI-powered systems affect them, and they are concerned about the consequences when these technologies go wrong – for example, if facial recognition technology (FRT) is used inaccurately, or a flawed automated decision impacts their job application. Over half (54%) of people surveyed shared concerns that the use of FRT by police would infringe on their right to privacy.
The ICO is focusing on uses of AI and biometrics that are prevalent today, that may benefit people’s everyday lives, and that carry the greatest concern and potential for harm if misused. It will provide organisations with certainty and the public with reassurance by:
- reviewing the use of automated decision making (ADM) systems by the recruitment industry and working with early adopters in central government such as the Department for Work and Pensions;
- conducting audits and producing guidance on the lawful, fair and proportionate use of facial recognition technology (FRT) by police forces;
- setting clear expectations to protect people’s personal information when used to train generative AI foundation models;
- developing a statutory code of practice for organisations developing or deploying AI responsibly to support innovation while safeguarding privacy; and
- scrutinising emerging AI risks and trends, such as the rise of agentic AI, as systems become increasingly capable of acting autonomously.
The strategy was launched at our 40th anniversary event with the AI APPG in Parliament this morning, where Parliamentarians – including Dawn Butler MP and the Lord Clement-Jones CBE – industry and civil society gathered to discuss the power of privacy in responsible AI use across the economy.
Speaking to a room of 100+ attendees this morning, John Edwards, Information Commissioner, said:
“The same data protection principles apply now as they always have – trust matters and it can only be built by organisations using people’s personal information responsibly. Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails. We are here, as we were 40 years ago, to make compliance easier and ensure those guardrails are in place.”
Dawn Butler MP, APPG AI Vice Chair, said:
“Artificial intelligence is more than just a technology change; it is a change in society. It will increasingly change how we get health care, attend school, travel, and even experience democracy. But AI must work for everyone, not just a few people, to change things. And that involves putting fairness, openness, and inclusion into the underpinnings.”
The Lord Clement-Jones CBE, APPG AI Co-Chair, said:
“The AI revolution must be founded on trust. Privacy, transparency, and accountability are not impediments to innovation; they constitute its foundation. AI is advancing rapidly, transitioning from generative models to autonomous systems. However, increased speed introduces complexity. Complexity entails risk. We must guarantee that innovation does not compromise public trust, individual rights, or democratic principles.”
The latest strategy builds on both our track record of regulating technology and our recent commitments to the Government on supporting economic growth. This includes publishing policy positions on generative AI and providing practical support through innovation services, as well as taking necessary action, such as intervening over Snap’s AI chatbot and ordering Serco Leisure to stop using biometric technology to monitor its employees.
Over the next year, we will consult on an update to our ADM and profiling guidance, develop a statutory code of practice on AI and ADM, and produce a horizon scanning report on the data protection implications of agentic AI.
Find out more about our 40-year history of protecting people’s privacy during pivotal moments by visiting our digital exhibition ‘Our Lives, Our Privacy’.