John Edwards speaks at ICO’s event with the AI APPG in Parliament

  • Date 5 June 2025
  • Type Speech

Information Commissioner John Edwards’ speech at the ICO Parliamentary event with AI APPG, delivered on Thursday 5 June

Check against delivery

Good morning everyone and welcome to our event on how privacy can power the AI revolution. Thank you to our hosts, the AI APPG - I am pleased to see so many people from a wide range of sectors joining us. 

If you are here today, you will be invested in – and likely excited by - the transformative opportunities AI presents. AI is no longer the prerogative of Silicon Valley tech giants or multi-national corporations with the budget to access the cutting-edge of technology. The whole economy has woken up to the power of harnessing AI to drive responsible innovation – which is why it is really promising to see such engagement today in the role privacy plays in supporting AI adoption. 

These opportunities must be built on a foundation of public trust. People need to trust that organisations are using their personal information responsibly so that they feel empowered to engage with AI-driven products and services, and to fuel further growth and investment with their data. 

Since the ICO’s inception in 1984, new and novel technologies have transformed how we think about our privacy. From mobile phones and smart devices to AI chatbots and social media, we are all sharing more personal information than ever before and in ways we would never have imagined 40 years ago. 

But the same data protection principles apply now as they always have – trust matters and it can only be built by organisations using people’s personal information responsibly. Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails. We are here, as we were 40 years ago, to make compliance easier and ensure those guardrails are in place so organisations can innovate and invest in AI while keeping people safe. 

Our focus on AI and biometrics is not new – from stepping in when pupils were using their faces to pay for school dinners to investigating the Met Police’s Gangs Matrix that jeopardised people’s trust, our history has prepared us to take emerging technologies in our stride. 

We have moved swiftly to provide clarity on new areas such as generative AI and we have taken action where needed – such as intervening with Snap’s AI chatbot and ordering Serco Leisure to stop using biometric technology to monitor its employees. As we said in our recent response to Government, we are supporting businesses of all sizes with our innovation advice services.  

Today, we are launching our AI and Biometrics strategy, setting the direction of travel for our work over the next year. We will be ramping up our scrutiny across the AI ecosystem - focusing particularly on areas where there is the potential for public benefit but where we know there are concerns and a real risk of harm. Our research shows that people expect to understand when and how AI systems affect them, and they are worried about the consequences when these technologies go wrong – such as being incorrectly identified by facial recognition, or losing a job opportunity through erroneous automated decision-making (ADM). 

Our strategy – which you can read via the QR code – covers how we will:

  • develop a statutory code of practice for organisations developing or deploying AI
  • set clear expectations for the use of automated decision-making systems, starting in recruitment and public services
  • ensure AI foundation models are being developed lawfully
  • ensure police are using facial recognition technology fairly and proportionately.

Many AI tools are still in the early stages of maturity – and while they may seem simple to implement, they can introduce novel risks when used to address complex social challenges. I urge organisations to take appropriate care and use our guardrails – such as guidance, innovation services and DPIAs – to develop and deploy this technology responsibly and on a foundation of trust, protecting your customers and your reputation. 

Given AI makes the headlines every week, we must remain agile - ready to support Government with their AI agenda and pivot swiftly between trends and challenges where careful scrutiny is required. You may see our focus shift in response to emerging risks – for example, we recently asked Meta to review its plans to use Facebook and Instagram user data to train generative AI. 

Agentic AI is the next chapter of the AI evolution, with systems becoming increasingly capable of acting autonomously. Whereas generative AI might be able to write your shopping list, an agentic AI might be able to access an online shop and use your ID and payment details to place an order. We will be exploring the data protection implications of agentic AI so stay tuned for an upcoming Tech Futures report – as well as consultations in the Autumn as we put our strategy into action.  

Innovation and growth go hand in hand with keeping people’s data safe. But we need the support of everyone in this room to fully unlock the UK’s potential as a privacy-respectful place to develop and use AI products and services. I hope our event today inspires you and I’m looking forward to listening to our panellists in a few moments, who will bring their unique perspectives about the power of privacy when it comes to trust in AI.