The ICO exists to empower you through information.

Artificial intelligence (AI) is a priority area for the ICO because of its potential to pose a high risk to individuals and their rights and freedoms. We believe public trust is paramount for the safe adoption of AI across sectors, and as a regulator we will continue to respond to the demand for more work in this space.

Our current areas of focus are:

  • fairness in AI;
  • dark patterns;
  • AI-as-a-service;
  • AI and recommender systems;
  • biometric data and biometric technologies; and
  • privacy and confidentiality in explainable AI.

This page details our work on AI to date, including:

  • guidance and practical resources;
  • thought pieces;
  • reports; and
  • our work with other regulators.

If you would like to contact us, please email us.

Guidance and practical resources

Guidance

Guidance on AI and data protection – this guidance covers best practice for data protection-compliant AI and explains how data protection law applies to AI systems that process personal data.

Explaining decisions made with AI – this co-badged guidance by the ICO and The Alan Turing Institute (The Turing) aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.

How to use AI and personal data appropriately and lawfully – these top tips provide a brief introduction to some of the most important considerations for organisations using AI and personal data. We also answer some of the questions we frequently receive about AI and data protection.

Tools

AI and data protection risk toolkit – our AI toolkit is designed to provide further practical support to organisations assessing the risks to individual rights and freedoms caused by their own AI systems.

Toolkit for organisations considering using data analytics – this toolkit helps organisations recognise some of the central risks to the rights and freedoms of individuals created by the use of data analytics.

Thought pieces

Opinions

Age Assurance for the Children’s Code (2021) – this Opinion is for providers of Information Society Services (ISS) likely to be accessed by children, and providers of age assurance products, services and applications that those ISS may use to conform with the code. Section 4.2 looks at age assurance and AI.

The use of live facial recognition technology by law enforcement in public places (2019) – this Opinion looks at how the ICO regulates the processing of personal data when law enforcement organisations use facial recognition technology in public spaces. It aims to guide law enforcement through all the stages of the processing.

The use of live facial recognition technology in public places (2021) – this Opinion looks at the use of live facial recognition technology (FRT) outside of law enforcement. It explains how data protection law applies to FRT and the assessments the ICO expects organisations to make before using it.

Consultation responses

Response to the consultation series on generative AI – as part of the ICO’s work on AI regulation, we have responded quickly to emerging developments in generative AI, engaging with generative AI developers, adopters and affected stakeholders. In April 2023, we set out the questions that developers and deployers needed to ask. In January 2024, we launched our five-part generative AI consultation series, which this consultation response summarises.

Department for Digital, Culture, Media & Sport consultation: "Data: a new direction" (2021) – the ICO’s response to the consultation on future data protection reform. Section 1.4 provides our response to reforms relating to AI and machine learning (ML).

Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2021) – the ICO’s response to the proposed EU Artificial Intelligence Act.

Reports

ICO reports

Project ExplAIn interim report (2019) – a collaboration between the ICO and The Alan Turing Institute (The Turing). As part of this project, the ICO and The Turing conducted research to understand how organisations could explain AI decisions to the individuals affected.

Big data, artificial intelligence, machine learning and data protection (2017) – this discussion paper looks at the implications of big data, AI and ML for data protection, and explains the ICO’s views on these.

Sandbox reports

Good With (2023) – Good With’s Financial Virtual Assistant (FVA) app combines various personal data sources to produce a ‘financial readiness score’. Good With hopes this readiness score will help to provide the FVA’s users (18-24-year-olds), who might struggle to obtain credit via traditional methods, with fairer access to financial products and services. The readiness score will be informed by insights drawn, using AI, from the user’s interactions with the FVA’s chatbot, their open banking data and their progression through a bespoke educational pathway.

FlyingBinary (2022) – FlyingBinary is a ‘deep tech’ company which provides innovative products and services in the information technology and online sector, including artificial intelligence (AI). FlyingBinary entered the Sandbox to develop an online service which seeks to assist with the traditional mental healthcare of patients with pathologies such as eating disorders.

Tonic Analytics (2021) – Tonic Analytics’ primary focus is on the ethical use of innovative data analytics technology to improve road safety while also preventing and detecting crime.

Onfido Limited (2020) – Onfido Limited has worked in the Sandbox to identify and mitigate bias present in the biometric identity verification technology it designed to enable its clients to prove that their customers are who they claim to be.

Audit reports

MeVitae executive summary of the audit report (2021) – MeVitae aims to develop artificial intelligence tools that mitigate bias and discrimination in the hiring process.

Our work with other regulators

AI Working Groups

Regulators and AI Working Group – the ICO chairs an informal working group for regulators focusing on AI issues. The group was established on the principles of information sharing, co-ordination and harmonisation, and acts as a forum for developing a collaborative and multilateral approach to AI regulation by existing UK regulators.

GPA Working Group on Ethics and Data Protection in AI – a permanent working group of the Global Privacy Assembly (GPA) tasked with promoting the principles of the GPA’s Declaration on Ethics and Data Protection in AI through a series of work packages. The ICO is lead rapporteur or co-rapporteur on the work covering AI and employment and AI risk management. The GPA comprises more than 130 data protection authorities from around the world, 20 of which are represented in the AI working group. The Council of Europe, the Fundamental Rights Agency and the International Committee of the Red Cross are observers.