Artificial intelligence (AI) is a priority area for the ICO because of its potential to pose high risks to individuals and their rights and freedoms. We believe public trust is paramount for the safe adoption of AI across sectors, and as a regulator we will continue to respond to the demand for more work in this space.
Our current areas of focus are:
- fairness in AI;
- dark patterns;
- AI and recommender systems; and
- privacy and confidentiality in explainable AI.
This page details our work on AI to date. If you would like to contact us, please email us.
Guidance on AI and data protection (2020) – this guidance covers best practice for data protection-compliant AI and how data protection law applies to AI systems that process personal data.
Explaining decisions made with AI (2020) – this co-badged guidance by the ICO and The Alan Turing Institute (The Turing) aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.
AI and data protection risk toolkit (2022) – our AI toolkit is designed to provide further practical support to organisations assessing the risks to individual rights and freedoms caused by their own AI systems.
Toolkit for organisations considering using data analytics (2020) – this toolkit helps organisations recognise some of the central risks to the rights and freedoms of individuals created by the use of data analytics.
Age Assurance for the Children’s Code (2021) – this Opinion is for providers of Information Society Services (ISS) likely to be accessed by children, and providers of age assurance products, services and applications that those ISS may use to conform with the code. Section 4.2 looks at age assurance and AI.
The use of live facial recognition technology by law enforcement in public places (2019) – this Opinion looks at how the ICO regulates the processing of personal data when law enforcement organisations use facial recognition technology in public spaces. It aims to guide law enforcement through all the stages of the processing.
The use of live facial recognition technology in public places (2021) – this Opinion looks at the use of live facial recognition technology (FRT) outside of law enforcement. It explains how data protection law applies to FRT and the assessments the ICO expects organisations to make before using it.
Department for Digital, Culture, Media & Sport consultation: "Data: a new direction" (2021) – the ICO’s response to the consultation on future data protection reform. Section 1.4 provides our response to reforms relating to AI and ML.
Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2021) – the ICO’s response to the AI Act.
Project ExplAIn interim report (2019) – a collaboration between the ICO and The Alan Turing Institute (The Turing). As part of this project, the ICO and The Turing conducted research to understand how organisations could explain AI decisions to the individuals affected.
Big data, artificial intelligence, machine learning and data protection (2017) – this discussion paper looks at the implications of big data, AI and ML for data protection, and explains the ICO’s views on these.
Gambling Commission (2021) – the Gambling Commission entered the Sandbox to explore the concept of a Single Customer View (‘SCV’). The SCV would allow existing data on individual player behaviours to be aggregated across all online gambling operators, driving better decision-making, action and evaluation to reduce gambling-related harms.
Greater London Authority (2021) – the Greater London Authority used the ICO Sandbox process to support the development and enhancement of SafeStats, an existing multi-agency data platform it hosts. This would facilitate a public health approach to violence reduction and align closely with the work of the London-based Violence Reduction Unit, helping to inform violence-related decision-making.
Novartis (2021) – the ultimate goal of the Novartis project is to make patient care easier and more efficient, for example by enabling clinicians to access data about their patients’ conditions remotely and to share information with their patients.
Tonic Analytics (2021) – Tonic Analytics’ primary focus is on the ethical use of innovative data analytics technology to improve road safety while also preventing and detecting crime.
Future Flow Research Inc (2020) – Future Flow Research Inc provides an analytics platform which monitors the flow of funds in the financial system, with the potential to combat financial crime.
Heathrow Airport Limited (2020) – Heathrow Airport’s Automation of the Passenger Journey programme aimed to streamline the passenger journey by using biometrics.
Onfido Limited (2020) – Onfido Limited has worked in the Sandbox to identify and mitigate bias present in the biometric identity verification technology it designed to enable its clients to prove that their customers are who they claim to be.
MeVitae executive summary of the audit report (2021) – MeVitae aims to develop artificial intelligence tools to mitigate bias and discrimination in the hiring process.
AI Working Groups
Regulators and AI Working Group – the ICO chairs an informal working group for regulators focusing on AI issues. The group was established on the principles of information sharing, co-ordination and harmonisation. It acts as a forum for the development of a collaborative and multilateral approach to AI regulation by existing UK regulators.
GPA Working Group on Ethics and Data Protection in AI – a permanent working group at the Global Privacy Assembly (GPA) tasked with promoting the principles of the GPA’s Declaration on Ethics and Data Protection in AI through a series of work packages. The ICO is lead rapporteur or co-rapporteur on work on AI and employment and on AI risk management. The GPA comprises more than 130 data protection authorities from around the world, of which 20 are represented in the AI working group. The Council of Europe, the Fundamental Rights Agency and the International Committee of the Red Cross are observers.