
At a glance

This guidance covers what we think is best practice for data protection-compliant AI, as well as how we interpret data protection law as it applies to AI systems that process personal data. The guidance is not a statutory code. It contains advice on how to interpret relevant law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate.

In detail

Why have you produced this guidance?

We see new uses of artificial intelligence (AI) every day, from healthcare and recruitment to commerce and beyond.

We understand the benefits that AI can bring to organisations and individuals, but there are risks too. That’s why AI is one of our top three strategic priorities, why enabling good practice in AI is one of our regulatory priorities over the coming months, and why we decided to develop a framework for auditing AI compliance with data protection obligations.

The framework:

  • gives us a clear methodology to audit AI applications and ensure they process personal data fairly, lawfully and transparently;
  • ensures that the necessary measures are in place to assess and manage risks to rights and freedoms that arise from AI; and
  • supports the work of our investigation and assurance teams when assessing the compliance of organisations using AI.

As well as using the framework to guide our own audit and enforcement activity, we also wanted to share our thinking behind it. The framework therefore has three distinct outputs:

  1. Auditing tools and procedures which our investigation and assurance teams will use when assessing the compliance of organisations using AI. The specific auditing and investigation activities they undertake vary, but can include off-site checks, on-site tests and interviews, and in some cases the recovery and analysis of evidence, including AI systems themselves.
  2. This detailed guidance on AI and data protection for organisations, which outlines our thinking.
  3. A toolkit designed to provide further practical support to organisations auditing the compliance of their own AI systems (forthcoming).

This guidance covers what we think is best practice for data protection-compliant AI, as well as how we interpret data protection law as it applies to AI systems that process personal data.

This guidance is not a statutory code. It contains advice on how to interpret relevant law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate. There is no penalty if you fail to adopt good practice recommendations, as long as you find another way to comply with the law.

Further reading outside this guidance

ICO Technology Strategy 2018-2021

What do you mean by ‘AI’?

Data protection law does not use the term ‘AI’, so none of your legal obligations depend on exactly how it is defined. However, it is useful to understand broadly what we mean by AI in the context of this guidance. AI has a variety of meanings.

We use the umbrella term ‘AI’ because it has become a standard industry term for a range of technologies. One prominent area of AI is ‘machine learning’ (ML), which is the use of computational techniques to create (often complex) statistical models using (typically) large quantities of data. Those models can be used to make classifications or predictions about new data points.
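
A minimal sketch may help make this concrete. The example below (in Python, using the scikit-learn library; the data and variable names are entirely made-up, illustrative assumptions) shows the basic ML pattern described above: a statistical model is created from existing data, and then used to classify a new data point.

```python
# A minimal sketch of the ML pattern described in the text: fit a
# statistical model to existing data, then use it to classify a new,
# unseen data point. The data is synthetic and purely illustrative.
from sklearn.linear_model import LogisticRegression

# Historical examples: each row is [age, income_in_thousands];
# each label is 1 (loan repaid) or 0 (loan defaulted).
X_train = [[25, 30], [40, 80], [35, 60], [22, 20], [50, 95], [28, 25]]
y_train = [0, 1, 1, 0, 1, 0]

# 'Training' creates a (here, very simple) statistical model from the data.
model = LogisticRegression().fit(X_train, y_train)

# The model can then make a classification about a new data point.
new_applicant = [[33, 55]]
print(model.predict(new_applicant))        # predicted class, eg [1]
print(model.predict_proba(new_applicant))  # estimated class probabilities
```

In a real system, such as one classifying credit risk, the model would typically be far more complex and trained on much larger quantities of personal data; that scale and complexity is where many of the data protection challenges discussed in this guidance arise.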

While not all AI involves ML, most of the recent interest in AI is driven by ML in some way, whether in image recognition, speech-to-text, or classifying credit risk. This guidance therefore focuses on the data protection challenges that ML-based AI may present, while acknowledging that other kinds of AI may give rise to other data protection challenges.

You may already process personal data in the context of creating statistical models and using those models to make predictions about people. Much of this guidance will still be relevant to you even if you do not class these activities as ML or AI. Where there are important differences between types of AI (for example, between simple regression models and deep neural networks), we refer to these explicitly.

How does this guidance relate to other ICO work on AI?

This guidance is designed to complement existing ICO resources, including our Big Data report and the ‘Explaining decisions made with AI’ guidance discussed below.

The Big Data report provided a strong foundation for understanding the data protection implications of these technologies. As noted in the Commissioner’s foreword to the 2017 edition, this is a complicated and fast-developing area. New considerations have arisen in the last three years, both in terms of the risks AI poses to individuals, and the organisational and technical measures that can be taken to address those risks. Through our engagement with stakeholders, we gained additional insights into how organisations are using AI on the ground, which go beyond those presented in the 2017 report.

Another significant challenge raised by AI is explainability. As part of the government’s AI Sector Deal, we collaborated with the Alan Turing Institute (The Turing) to produce guidance on how organisations can best explain their use of AI to individuals. This resulted in the ‘Explaining decisions made with AI’ guidance, published in May 2020.

While the Explaining decisions made with AI guidance already covers the challenge of AI explainability for individuals in substantial detail, this guidance includes some additional considerations about AI explainability within the organisation, eg for internal oversight and compliance. The two pieces of guidance are complementary, and we recommend reading them together.

What is a risk-based approach to AI?

Taking a risk-based approach means:

  • assessing the risks to the rights and freedoms of individuals that may arise when you use AI; and
  • implementing appropriate and proportionate technical and organisational measures to mitigate these risks.

These are general requirements in data protection law. They do not mean you can ignore the law if the risks are low, and they may mean you have to stop a planned AI project if you cannot sufficiently mitigate those risks.

To help you integrate this guidance into your existing risk management process, we have organised it into several major risk areas. For each risk area, we describe:

  • the risks involved;
  • how AI may increase their likelihood and/or impact; and
  • some possible measures which you could use to identify, evaluate, minimise, monitor and control those risks.
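
As a purely illustrative sketch (the structure, field names and example entry below are our own assumptions, not a format prescribed by this guidance), you could record these elements for each risk area in a simple risk register, for example:

```python
# A hypothetical, minimal risk-register entry for an AI project, mirroring
# the elements described in the text: the risk, how AI affects its
# likelihood and/or impact, and candidate measures to control it.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    risk: str            # the risk to individuals' rights and freedoms
    ai_effect: str       # how AI may increase its likelihood and/or impact
    likelihood: str      # eg 'low', 'medium' or 'high'
    impact: str          # eg 'low', 'medium' or 'high'
    measures: list[str] = field(default_factory=list)  # mitigating controls

entry = AIRiskEntry(
    risk="Discriminatory outcomes in automated credit decisions",
    ai_effect="Model may learn proxies for protected characteristics "
              "from historical training data",
    likelihood="medium",
    impact="high",
    measures=[
        "Test model outputs across demographic groups before deployment",
        "Monitor the distribution of decisions in production",
    ],
)
print(entry.risk, "->", entry.measures)
```

However you choose to record them, the aim is to make each risk, its AI-specific drivers and your chosen mitigations explicit, reviewable and easy to integrate into your existing risk management process.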

The technical and organisational measures included are those we consider good practice in a wide variety of contexts. However, since many of the risk controls that you may need to adopt are context-specific, we cannot include an exhaustive or definitive list.

This guidance covers both the data protection risks that are specific to AI and the implications of those risks for governance and accountability. Regardless of whether you are using AI, you should have accountability measures in place.

However, adopting AI applications may require you to re-assess your existing governance and risk management practices. AI applications can exacerbate existing risks, introduce new ones, or generally make risks more difficult to assess or manage. Decision-makers in your organisation should therefore reconsider your organisation’s risk appetite in light of any existing or proposed AI applications.

Each section of this guidance examines one of these AI challenge areas in depth and explores the associated risks, processes, and controls.

Is this guidance a set of AI principles?

This guidance does not provide generic ethical or design principles for the use of AI. While there may be overlaps between ‘AI ethics’ and data protection (with some proposed ethics principles already reflected in data protection law), this guidance is focused on data protection compliance.

Although data protection does not dictate how AI designers should do their jobs, if you use AI to process personal data, you need to comply with the principles of data protection by design and by default.

Certain design choices are more likely to result in AI systems which infringe data protection in one way or another. This guidance will help designers and engineers understand those choices better, so you can design high-performing systems whilst still protecting the rights and freedoms of individuals.

It is worth noting that our work focuses exclusively on the data protection challenges introduced or heightened by AI. More general data protection considerations are therefore not addressed in this guidance, except in so far as they relate to and are challenged by AI. Nor does it cover AI-related challenges which are outside the remit of data protection.

What legislation applies?

This guidance deals with the challenges that AI raises for data protection. The most relevant piece of UK legislation is the Data Protection Act 2018 (DPA 2018).

The DPA 2018 sets out the UK’s data protection framework, alongside the General Data Protection Regulation (GDPR). Please note that from January 2021, you should read references to the GDPR as references to the equivalent articles in the UK GDPR. The DPA 2018 comprises the following data protection regimes:

  • Part 2 – supplements and tailors the GDPR in the UK;
  • Part 3 – sets out a separate regime for law enforcement authorities; and
  • Part 4 – sets out a separate regime for the three intelligence services.

Most of this guidance will apply regardless of which part of the DPA applies to your processing. However, where there are relevant differences between the requirements of the regimes, these are explained in the text.

You should also review our guidance on how Brexit impacts data protection law.

The impacts of AI on areas of ICO competence other than data protection, notably Freedom of Information, are not considered here.

Further reading outside this guidance

See our guide to data protection.

If you need more detail on data protection and Brexit, see our FAQs.

How is this guidance structured?

This guidance is divided into four parts covering different data protection principles and rights.

Part one addresses issues that primarily relate to the accountability principle. This requires you to be responsible for complying with the data protection principles and for demonstrating that compliance. Sections in this part deal with the AI-specific implications of accountability including data protection impact assessments (DPIAs), and controller / processor responsibilities.

Part two covers the lawfulness, fairness, and transparency of processing personal data in AI systems, with sections covering lawful bases for processing personal data in AI systems, assessing and improving AI system performance and mitigating potential discrimination to ensure fair processing.

Part three covers the principles of security and data minimisation in AI systems.

Part four covers compliance with individual rights, including rights relating to solely automated decisions. In particular, part four covers how you can ensure meaningful human input in non-automated or partly-automated decisions, and meaningful human review of solely automated decisions.

Who is this guidance for?

This guidance covers best practices for data protection-compliant AI. There are two broad audiences.

First, those with a compliance focus, including:

  • data protection officers (DPOs);
  • general counsel;
  • risk managers;
  • senior management; and
  • the ICO’s own auditors – in other words, we will use this guidance as a basis to inform our audit functions under the data protection legislation.

Second, technology specialists, including:

  • machine learning developers and data scientists;
  • software developers / engineers; and
  • cybersecurity and IT risk managers.

The guidance is split into four parts that cover areas of data protection legislation you need to consider.

While this guidance is written to be accessible to both audiences, some parts are aimed primarily at those in either compliance or technology roles and are signposted accordingly at the start of each section as well as in the text.

Parts one and four are primarily aimed at those working in a compliance role. However, they do contain some technical details which may need to be discussed with relevant technology specialists in your organisation.

Parts two and three contain both legal and substantial technical material, and you may therefore benefit from working through them alongside relevant technology experts in your organisation.

How should we use this guidance?

In each section, we discuss what you must do to comply with data protection law as well as what you should do as good practice. This distinction is generally marked using ‘must’ when it relates to compliance with data protection law and using ‘should’ where we consider it good practice but not essential to comply with the law. Discussion of good practice is designed to help you if you are not sure what to do, but it is not prescriptive. It should give you enough flexibility to develop AI systems which conform to data protection law in your own way, taking a proportionate and risk-based approach.

The guidance assumes familiarity with key data protection terms and concepts. We also discuss in more detail data protection-related terms and concepts where it helps to explain the risks that AI creates and exacerbates.

The guidance also assumes familiarity with AI-related terms and concepts. We have included a glossary at the end of the guidance as a quick reference point for concepts and measures included in the main text.

The guidance focuses on specific risks and controls to ensure your AI system is compliant with data protection law and provides safeguards for individuals’ rights and freedoms. It is not intended as an exhaustive guide to data protection compliance. You need to make sure you are aware of all your obligations and you should read this guidance alongside our other guidance. Your DPIA process should incorporate measures to comply with your data protection obligations generally, as well as conform to the specific standards in this guidance.

---

Give your feedback

We will continue to develop this guidance to ensure it stays relevant.

We would like to continue to consult with those using the guidance to understand how it works in practice and ensure it remains relevant and consistent with emerging developments.

We are also interested in what tools the ICO could create to complement the guidance and support you to implement it in practice.

If you would like to contribute to future consultations please provide your details below. For information on what we do with your personal data, see our privacy notice.