Increasingly, organisations are using artificial intelligence (AI) to support, or to make, decisions about individuals. If this is something you do, or something you are thinking about, this guidance is for you. It will help you explain the decisions you make to the individuals affected.

This guidance is modular. It comprises three parts. The parts relate and refer to each other, but each has a unique purpose that helps you with a different aspect of explaining decisions you make using AI systems. Depending on your experience, level of expertise, and the make-up of your organisation, some parts may be more relevant to you than others. You can pick and choose the parts that are most useful for you.

In brief

What is the guidance made up of?

Each part of the guidance is targeted at a slightly different audience within your organisation.

  • The basics of explaining AI [link to final consultation document] defines the key concepts and outlines a number of different explanation types. It sets out four core principles that should inform your overarching approach, so that the fully automated or AI-enabled decisions you make can be appropriately explained to individuals. This part also sets out the applicable legal frameworks and discusses some relevant benefits and risks. Everyone will find this useful, but particularly your DPO and compliance team.
  • Explaining AI in practice [link to final consultation document] helps you with the practicalities of providing explanations of these decisions to individuals. It shows you how to select the appropriate explanation for your sector and use case, how to choose an appropriately explainable model, and which tools you can use to extract an explanation from less interpretable models. This part will primarily be helpful for the technical teams in your organisation, although your DPO and compliance team will also find it helpful.
  • What explaining AI means for your organisation [link to final consultation document] covers the roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. This part is primarily aimed at your organisation’s senior management team, although your DPO and compliance team will also find it useful.

Why is this guidance from the ICO and The Alan Turing Institute?

The ICO is responsible for overseeing data protection in the UK, and The Alan Turing Institute (“The Turing”) is the UK’s national institute for data science and artificial intelligence.

In October 2017, Professor Dame Wendy Hall and Jérôme Pesenti published their independent review on growing the AI industry in the UK. The second of the report’s recommendations to support uptake of AI was for the ICO and The Turing to:

“…develop a framework for explaining processes, services and decisions delivered by AI, to improve transparency and accountability.”

In April 2018, the government published its AI Sector Deal. The deal tasked the ICO and The Turing to:

“…work together to develop guidance to assist in explaining AI decisions.”

The independent report and the Sector Deal form part of ongoing efforts by national and international regulators and governments to address the wider implications of transparency and fairness in AI decisions affecting individuals, organisations and wider society.

What is the status of this guidance?

This guidance is issued in response to the commitment in the Government’s AI Sector Deal, but it is not a statutory code of practice under the Data Protection Act 2018.

This is practical guidance that sets out good practice for explaining to individuals decisions made using AI systems that process personal data. It clarifies how the data protection provisions associated with explaining AI decisions apply, and highlights other relevant legal regimes outside the ICO’s remit.

Other resources