
At a glance

  • Anyone involved in the decision-making pipeline has a role to play in contributing to an explanation of a decision supported by an AI model’s result.
  • This includes what we have called the AI development team, as well as those responsible for how decision-making is governed in your organisation.
  • We recognise that every organisation has different structures for their AI development and governance teams, and in smaller organisations several of the functions we outline will be covered by one person.
  • Many organisations will outsource the development of their AI system. In this case, you as the data controller have the primary responsibility for ensuring that the AI system you use is capable of producing an explanation for the decision recipient.

Checklist

☐ We have identified the people in key roles across the decision-making pipeline and how they are responsible for contributing to an explanation of decisions supported by the AI system.

☐ We have ensured that different people along the decision-making pipeline are able to carry out their role in producing and delivering explanations, particularly those in AI development teams, those giving explanations to decision recipients, and our DPO and compliance teams.

☐ If we are buying the AI system from a third party, we know that we have the primary responsibility for ensuring that the AI system is capable of producing explanations.

In more detail

Who should participate in explanation extraction and delivery?

People involved in every part of the decision-making pipeline, including the AI model’s design and implementation processes, have a role to play in providing explanations to individuals who receive a decision supported by an AI model’s result.

In this section, we will describe the various roles in the end-to-end process of providing an explanation. In some cases, part of this process may not sit within your organisation, for example if you have procured the system from an external vendor. More information on this process is provided later in Part 3.

The roles discussed range from those involved in the initial decision to use an AI system to solve a problem and the teams building the system, to those using the output of the system to inform the final decision and those who govern how decision-making is done in your organisation. Depending on your organisation, the roles outlined below might be configured in different ways, or concentrated in just one or two people.

Please note that this is not an exhaustive list of all individuals that may be involved in contributing to an explanation for a decision made by an AI system. There may be other roles unique to your organisation or sector that are not outlined here. The roles listed below are the main ones we feel every organisation should consider when implementing an AI system to make decisions about individuals.

Overview of the roles involved in providing an explanation

Product manager: defines the product requirements for the AI system and determines how it should be managed, including the explanation requirements and the potential impacts of the system’s use on affected individuals. The product manager also has responsibilities throughout the AI system’s lifecycle: they must ensure that it is properly maintained, and that improvements are made where relevant. They also need to ensure that the system is procured and retired in compliance with all relevant legislation, including the GDPR and the DPA 2018.

AI development team: The AI development team performs several functions, including:

  • collecting, procuring and analysing the data that you input into your AI system, which must be representative, reliable, relevant, and up-to-date;
  • bringing in domain expertise to ensure the AI system is capable of delivering the types of explanations required. Domain experts could, for example, be doctors, lawyers, economists or engineers;
  • building and maintaining the data architecture and infrastructure that ensure the system performs as intended and that explanations can be extracted;
  • building, training and optimising the models you deploy in your AI system, prioritising interpretable methods (see the sketch after this list);
  • testing the model, deploying it, and extracting explanations from it; and
  • supporting implementers in deploying the AI system in practice.
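
The sketch below is purely illustrative and is not part of the guidance. It shows, under assumed placeholder data and feature names, one way a development team might prioritise an inherently interpretable model whose learned coefficients can be read directly when constructing an explanation. Your team or supplier may use entirely different tools and models.

```python
# Illustrative sketch only: an inherently interpretable model (logistic
# regression) whose coefficients can be read directly as part of an
# explanation. The data, labels and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
feature_names = ["credit_history_years", "existing_debt", "annual_income"]  # hypothetical
X = rng.normal(size=(300, 3))                        # placeholder application data
y = (0.8 * X[:, 2] - 0.6 * X[:, 1] > 0).astype(int)  # placeholder decision labels

model = LogisticRegression().fit(X, y)

# Each coefficient indicates the direction and relative strength of a
# feature's influence on the model's output, which the development team
# can document and pass on to implementers.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```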

Please note that the AI development team may sit within your organisation, or be part of another organisation if you purchased the system from a third party. If you procure a system from a third party, you still need to ensure that you understand how the system works and how you can extract the meaningful information necessary to provide an appropriate explanation.

Implementer: where there is a human in the loop (ie the decision is not fully automated), the implementer relies on the model that has been developed to supplement or complete a task in their everyday work. In order to extract an explanation, implementers either directly use the model, if it is inherently interpretable and simple, or use supplementary tools and methods that enable explanation, if it is not. These tools and methods provide implementers with information that represents components of the rationale behind the model’s results, such as relative feature importance. Implementers take this information and consider it together with other evidence to make a decision on how to proceed.
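
As a purely illustrative sketch, the example below shows one common way of producing the kind of relative feature importance information mentioned above, using scikit-learn’s permutation importance routine. The model, data and feature names are hypothetical placeholders; the tools your development team or supplier actually provides may differ.

```python
# Illustrative sketch only: surfacing relative feature importance for a
# trained model so that an implementer can weigh it alongside other
# evidence. The dataset, model and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # placeholder input data
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # placeholder decision labels
feature_names = ["income", "age", "tenure", "region_code"]  # hypothetical features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature contributed to
# the model's predictions; this is one way of producing the "relative
# feature importance" information described above.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```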

Where a system is developed by a third party vendor, you should ensure that they provide sufficient training and support so that your implementers are able to understand the model you are using. If you do not have this support in place, your implementers may not have the skills and knowledge to deploy the system responsibly and to provide accurate and context-sensitive explanations to your decision recipients.

Compliance teams, including DPO: these ensure that the development and use of the AI system comply with regulations and with your own policies and governance procedures. This includes compliance with data protection law, such as the expectation that AI-assisted decisions are explained to individuals affected by those decisions.

Senior management: this is the team with overall responsibility for ensuring that the AI system developed and used within your organisation, or procured from a third party, is appropriately explainable to the decision recipient.

We suggest that both the compliance teams (including the DPO) and senior management should expect assurances from the product manager that the system you are using provides the appropriate level of explanation to decision recipients. These assurances should give these roles a high-level understanding of the systems and the types of explanations they should and do produce. Additionally, there may be occasions when your DPO and/or compliance teams interact directly with decision recipients, for example if a complaint has been made. In these cases, they will need a more detailed understanding of how a decision has been reached, and they will need to be trained on how to convey this information appropriately to affected individuals.

Your AI system may also be subject to external audit, for example by the Information Commissioner’s Office (ICO), to assess whether your organisation is complying with data protection law. Data protection law includes the expectation that decisions made with AI are explained to the individuals affected by those decisions. During an audit you will need to produce all the documentation you’ve prepared and evidence of the testing you’ve undertaken to ensure that the AI system is able to provide the different types of explanation required.

As the focus of this guidance is on providing explanations to decision recipients, we will not go into detail about this here. However, if you would like more information on the documentation you are required to provide if you are subject to a GDPR audit, please read our Auditing Framework.

As you move along the decision-making pipeline, through the roles identified above, a certain amount of translation and exposition will be required. That is, you will need to translate the reasoning behind the statistical outputs of the AI system for different audiences. Likewise, you will have to organise the documented innovation processes, which have ensured that your system is accountable, safe, ethical and fair, and make them accessible to these different audiences. Within your organisation, this means the implementer, the DPO and compliance team, and senior management. Externally, it means translating what you have done from a technical point of view into language and reasoning that is clear and easily understandable to the decision recipient and the external auditor.

Other roles: the roles we’ve identified are generic ones. If they don’t fit your particular case, or you have others, then you should consider how your roles relate to the decision-making pipeline, and therefore to the task of providing an explanation.

What if we use an AI system supplied by a third party?

If you are sourcing your AI system, or significant parts of it, from a third party supplier, the functions and responsibilities may look different. Whether you buy the system or build it yourself, you as the data controller have the primary responsibility for ensuring that the AI system you use is capable of producing an appropriate explanation for the decision recipient. If the system you procure from a third party supplier is off the shelf and is not inherently explainable, you may need a supplementary model or technique alongside it to produce explanations.
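
Purely as an illustration of what a supplementary model might look like in practice, the sketch below trains a simple surrogate model to mimic the outputs of a hypothetical opaque, off-the-shelf system, so that approximate, human-readable rules can still be extracted. The vendor model, data and feature names are stand-ins, not a recommendation of any particular technique; the supplementary models and techniques covered by this guidance are discussed in ‘Explaining AI in Practice’.

```python
# Illustrative sketch only: a simple surrogate model trained alongside a
# hypothetical opaque, off-the-shelf system so that approximate,
# human-readable rules can still be produced. All names are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))              # placeholder input data
y = (X[:, 0] * X[:, 1] > 0).astype(int)     # placeholder outcomes
feature_names = ["feature_a", "feature_b", "feature_c"]  # hypothetical names

# Stand-in for the procured system whose internals you cannot inspect.
vendor_model = GradientBoostingClassifier(random_state=1).fit(X, y)
vendor_predictions = vendor_model.predict(X)

# A shallow decision tree fitted to the vendor model's outputs; its rules
# give an approximate account of how the procured system behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, vendor_predictions)
print(export_text(surrogate, feature_names=feature_names))
```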

Further reading

More information on supplementary models and techniques is in ‘Explaining AI in Practice’.