The principles to follow
At a glance
To ensure that the decisions you make using AI are explainable, you should follow four principles:
- be transparent;
- be accountable;
- consider the context you are operating in; and
- reflect on the impact of your AI system on the individuals affected, as well as wider society.
In more detail
- Why are principles important?
- What are the principles?
- Be transparent
- Be accountable
- Consider context
- Reflect on impacts
- How do the principles relate to the explanation types?
Why are principles important?
AI-assisted decisions are not unique to one sector or to one type of organisation. They are increasingly used in all areas of life. This guidance recognises this, so you can use it no matter what your organisation does. The principles-based approach of this guidance gives you a broad steer on what to think about when explaining AI-assisted decisions to individuals. Please note that these principles relate to providing explanations of AI-assisted decisions to individuals, and complement the data protection principles in the GDPR.
The first two principles – be transparent and be accountable – share their names with GDPR principles, as they are an extension of the requirements under the GDPR. This means that you must still comply with your obligations under the GDPR; this guidance goes further, to help you follow ‘best practice’ when explaining AI decisions.
What are the principles?
Each principle has two key aspects, detailing what the principle is about and what it means in practice. We also signpost the parts of the guidance that support you in acting in accordance with the different aspects of each principle.
Be transparent
What is this principle about?
The principle of being transparent is an extension of the transparency aspect of principle (a) in the GDPR (lawfulness, fairness and transparency).
In data protection terms, transparency means being open and honest about who you are, and how and why you use personal data.
Being transparent about AI-assisted decisions builds on these requirements. It is about making your use of AI for decision-making obvious and appropriately explaining the decisions you make to individuals in a meaningful way.
What are the key aspects of being transparent?
Raise awareness:
- Be open and candid about:
- your use of AI-enabled decisions;
- when you use them; and
- why you choose to do this.
- Proactively make people aware of a specific AI-enabled decision concerning them, in advance of making the decision.
Meaningfully explain decisions:
Don’t just give people any explanation of AI-enabled decisions – give them:
- a truthful and meaningful explanation;
- written or presented appropriately; and
- delivered at the right time.
(This is closely linked with the context principle.)
How can this guidance help us be transparent?
To help with raising awareness about your use of AI decisions read:
- the Policies and procedures section of ‘What explaining AI means for your organisation’; and
- the Proactive engagement section in Task 6 of ‘Explaining AI in practice’.
To support you with meaningfully explaining AI decisions read:
- the Policies and procedures section of ‘What explaining AI means for your organisation’;
- Building your system to aid in a range of explanation types in Task 3 of ‘Explaining AI in practice’;
- Selecting your priority explanations in Task 1 of ‘Explaining AI in practice’; and
- Explanation timing in Task 6 of ‘Explaining AI in practice’.
Be accountable
What is this principle about?
The principle of being accountable is derived from the accountability principle in the GDPR.
In data protection terms, accountability means taking responsibility for complying with the other data protection principles, and being able to demonstrate that compliance. It also means implementing appropriate technical and organisational measures, and data protection by design and default.
Being accountable for explaining AI-assisted decisions concentrates these dual requirements on the processes and actions you carry out when designing (or procuring/outsourcing) and deploying AI models.
It is about ensuring appropriate oversight of your AI decision systems, and being answerable to others in your organisation, to external bodies such as regulators, and to the individuals you make AI-assisted decisions about.
What are the key aspects of being accountable?
Assign responsibility:
- Identify those within your organisation who manage and oversee the ‘explainability’ requirements of an AI decision system, and assign ultimate responsibility for this.
- Ensure you have a designated and capable human point of contact for individuals to query or contest a decision.
Justify and evidence:
- Actively consider and make justified choices about how to design and deploy AI models that are appropriately explainable to individuals.
- Take steps to prove that you made these considerations, and that they are present in the design and deployment of the models themselves.
- Show that you provided explanations to individuals.
How can this guidance help us be accountable?
To help with assigning responsibility for explaining AI decisions read:
- the Organisational roles and Policies and procedures sections of ‘What explaining AI means for your organisation’.
To support you with justifying the choices you make about your approach to explaining AI decisions read:
- all the tasks in ‘Explaining AI in practice’.
To help you evidence this read:
- the Policies and procedures and Documentation sections of ‘What explaining AI means for your organisation’.
Consider context
What is this principle about?
There is no one-size-fits-all approach to explaining AI-assisted decisions. The principle of considering context underlines this.
It is about paying attention to several different, but interrelated, elements that can have an effect on explaining AI-assisted decisions, and managing the overall process.
This is not a one-off consideration. It’s something you should think about at all stages of the process, from concept to deployment and presentation of the explanation to the decision recipient.
There are therefore several types of context that we address in this guidance. These are outlined in more detail in the ‘contextual factors’ section above.
What are the key aspects of considering context?
Choose appropriate models and explanations:
When planning on using AI to help make decisions about people, you should consider:
- the setting in which you will do this;
- the potential impact of the decisions you make;
- what an individual should know about a decision, so you can choose an appropriately explainable AI model; and
- prioritising delivery of the relevant explanation types.
Tailor governance and explanation:
Your governance of the ‘explainability’ of AI models should be:
- robust and reflective of best practice; and
- tailored to your organisation and the particular circumstances and needs of each decision recipient.
How can this guidance help us consider context?
To support your choice of appropriate models and explanations for the AI decisions you make read:
- ‘Explaining AI in practice’; and
- the Contextual factors section of this document.
To help you tailor your governance of the explainability of AI decision systems you use read:
- the Organisational roles and Policies and procedures sections of ‘What explaining AI means for your organisation’.
Reflect on impacts
What is this principle about?
In making decisions and performing tasks that have previously required the thinking and reasoning of responsible humans, AI systems are increasingly serving as trustees of human decision-making. However, individuals cannot hold these systems directly accountable for the consequences of their outcomes and behaviours.
Reflecting on the impacts of your AI system helps you explain to individuals affected by its decisions that the use of AI will not harm or impair their wellbeing.
This means asking and answering questions about the ethical purposes and objectives of your AI project at the initial stages.
You should then revisit and reflect on the impacts identified in the initial stages of the AI project throughout the development and implementation stages. If any new impacts are identified, you should document them, alongside any mitigating factors you implement where relevant. This will help you explain to decision recipients what impacts you have identified and how you have reduced any potentially harmful effects as far as possible.
What are the key aspects of reflecting on impacts?
Individual wellbeing:
Think about how to build and implement your AI system in a way that:
- fosters the physical, emotional and mental integrity of affected individuals;
- ensures their abilities to make free and informed decisions about their own lives;
- safeguards their autonomy and their power to express themselves;
- supports their abilities to flourish, to fully develop themselves, and to pursue their interests according to their own freely determined life plans;
- preserves their ability to maintain a private life independent from the transformative effects of technology; and
- secures their capacities to make well-considered, positive and independent contributions to their social groups and to the shared life of the community, more generally.
Wellbeing of wider society:
Think about how to build and implement your AI system in a way that:
- safeguards meaningful human connection and social cohesion;
- prioritises diversity, participation and inclusion;
- encourages all voices to be heard and all opinions to be weighed seriously and sincerely;
- treats all individuals equally and protects social equity;
- uses AI technologies as an essential support for the protection of fair and equal treatment under the law;
- utilises innovation to empower and to advance the interests and wellbeing of as many individuals as possible; and
- anticipates the wider impacts of the AI technologies you are developing by thinking about their ramifications for others around the globe, for the biosphere as a whole and for future generations.
How can this guidance help us reflect on impacts?
For help with reflecting on impacts read:
- the different types of explanation above; and
- ‘Explaining AI in practice’.
How do the principles relate to the explanation types?
The principles are important because they underpin how you should explain AI-assisted decisions to individuals. Here we set out how you can put them into practice by directly applying them through the explanations you use:
| Principle | AI explanation and relevant considerations |
| --- | --- |
| Be transparent | Rationale explanation. Data explanation: what data did you use to train the model? |
| Be accountable | Responsibility explanation. |
| Consider context | See Task 1 of ‘Explaining AI in practice’ for more information on how context matters when choosing which explanation type to use, and which AI model. See the section above on contextual factors to see how these can help you choose which explanation types to prioritise in presenting your explanation to the decision recipient. |
| Reflect on impacts | Fairness explanation. Safety and performance explanation. Impact explanation. |