Latest updates
15 March 2023 - This is a new chapter that combines new and existing content.
The existing content was extracted from the former chapter titled ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’. The new content includes information on:
- Data protection’s approach to fairness, how it applies to AI and a non-exhaustive list of legal provisions to consider.
- The difference between fairness, algorithmic fairness, bias and discrimination.
- High-level considerations when evaluating fairness and its inherent trade-offs.
- Processing personal data for bias mitigation.
- Technical approaches to mitigate algorithmic bias.
- How solely automated decision-making and its safeguards link to fairness, and key questions to ask when considering Article 22 of the UK GDPR.
At a glance
This section explains how you should interpret data protection’s fairness principle as it applies to AI. It highlights some of the provisions that can serve as a roadmap towards compliance. It sets out why fairness in data protection is not just about discrimination, and why the safeguards that Article 22 places on solely automated decision-making are important for fairness.
Fairness and discrimination are also concepts in other legislation and regulatory frameworks relevant to the use of AI systems, in particular the Equality Act 2010. This guidance only covers these two concepts in relation to data protection law. You will have other obligations in relation to fairness and discrimination that you need to consider in addition to this guidance. You should read this in conjunction with Annex A, which contains organisational and technical good practice measures to mitigate unfairness.
Who is this section for?
This section is aimed at senior management and those in compliance-focused roles, including DPOs, who are accountable for the governance and data protection risk management of an AI system. It will also be useful for technical specialists who are using, modifying, deploying or otherwise engaging with AI systems that process personal data, as well as for members of the public who want to understand how the fairness principle interacts with AI.
In detail
- How does data protection approach fairness?
- What does data protection fairness mean at a high level?
- How does data protection fairness apply in an AI context?
- What about fairness, bias and discrimination?
- Is AI the best solution to begin with?
- What is the impact of Article 22 of the UK GDPR on fairness?
How does data protection approach fairness?
Fairness is a key principle of data protection and an overarching obligation when you process personal data. You must use personal data fairly to comply with various parts of the legislation, including Article 5(1)(a) of the UK GDPR and Section 2(1)(a) of the Data Protection Act 2018, as well as Part 3 and Part 4 of that Act.
In simple terms, fairness means you should only process personal data in ways that people would reasonably expect and not use it in any way that could have unjustified adverse effects on them. You should not process personal data in ways that are unduly detrimental, unexpected or misleading to the individuals concerned.
If you use an AI system to infer data about people, you need to ensure that the system is sufficiently statistically accurate and avoids discrimination. This is in addition to considering individuals’ reasonable expectations, in order for this processing to be fair.
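For example, the following is a minimal sketch of how you might compare a system’s statistical accuracy across demographic groups before deployment. The column names, data and the 5% tolerance are illustrative assumptions, not requirements set by this guidance.

```python
# A minimal sketch of checking statistical accuracy per demographic group.
# Column names ("y_true", "y_pred", "group") and the 5% tolerance are
# illustrative assumptions, not requirements drawn from the guidance.
import pandas as pd

def accuracy_by_group(results: pd.DataFrame) -> pd.Series:
    """Return the proportion of correct predictions for each group."""
    correct = results["y_pred"] == results["y_true"]
    return correct.groupby(results["group"]).mean()

results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["a", "a", "a", "a", "b", "b", "b", "b"],
})

per_group = accuracy_by_group(results)
print(per_group)

# Flag a large gap between groups for further investigation.
if per_group.max() - per_group.min() > 0.05:
    print("Accuracy gap exceeds tolerance; investigate before deployment.")
```

A check like this is only a starting point; which metric and tolerance are appropriate will depend on your context and the impact of the decisions involved.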
Any processing of personal data using AI that leads to unjust discrimination between people will violate the fairness principle. This is because data protection aims to protect individuals’ rights and freedoms with regard to the processing of their personal data, not just their information rights. This includes the right to privacy, but also the right to non-discrimination. The principle of fairness appears across data protection law, both explicitly and implicitly. More specifically, fairness relates to:
- how you go about the processing; and
- the outcome of the processing (ie the impact it has on individuals).
Depending on your context, you may also have other sector-specific obligations about fairness, statistical accuracy or discrimination that you need to consider alongside your data protection obligations. You will also need to consider your Equality Act 2010 obligations: for more information you should refer to the Equality and Human Rights Commission (EHRC). If you need to process data in a certain way to meet those obligations, data protection does not prevent you from doing so.
We are likely to work closely with other regulators and experts to assess the fairness of outcomes and whether any adverse effects on individuals are justified. For example, the Financial Conduct Authority expects organisations to treat customers fairly. Similarly, the Consumer Rights Act 2015, which the Competition and Markets Authority oversees, requires businesses entering into contracts with consumers to ensure contract terms and notices are fair.
Further reading outside this guidance
For more on the role of fairness in data protection see ‘Data Protection and the Role of Fairness’.
What does data protection fairness mean at a high level?
Article 5(1)(a) of the UK GDPR requires that you process personal data “lawfully, fairly and in a transparent manner”. In brief:
- “lawfully” means your processing must satisfy an Article 6 lawful basis (and, if required, an Article 9 condition), as well as being lawful in more general terms;
- “fairly” means your processing should not lead to unjustified adverse outcomes for individuals, and should be within their reasonable expectations; and
- “in a transparent manner” means you must properly inform individuals about how and why you intend to process their personal data.
These three elements overlap and support each other. Together, they provide an overall framework that ensures fair processing of personal data and fair outcomes for individuals.
This means that fairness is not just about making sure people know and understand how and why you use their data. It is also not just about ensuring people have control over the processing, where appropriate.
It is essentially about taking into account the overall impact your processing has on people, and how you demonstrate that this impact is proportionate and justified.
How does data protection fairness apply in an AI context?
In AI in particular, data protection’s specific requirements and considerations work together to ensure your AI systems process personal data fairly and lead to fair outcomes. These are listed below:
1. Data protection by design and by default
Data protection by design and by default requires you to consider data protection issues at the design stage of your processing activities, and throughout the AI lifecycle. This is set out in Article 25 of the UK GDPR.
It is about putting in place technical and organisational measures to:
- implement the data protection principles effectively; and
- integrate necessary safeguards into your processing.
AI increases the importance of embedding a data protection by design approach into your organisation’s culture and processes.
AI also brings additional complexities compared with conventional processing, and for reasons of accountability and fairness you should demonstrate how you have addressed these. The AI lifecycle may surface the ‘problem of many hands’: a large number of people are involved, but the role of each individual in isolation may appear small, which makes it complicated to attribute responsibility for potential harms. The more significant the impact your AI system has on the individuals whose personal data it processes, the more attention you should give to these complexities when considering what safeguards are appropriate. You need proportionate and adequate safeguards for the processing to be fair.
You should take into account a variety of possible effects when considering impact, from material to non-material harm. This includes emotional consequences such as distress, and any significant economic or social disadvantage.
2. Data protection impact assessments
DPIAs require you to consider the risks to the rights and freedoms of individuals, including the potential for any significant social or economic disadvantage. The focus is on the potential for harm to individuals or society at large, whether physical, material or non-material. This means that DPIAs do not just account for data rights, but for rights and freedoms more broadly; they are an integral part of data protection by design and by default. They provide you with an opportunity to assess whether your processing will lead to fair outcomes.
DPIAs also require you to assess and demonstrate how and why:
- your processing is necessary to achieve your purpose; and
- the way you go about your processing is a proportionate way of achieving it.
Documenting the choices you make in the design of your AI system, alongside your assessment of the risks it poses, also helps you demonstrate that your processing is fair. DPIAs help you address AI risks if you see them as not just a box-ticking compliance exercise, but as an ongoing process, subject to regular review.
3. Lawfulness
As noted above, many of the lawful bases for processing require you to assess:
- why the processing is necessary; and
- whether the way you go about it is proportionate to the outcome you seek to achieve.
Merely working through the requirements of a particular lawful basis does not automatically make your processing fair. However, it can go some way to demonstrating how your AI system:
- processes personal data fairly; and
- ensures its impacts on individuals are also fair.
Identifying and meeting the requirements of a lawful basis is likely to reduce the potential of unfair outcomes arising from your processing. This is because the lawful bases themselves provide a level of protection for individuals.
For example, a number of them require you to demonstrate the necessity and proportionality of your processing. This means that it meets a specific and limited purpose, and there is no reasonable and less intrusive way of achieving the same purpose. This “necessity test” acts to limit the processing, and in turn the potential of unjustified adverse effects on individuals.
This is most obviously the case with the legitimate interests lawful basis. This includes a “three-part test” that requires you to show:
- what the legitimate interest is;
- why the processing is necessary to achieve it; and
- how you balance the rights, freedoms and interests of individuals with your own (or those of third parties).
If you decide to use legitimate interests as your lawful basis, you should record how you assessed this test in your legitimate interests assessment (LIA). The ICO has produced a sample LIA template that you can adjust to your context and use to decide whether or not the legitimate interests basis is likely to apply to your processing.
4. Transparency
Transparency is about being clear, open and honest with individuals from the start about who you are, as well as why and how you want to process their data. This enables them to make an informed choice about whether they wish to enter into a relationship with you, or to try to negotiate its terms.
Transparency and fairness are closely linked. Recitals 39 and 60 of the UK GDPR highlight the importance of providing information to individuals to ensure “fair and transparent processing”. You are well placed to demonstrate transparency if you are open with people about how your AI system makes decisions about them, and how you use their personal data to train and test the system.
There are different ways you can explain AI decisions. These generally include two main categories:
- process-based explanations. These demonstrate how you have followed good governance processes and best practices throughout your system’s design and use; and
- outcome-based explanations. These clarify the results of a particular decision. For example, explaining the reasoning in plain, clearly understandable and everyday language.
Outcome-based explanations also cover things such as whether:
- there was meaningful human involvement in the decision; and
- the actual outcome of your AI system’s decision-making meets the criteria you established in your design process.
One type of explanation is the “fairness explanation”. This is about helping people understand the steps you take to ensure your AI decisions are generally unbiased and equitable. Fairness explanations in AI can take four main approaches:
- dataset fairness;
- design fairness;
- outcome fairness; and
- implementation fairness.
Our guidance on explaining decisions made with AI provides more detail on the different ways of explaining AI decisions.
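As an illustration of the kind of raw material an outcome-based explanation might draw on, here is a minimal sketch using a hypothetical linear scoring model. The feature names, weights and decision threshold are invented for this example and do not come from the guidance.

```python
# A sketch of an outcome-based explanation for a hypothetical linear
# scoring model. The model, feature names and threshold are illustrative.
weights = {"income": 0.4, "years_at_address": 0.25, "existing_debt": -0.6}
applicant = {"income": 0.7, "years_at_address": 0.2, "existing_debt": 0.9}
threshold = 0.1

# Each feature's contribution is its weight times the applicant's value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score >= threshold else "declined"

# Rank features by how strongly they influenced this decision, as raw
# material for a plain-language explanation of the outcome.
ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

print(f"Decision: {decision} (score {score:.2f})")
for feature, value in ranked:
    direction = "increased" if value > 0 else "decreased"
    print(f"- {feature.replace('_', ' ')} {direction} your score by {abs(value):.2f}")
```

For more complex, non-linear models you would need a dedicated explanation technique, but the principle of translating model behaviour into plain, everyday language is the same.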
5. Purpose limitation
Purpose limitation requires you to be clear and open about why you are collecting personal data, and to ensure that what you intend to do with it is in line with individuals’ reasonable expectations.
Being clear about why you want to process personal data helps you ensure your processing is fair, as well as enabling you to demonstrate your accountability for it. For example, specifying your purpose helps avoid function creep.
If you plan to re-use personal data for a new purpose, you usually need to ensure it is compatible with the original. This depends on a number of factors, including your relationship with the individuals. You can also use the data for a new purpose if you get specific consent for it, or if you have a clear obligation set out in law.
Purpose limitation therefore clearly links to fairness, lawfulness and transparency.
AI systems may process personal data for different purposes at different points in the AI lifecycle. However, defining these purposes in advance and determining an appropriate lawful basis for each is a vital part of ensuring you undertake the processing fairly, and that it results in fair outcomes for individuals.
It may make sense to separate particular stages in the AI lifecycle and define the purposes for processing personal data in each one. For example, the data exploration and development phases of an AI system, including stages such as design and model selection, are distinct from the deployment and ongoing monitoring phases.
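For example, a simple processing register along the following lines could record a distinct purpose and lawful basis for each lifecycle stage. The stage names, purposes and bases shown are illustrative assumptions, not prescribed categories.

```python
# A sketch of documenting a distinct purpose and lawful basis for each
# stage of the AI lifecycle. All entries are illustrative assumptions.
PROCESSING_REGISTER = {
    "data_exploration": {
        "purpose": "Assess data quality and relevance for a credit model",
        "lawful_basis": "legitimate interests",
    },
    "model_development": {
        "purpose": "Train and select a credit-risk model",
        "lawful_basis": "legitimate interests",
    },
    "deployment": {
        "purpose": "Make credit decisions about applicants",
        "lawful_basis": "contract",
    },
    "monitoring": {
        "purpose": "Detect drift and unfair outcomes in live decisions",
        "lawful_basis": "legitimate interests",
    },
}

for stage, entry in PROCESSING_REGISTER.items():
    print(f"{stage}: {entry['purpose']} (basis: {entry['lawful_basis']})")
```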
Further reading in this guidance
How do we identify our purposes and lawful basis when using AI?
6. Data minimisation and storage limitation
The data minimisation principle is about only collecting the personal data you need to achieve your purpose. The storage limitation principle is about keeping that data only for as long as you need it.
Both principles relate to fairness as they involve necessity and proportionality considerations. For example:
- you must not process more data than you need just because that data may become useful at some point in the future. This processing is not necessary for your purpose. Beyond the impact that processing more data than you need has on individuals, it raises additional questions about whether the processing is fair and lawful in the first place; and
- if your AI system keeps data for longer than you need to achieve your purpose, this processing is also unnecessary and therefore unfair. You must take a proportionate approach to retention periods, balancing your needs with the impact of the retention on individuals’ privacy.
Clearly establishing what data you need, and how long you need it for, allows you to demonstrate your compliance with these two principles. In turn, this can also demonstrate how your processing is fair overall.
As this guidance notes, data minimisation can appear challenging to achieve in AI. However, the data minimisation principle does not mean your AI system cannot process personal data at all. Instead, it requires you to be clear about what personal data is adequate, relevant and limited, based on your AI system’s use case.
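By way of illustration, the sketch below shows one way to enforce data minimisation and a retention period programmatically at the point of ingestion. The field names and the 180-day retention period are assumptions for the example, not figures drawn from the guidance.

```python
# A sketch of enforcing data minimisation and a retention period at
# ingestion time. Field names and the 180-day period are assumptions.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"applicant_id", "income", "employment_status"}  # hypothetical
RETENTION = timedelta(days=180)

def minimise(record: dict) -> dict:
    """Keep only the fields the model's use case actually needs."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def expired(stored_at: datetime, now: datetime | None = None) -> bool:
    """True if a record has outlived its retention period and should be deleted."""
    now = now or datetime.now(timezone.utc)
    return now - stored_at > RETENTION

record = {"applicant_id": "a-123", "income": 42000,
          "employment_status": "employed", "shoe_size": 9}
print(minimise(record))  # shoe_size is dropped: not needed for the purpose
print(expired(datetime.now(timezone.utc) - timedelta(days=200)))  # True
```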
7. Security
The security principle requires you to protect the data you hold from unauthorised or unlawful processing, accidental loss, destruction or damage.
Processing personal data securely is also part of ensuring your processing is fair. Recital 71 of the UK GDPR sets out that the technical and organisational security measures you put in place have to be appropriate to the risks your processing poses to the rights and freedoms of individuals.
Using AI to process any personal data has important implications for your security risk profile, and you need to assess and manage these carefully. For example, the UK GDPR’s security requirements apply both to the data you process and to the systems and services you use for that processing.
8. Accountability
Failure to allocate accountability appropriately between processors and controllers can lead to non-compliance with the fairness principle. Accountability gaps can lead to unjustifiably adverse effects for individuals, or undermine their ability to exercise their rights, for example the right to contest a solely automated decision with significant effects; this would go against individuals’ reasonable expectations. See the section ‘What are the accountability and governance implications of AI?’ for more detailed guidance on accountability.
9. Accuracy
The personal data you process to train your models, and the personal data they output, should be up to date and remain so. Inaccurate input data can lead to inaccurate inferences about individuals that go against their reasonable expectations or lead to adverse outcomes. For a more in-depth analysis of how the accuracy principle applies to AI, please read the chapter on accuracy and statistical accuracy.
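As a simple illustration, you could flag records whose personal data has not been verified recently before using them for training. The “last_verified” field and the 365-day freshness window below are illustrative assumptions, not requirements.

```python
# A sketch of flagging stale records before training, so out-of-date
# personal data does not drive inaccurate inferences. The "last_verified"
# field and 365-day freshness window are illustrative assumptions.
from datetime import date, timedelta

FRESHNESS_WINDOW = timedelta(days=365)

records = [
    {"id": "u1", "last_verified": date(2023, 1, 10)},
    {"id": "u2", "last_verified": date(2021, 6, 2)},
]

today = date(2023, 3, 15)
stale = [r["id"] for r in records if today - r["last_verified"] > FRESHNESS_WINDOW]
print(f"Records needing re-verification before training: {stale}")
```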
10. Profiling and automated decision-making (ADM)
Article 22 of the UK GDPR restricts the processing of people’s personal data to make solely automated decisions that have legal or similarly significant effects on them, with certain exceptions.
Used correctly, profiling and automated decision-making can be useful for many organisations. For example, they can help you make decisions fairly and consistently.
The UK GDPR does not prevent you from carrying out profiling or using an AI system to make automated decisions about people, unless those decisions are solely automated and have legal or similarly significant effects on them. In those cases, it places certain safeguards on that processing.
Recital 71 provides more clarity about Article 22. It directly references fairness, saying that you should take into account the “specific circumstances and context” of your processing and implement technical and organisational measures to ensure it is “fair and transparent”. These measures should:
- ensure personal data is processed in a manner that takes account of the risks to the rights and interests of individuals; and
- prevent discriminatory effects on the basis of special category data.
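As an illustration of one such technical measure, the sketch below routes any decision with a legal or similarly significant effect to a human review queue before it takes effect, so that the decision is not solely automated. The data structures and field names are assumptions for the example.

```python
# A sketch of a routing safeguard: decisions with legal or similarly
# significant effects are not issued solely automatically, but are queued
# for meaningful human review. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    significant_effect: bool  # e.g. refusing credit or employment

def route(decision: Decision, human_review_queue: list) -> str:
    if decision.significant_effect:
        # Article 22-style safeguard: a reviewer with authority to change
        # the outcome considers the case before it takes effect.
        human_review_queue.append(decision)
        return "pending_human_review"
    return decision.outcome

queue: list[Decision] = []
print(route(Decision("s-42", "decline", significant_effect=True), queue))
print(len(queue))  # 1: the declined application awaits human review
```

For a safeguard like this to be meaningful, the reviewer must have the authority, competence and information needed to change the decision, not merely rubber-stamp it.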
From a fairness perspective, profiling and automated decision-making in AI systems may give rise to discrimination. Annex A in this guidance addresses discrimination-specific risks of AI, or what has been referred to as “algorithmic fairness”.
As part of assessing whether your processing is fair, you also need to think about whether any profiling is fundamentally linked with the reason for using your service, as well as what people would reasonably expect you to do with their data.
11. Individual rights
Individuals have a number of rights relating to their personal data. These enable them to exercise control over that data by scrutinising your processing and, in some cases, challenging it (for example, by exercising rights such as erasure, restriction and objection).
In AI, these rights apply wherever you use personal data at any point in the AI lifecycle and are important for fairness.
Ensuring you can support these rights requires you to think about the impact your processing has on individuals. For example, if you use solely automated systems to make a decision in an employment context that has a legal or similarly significant effect on individuals, you may need to put in place a process that would enable individuals to challenge a decision, depending on your context.
This is the case even when you are using a partly automated decision-making process that involves AI. Thinking about how you will tell individuals that your system will use their data for training, or will contribute to decisions through its recommendations, will also make you think more carefully about its impact on them and its consequences for them. In turn, this can help you ensure that what you tell them is clear and understandable. While this forms part of your obligations under the right to be informed, it can also shape individuals’ expectations about what your system does with their data.