Legal framework

At a glance

The General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA 2018) regulate the collection and use of personal data. Where AI uses personal data, whether to train, test or deploy an AI system, it falls within the scope of this legislation. Administrative law and the Equality Act 2010 are also relevant to providing explanations when using AI.

In more detail

What does data protection law have to do with AI?

In the UK, data protection law is made up of the GDPR and the DPA 2018. Together, they regulate the collection and use of personal data – information about identified or identifiable individuals. Please note that from January 2021 references to the GDPR should be read as references to the equivalent articles in the UK GDPR.

Where AI doesn’t involve the use of personal data, it falls outside the remit of data protection law; the use of AI for weather forecasting or astronomy, for example. But very often, AI does use or create personal data. In some cases, vast amounts of personal data are used to train and test AI models. On deployment, more personal data is collected and fed through the model to make decisions about individuals. Those decisions about individuals – even if they are only predictions or inferences – are themselves personal data.

  • Personal data used to train an AI model
  • Personal data used to test an AI model
  • On deployment, personal data used or created to make decisions about individuals

In any of these cases, AI is within the scope of data protection law.
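
To make the scope test concrete, the sketch below (a minimal illustration in Python; the Stage type and the example pipeline are hypothetical) flags an AI workflow as in scope if personal data is used or created at any stage of its lifecycle.

```python
# Illustrative sketch only: data protection law applies where personal data
# appears at any stage of the AI lifecycle. The Stage type and the example
# pipeline are hypothetical.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str                    # e.g. "training", "testing", "deployment"
    uses_personal_data: bool     # data about identified/identifiable individuals
    creates_personal_data: bool  # e.g. predictions or inferences about individuals

def in_scope_of_data_protection_law(stages):
    """True if any lifecycle stage uses or creates personal data."""
    return any(s.uses_personal_data or s.creates_personal_data for s in stages)

pipeline = [
    Stage("training", uses_personal_data=True, creates_personal_data=False),
    Stage("testing", uses_personal_data=True, creates_personal_data=False),
    Stage("deployment", uses_personal_data=True, creates_personal_data=True),
]
print(in_scope_of_data_protection_law(pipeline))  # True
```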

Does data protection law actually mention AI?

Data protection law is technology neutral. It does not directly reference AI or any associated technologies such as machine learning.

However, the GDPR and the DPA 2018 do have a significant focus on large-scale automated processing of personal data, and several provisions specifically refer to the use of profiling and automated decision-making. This means the legislation applies to the use of AI to provide a prediction or recommendation about someone.

The right to be informed

Articles 13 and 14 of the GDPR give individuals the right to be informed of:

  • the existence of solely automated decision-making producing legal or similarly significant effects;
  • meaningful information about the logic involved; and
  • the significance and envisaged consequences for the individual.

The right of access

Article 15 of the GDPR gives individuals the right of access to:

  • information on the existence of solely automated decision-making producing legal or similarly significant effects;
  • meaningful information about the logic involved; and
  • the significance and envisaged consequences for the individual.

Recital 71 provides interpretative guidance on rights related to automated decision-making. It mainly relates to Article 22 rights, but also makes clear that individuals have the right to obtain an explanation of a solely automated decision after it has been made.
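
As an illustration only, the record below sketches one way to capture the disclosures that Articles 13 to 15 require for solely automated decision-making; the field names and example content are our own and are not taken from the GDPR text.

```python
# Hypothetical sketch of the Articles 13-15 disclosures for solely automated
# decision-making. Field names and example wording are illustrative only.
from dataclasses import dataclass

@dataclass
class AutomatedDecisionDisclosure:
    solely_automated: bool              # existence of solely automated decision-making
    legal_or_similar_effect: bool       # produces legal or similarly significant effects
    logic_summary: str                  # meaningful information about the logic involved
    significance_and_consequences: str  # significance and envisaged consequences

loan_notice = AutomatedDecisionDisclosure(
    solely_automated=True,
    legal_or_similar_effect=True,
    logic_summary=(
        "Applications are scored on income, existing debt and repayment "
        "history; scores below a threshold are declined automatically."
    ),
    significance_and_consequences=(
        "A declined application means no credit is offered and may affect "
        "future applications with this lender."
    ),
)
```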

The right to object

Article 21 of the GDPR gives individuals the right to object to processing of their personal data, specifically including profiling, in certain circumstances.

There is an absolute right to object to profiling for direct marketing purposes.

Rights related to automated decision-making including profiling

Article 22 of the GDPR gives individuals the right not to be subject to a solely automated decision producing legal or similarly significant effects. There are some exceptions to this, and in those cases it obliges organisations to adopt suitable measures to safeguard individuals, including their rights to:

  • obtain human intervention;
  • express their view; and
  • contest the decision.

Recital 71 also provides interpretative guidance for Article 22.
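
The sketch below illustrates, with hypothetical names throughout, how the three Article 22 safeguards might be wired into a decision service: requesting human intervention, expressing a view, and contesting the decision.

```python
# Minimal sketch, with hypothetical names, of the Article 22 safeguards:
# human intervention, expressing a view, and contesting the decision.
from dataclasses import dataclass, field

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str
    subject_comments: list = field(default_factory=list)
    status: str = "issued"

def request_human_intervention(decision):
    # Route the case to a reviewer with authority to change the outcome.
    decision.status = "under human review"

def express_view(decision, view):
    # Record the individual's point of view alongside the decision.
    decision.subject_comments.append(view)

def contest(decision, grounds):
    # A formal challenge triggers human reconsideration of the decision.
    decision.subject_comments.append(f"Contested: {grounds}")
    request_human_intervention(decision)

d = AutomatedDecision(subject_id="applicant-42", outcome="declined")
express_view(d, "My income figure is out of date.")
contest(d, "The decision relied on inaccurate data.")
print(d.status)  # under human review
```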

Data protection impact assessments

Article 35 of the GDPR requires organisations to carry out Data Protection Impact Assessments (DPIAs) if their processing of personal data, particularly when using new technologies, is likely to result in a high risk to individuals.

A DPIA is always required for any systematic and extensive profiling, or other automated evaluation of personal data, which is used for decisions that produce legal or similarly significant effects on people.

DPIAs are therefore likely to be an obligation if you are looking to use AI systems to process personal data, and you should carry them out prior to the processing in order to identify and assess the levels of risk involved. DPIAs should be ‘living documents’ that you review regularly, and when there is any change to the nature, scope, context or purposes of the processing.

The ICO has published additional guidance on DPIAs, including a list of processing operations which require a DPIA. The list mentions AI, machine learning, large-scale profiling and automated decision-making resulting in denial of a service, product or benefit.

If a DPIA indicates there are residual high risks to the rights and freedoms of individuals that cannot be reduced, you must consult with the ICO prior to the processing.
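
The screening sketch below (an illustration, not legal advice) shows one way to flag when a DPIA is likely to be required and when prior consultation with the ICO becomes mandatory; the criteria paraphrase the Article 35 triggers described above and simplify them considerably.

```python
# Illustrative screening sketch, not legal advice. The criteria paraphrase
# the Article 35 triggers described above and simplify them considerably.
from dataclasses import dataclass

@dataclass
class ProcessingProfile:
    uses_new_technology: bool             # e.g. AI / machine learning
    systematic_extensive_profiling: bool  # automated evaluation of individuals
    legal_or_similar_effect: bool         # decisions significantly affecting people
    residual_high_risk_unmitigable: bool  # high risk remains after mitigation

def dpia_likely_required(p):
    # Always required for systematic, extensive profiling feeding decisions
    # with legal or similarly significant effects; likely for new-tech AI.
    return ((p.systematic_extensive_profiling and p.legal_or_similar_effect)
            or p.uses_new_technology)

def must_consult_ico(p):
    # Prior consultation with the ICO is needed when a DPIA leaves residual
    # high risks that cannot be reduced.
    return dpia_likely_required(p) and p.residual_high_risk_unmitigable
```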

Does data protection law require that we explain AI-assisted decisions to individuals?

As above, the GDPR has specific requirements around the provision of information about, and an explanation of, an AI-assisted decision where:

  • it is made by a process without any human involvement; and
  • it produces legal or similarly significant effects on an individual (something affecting an individual’s legal status or rights, or that has an equivalent impact on an individual’s circumstances, behaviour or opportunities, eg a decision about welfare, or a loan).

In these cases, the GDPR requires that you:

  • are proactive in “…[giving individuals] meaningful information about the logic involved, as well as the significance and envisaged consequences…” (Articles 13 and 14);
  • “… [give individuals] at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” (Article 22); and
  • “… [give individuals] the right to obtain… meaningful information about the logic involved, as well as the significance and envisaged consequences…” (Article 15) “…[including] an explanation of the decision reached after such assessment…” (Recital 71).

The GDPR’s recitals are not legally binding, but they do clarify the meaning and intention of its articles. So, the reference to an explanation of an automated decision after it has been made in Recital 71 makes clear that such a right is implicit in Articles 15 and 22. You need to be able to give an individual an explanation of a fully automated decision to enable their rights to obtain meaningful information, express their point of view and contest the decision.
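
As a toy illustration of producing such an explanation, the sketch below assumes a simple, hypothetical linear scoring model and reports which factors most pushed the score towards the outcome; real systems and the wording of explanations will differ.

```python
# Toy illustration of explaining a fully automated decision. The linear
# scoring model, weights and feature names are hypothetical; real systems
# and the wording of explanations will differ.
WEIGHTS = {"income": 0.5, "existing_debt": -0.8, "missed_payments": -1.2}
THRESHOLD = 0.0

def decide_and_explain(features):
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    score = sum(contributions.values())
    outcome = "approved" if score >= THRESHOLD else "declined"
    # Rank the factors by how strongly they pushed the score to this outcome.
    sign = 1 if outcome == "approved" else -1
    reasons = sorted(contributions, key=lambda k: sign * contributions[k],
                     reverse=True)
    return outcome, [f"{k} contributed {contributions[k]:+.2f}" for k in reasons]

outcome, reasons = decide_and_explain(
    {"income": 1.0, "existing_debt": 2.0, "missed_payments": 1.0}
)
print(outcome)  # declined
print(reasons)  # existing_debt listed first: it pushed the score down the most
```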

But even where an AI-assisted decision is not part of a solely automated process (because there is meaningful human involvement), if personal data is used, the processing is still subject to all the GDPR’s principles. The GDPR principles of fairness, transparency and accountability are of particular relevance.

Fairness

Part of assessing whether your use of personal data is fair is considering how it affects the interests of individuals. If an AI-assisted decision is made about someone without some form of explanation of (or information about) the decision, this may limit their autonomy and scope for self-determination. This is unlikely to be fair.

Transparency

Transparency is about being clear, open and honest with people about how and why you use their personal data. In addition to the information requirements on automated processing laid out in Articles 13 and 14 of the GDPR, Recital 60 states that you should provide any further information necessary to ensure fair and transparent processing, taking into account the specific circumstances and context in which you process the personal data. You are unlikely to be considered transparent if you are not open with people about how and why an AI-assisted decision about them was made, or about where their personal data was used to train and test an AI system. Providing an explanation, in some form, will help you be transparent. Information about the purpose for which you are processing someone’s data under Articles 13-15 of the GDPR could also include an explanation in some cases.

Accountability

To be accountable, you must be able to demonstrate your compliance with the other principles set out in Article 5 of the GDPR, including those of data minimisation and accuracy. How can you show that you treated an individual fairly and in a transparent manner when making an AI-assisted decision about them? One way is to provide them with an explanation of the decision and document its provision.
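
One lightweight way to document the provision of an explanation is an append-only audit record, as in this sketch; the record fields, function name and file name are all hypothetical.

```python
# Illustrative sketch: record that an explanation was given, so compliance
# can be demonstrated later. Record fields and file name are hypothetical.
import json
from datetime import datetime, timezone

def log_explanation_provided(subject_id, decision_id, explanation,
                             path="explanation_audit.log"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "decision_id": decision_id,
        "explanation": explanation,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_explanation_provided("applicant-42", "decision-001",
                         "Declined: existing debt outweighed income.")
```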

So, whichever type of AI-assisted decision you make involving personal data, data protection law expects you to explain it to the individuals affected.

Parts 3 and 4 of the DPA 2018

In addition, there are separate provisions in Part 3 of the DPA 2018 for solely automated decisions that have an adverse legal effect on, or significantly affect, the data subject and which are carried out for law enforcement purposes by competent authorities. Individuals can obtain human intervention, express their point of view, obtain an explanation of the decision, and challenge it. Currently, instances of solely automated decision-making in law enforcement are likely to be rare.

There are also separate provisions in Part 4 of the DPA 2018 for solely automated decision-making carried out by the intelligence services that significantly affect a data subject. Individuals have a right to obtain human intervention in these cases. There is also a general right for individuals to have information about decision-making where the controller is processing their data and the results produced by the processing are applied to them. In these cases, they can request “knowledge of the reasoning underlying the processing.” However, these rights may be limited by the exemption for safeguarding national security in Part 4.

Are there other relevant laws?

The GDPR is the main legislation in the United Kingdom that explicitly requires an explanation to be provided to an individual. Other laws may also be relevant and make it good practice to explain AI-assisted decisions; we have listed examples below.

Equality Act 2010

The Equality Act 2010 applies to a range of organisations, including government departments, service providers, employers, education providers, transport providers, associations and membership bodies, as well as providers of public functions.

The Equality Act 2010 prohibits behaviour that discriminates against, harasses or victimises another person on the basis of any of these “protected characteristics”:

  • Age
  • Disability
  • Gender reassignment
  • Marriage and civil partnership
  • Pregnancy and maternity
  • Race
  • Religion and belief
  • Sex
  • Sexual orientation

If you are using an AI system in your decision-making process, you need to ensure, and be able to show, that this does not result in discrimination that:

  • causes the decision recipient to be treated worse than someone else because of one of these protected characteristics; or
  • results in a worse impact on someone with a protected characteristic than someone without one.
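
As a rough screening aid for the second point above, the sketch below compares favourable-outcome rates between groups. The 0.8 “four-fifths” threshold is a common statistical heuristic, not a test defined by the Equality Act 2010; a low ratio is a prompt to investigate further, and passing it does not establish compliance.

```python
# Rough screening aid only. The 0.8 "four-fifths" ratio is a statistical
# heuristic, not a test defined by the Equality Act 2010; a low ratio is a
# prompt to investigate, not proof of discrimination.
def favourable_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_outcomes, comparator_outcomes):
    return favourable_rate(group_outcomes) / favourable_rate(comparator_outcomes)

# True = favourable decision (e.g. loan approved); the data is hypothetical.
without_characteristic = [True, True, False, True]
with_characteristic = [True, False, False, False]

ratio = impact_ratio(with_characteristic, without_characteristic)
if ratio < 0.8:
    print(f"Possible adverse impact (ratio {ratio:.2f}) - investigate further")
```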

The duty to make reasonable adjustments means that employers and service providers must, so far as is reasonably possible, avoid the disadvantage a disabled person experiences because of their impairments.

You should therefore be able to explain to the decision recipient that the decision does not discriminate on the basis of any of the protected characteristics listed above. This explanation must be in a format that the decision recipient can meaningfully engage with.

Judicial review under administrative law

Anyone can apply to challenge the lawfulness of government decisions. This means that individuals are able to challenge decisions made by a public sector agency, or by private bodies contracted by government to carry out public functions, where AI systems have been deployed to support decision-making. It should be possible to judicially review such decisions on the basis that the decision was illegal or irrational, or the way in which it was made was ‘improper’.

Additional legislation

Providing an explanation may also indicate compliance with, or ‘best practice’ under, other legislation. We do not go into detail about these laws here, as they may be specific to your sector. We recommend you contact your own regulatory body if you are concerned this may apply to you.

Such legislation includes (please note that this list is not intended to be exhaustive, and further legislation may apply in some cases):

  • e-Privacy legislation
  • Law Enforcement Directive
  • Consumer Rights legislation
  • Financial Services legislation
  • Competition law
  • Human Rights Act
  • Legislation about health and social care
  • Regulation around advertising and marketing
  • Legislation about school admissions procedures