
In detail

What is the purpose of Article 22?

Data protection law applies to all automated individual decision-making and profiling. Article 22 of the UK GDPR has additional rules to protect individuals if you are carrying out solely automated decision-making that has legal or similarly significant effects on them.

This may apply in the AI context, eg where you are using an AI system to make these kinds of decisions. You may decide to use automated decision-making to achieve scale at speed, or to reduce costs by employing fewer humans in the decision-making process.

However, you can only carry out this type of decision-making where the decision is:

  • necessary for the entry into or performance of a contract;
  • authorised by law that applies to you; or
  • based on the individual’s explicit consent.

You therefore have to identify if your processing falls under Article 22 and, where it does, make sure that you:

  • give individuals information about the processing;
  • introduce simple ways for them to request human intervention or challenge a decision; and
  • carry out regular checks to make sure your systems are working as intended (see the sketch after this list).
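
To make these safeguards concrete, here is a minimal sketch of how they might fit around a solely automated decision, written in Python. Everything in it is an illustrative assumption rather than an ICO-prescribed design: the CreditDecisionService class, its method names, the 0.5 score threshold and the wording of the notice are all hypothetical.

    # Hypothetical sketch of Article 22 safeguards around a solely automated
    # decision; names and thresholds are illustrative assumptions.
    import datetime
    import uuid
    from dataclasses import dataclass

    @dataclass
    class Decision:
        decision_id: str
        subject_id: str
        outcome: str                  # eg "approved" / "declined"
        explanation: str              # information given to the individual
        made_at: datetime.datetime
        human_review_requested: bool = False

    class CreditDecisionService:
        def __init__(self, model):
            self.model = model        # any scorer exposing predict(features) -> float
            self.audit_log: list[Decision] = []

        def decide(self, subject_id: str, features: dict) -> Decision:
            """Make a solely automated decision and record it for later checks."""
            outcome = "approved" if self.model.predict(features) >= 0.5 else "declined"
            decision = Decision(
                decision_id=str(uuid.uuid4()),
                subject_id=subject_id,
                outcome=outcome,
                # Safeguard 1: give the individual information about the processing.
                explanation=("This decision was made by solely automated means. "
                             "You can ask for human intervention or contest it."),
                made_at=datetime.datetime.now(datetime.timezone.utc),
            )
            # Safeguard 3: keep an audit trail so regular checks can confirm
            # the system is working as intended.
            self.audit_log.append(decision)
            return decision

        def request_human_review(self, decision_id: str) -> None:
            """Safeguard 2: a simple route for individuals to request human
            intervention; here it just flags the decision for a reviewer."""
            for decision in self.audit_log:
                if decision.decision_id == decision_id:
                    decision.human_review_requested = True
                    return
            raise KeyError(f"no decision with id {decision_id}")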

In summary, the Article 22 requirements ensure that processing is fair by protecting individuals from solely automated decision-making, except where domestic law authorises it and provides suitable safeguards, or where it would otherwise be fair because the individual has explicitly consented to it or it is necessary for entering into or performing a contract between the individual and the controller.

What do we need to ask ourselves to understand whether our processing falls under Article 22?

AI systems can play a wide variety of roles, from decision support to triage, classification or information retrieval. This means they can be involved at different stages of your decision-making process and to different degrees.

When an AI system is involved in a decision that impacts individuals in a legal or similarly significant way, you must ask:

  • what kind of decision is it (ie is it solely automated)?
  • when does the decision take place?
  • what is the context in which the system makes the decision?
  • what steps are involved in reaching it?

This will help you comply with data protection law, and with Article 22 in particular.

A legal effect is something that affects someone’s legal rights, for example their entitlement to child or housing benefit. A similarly significant effect is more difficult to define, but has the same sort of impact on someone’s circumstances or choices, for example an automated decision to offer someone a job, or to approve or decline their mortgage application. These effects can be positive or negative.

Understanding the full context in which a decision takes place will help you identify:

  • whether Article 22 applies; and
  • if it does, what you need to do to comply with it in your own context.

To do this, you should ask yourself:

1. What is the actual decision?

It is crucial that you identify which step of the decision-making process produces or overwhelmingly determines the direct legal or similarly significant effects. This ensures you have clarity on the level of human agency over the final outcome.

2. When does the human-determined decision take place?

Mere human involvement in the AI lifecycle does not in itself make a decision ‘AI-assisted’, nor does it qualify as meaningful human review. The sequencing of the human and AI factors is crucial.

For example, a human may provide input data to an AI system, which will then process it to make predictions or classifications.

If those outputs have legal or similarly significant effects, Article 22 will apply because the decision itself is solely automated. The human’s involvement in the decision is not meaningful, as they are merely supplying the data that the system uses to make that decision.

In most cases, for human review to be meaningful, human involvement should come after the automated decision has taken place and it must relate to the actual outcome.
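
To make the sequencing point concrete, compare the two hypothetical workflows below (the function and object names are assumptions for illustration only). In the first, the human only supplies input data before the automated step; in the second, a human reviews the actual outcome afterwards and can change it.

    # Hypothetical illustration of sequencing; all names are assumptions.

    def solely_automated_flow(model, applicant_data: dict) -> str:
        # A human typed in the applicant's details, but the outcome is
        # produced entirely by the model: supplying input data is not
        # meaningful involvement in the decision itself, so Article 22
        # would apply if the decision has legal or similarly significant
        # effects.
        return "approved" if model.predict(applicant_data) >= 0.5 else "declined"

    def human_reviewed_flow(model, applicant_data: dict, reviewer) -> str:
        # The model produces a recommendation first; a human then reviews
        # the actual outcome and has the authority to overturn it. Review
        # that comes after the automated step and relates to the outcome
        # can be meaningful.
        recommendation = "approved" if model.predict(applicant_data) >= 0.5 else "declined"
        return reviewer.review(applicant_data, recommendation)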

3. What is the context of the decision?

Identifying the structures, assumptions and conditions in which the decision takes place will help you get a clearer idea of the impact your system has. You should consider:

  • how your AI system interacts with human reviewers;
  • the decision-making options that your system’s design or introduction creates or prevents;
  • the links between the AI outputs and the actual impact on groups and individuals; and
  • what, if any, human-led processes your AI system is intended to replace.

Will impacted individuals be able to contest an automated decision?

For the processing to be fair and compliant with Article 22, individuals must be able to contest a decision in a timely manner. Meaningful transparency is fundamental to supporting this, and you must put appropriate measures in place to ensure individuals can exercise their rights. See the section on ‘What steps should we take to fulfil rights related to automated decision-making?’ for more information.
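
In practice, supporting timely contestation means recording the challenge and tracking how quickly a human responds. The sketch below is a minimal, hypothetical illustration; the in-memory store and the 30-day response target are assumptions, not figures taken from Article 22.

    # Hypothetical contestation route; the response target is an assumption.
    import datetime

    RESPONSE_TARGET = datetime.timedelta(days=30)
    challenges: dict[str, dict] = {}   # decision_id -> challenge record

    def contest_decision(decision_id: str, grounds: str) -> dict:
        """Record an individual's challenge and when a human must respond by."""
        received = datetime.datetime.now(datetime.timezone.utc)
        challenge = {
            "decision_id": decision_id,
            "grounds": grounds,
            "received_at": received,
            "respond_by": received + RESPONSE_TARGET,  # timeliness tracking
            "status": "awaiting human review",
        }
        challenges[decision_id] = challenge
        return challenge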


Further reading in other ICO guidance

Automated decision-making and profiling