
In detail

Why is this important?

Tools for monitoring workers have become increasingly sophisticated, with automated processes (sometimes known as people analytics) often used for:

  • security purposes;
  • managing workers’ performance; and
  • monitoring sickness and attendance (including if a worker is away from their workstation).

There are business benefits to people analytics. They can contribute to improving organisational performance and can demonstrate compliance with HR policies. Such tools have the capacity to process large amounts of workers’ information by monitoring in real time. This can be used to make predictions, inferences and decisions about workers on both an individual and a collective level. The UK GDPR has provisions on solely automated decision-making with legal or similarly significant effects, including profiling. We cover them here in the context of monitoring workers.

What do we mean by solely automated decision-making and profiling?

Solely automated decision-making means making a decision by automated means without any meaningful human involvement. It may also involve profiling. In a work context, this could be where employers use workers’ information from a number of sources to make inferences about their future behaviour or to make decisions about them.

Solely automated decision-making and profiling could pose risks to the rights and freedoms of workers.


What do we need to consider if we are planning to make solely automated decisions with legal or similar effect on workers?

Article 22 of the UK GDPR restricts you from carrying out solely automated decision-making that has legal or similarly significant effects on people.

A legal effect is something that affects someone’s legal rights (eg a right to work). Similarly significant effects are more difficult to define, but are likely to include decisions that:

  • significantly affect someone’s financial circumstances (eg increasing or decreasing a worker’s pay based on their performance at work); or
  • affect a worker’s employment opportunities (eg dismissing someone).

You can only carry out this type of decision-making where the decision is (see the sketch after this list):

  • necessary for the entry into or performance of a contract with the person;
  • authorised by law that applies to you (eg if you have a statutory or common law obligation to do something and automated decision-making is the most appropriate way to achieve your purpose); or
  • based on a person’s explicit consent.
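To illustrate how these conditions might be checked in practice, the following is a minimal Python sketch, not a compliance tool. It assumes a hypothetical internal record of which Article 22 condition is relied on, and all names (LawfulBasis, AutomatedDecision, may_proceed) are invented for illustration.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class LawfulBasis(Enum):
    # The three Article 22 conditions described in the list above.
    CONTRACT_NECESSITY = auto()
    AUTHORISED_BY_LAW = auto()
    EXPLICIT_CONSENT = auto()


@dataclass
class AutomatedDecision:
    worker_id: str
    description: str
    has_legal_or_similar_effect: bool
    lawful_basis: Optional[LawfulBasis] = None


def may_proceed(decision: AutomatedDecision) -> bool:
    # A solely automated decision with a legal or similarly significant effect
    # may only proceed once one of the three conditions has been recorded.
    if not decision.has_legal_or_similar_effect:
        return True
    return decision.lawful_basis is not None


pay_change = AutomatedDecision("w-001", "pay adjustment based on productivity monitoring", True)
print(may_proceed(pay_change))   # False: no Article 22 condition recorded yet
pay_change.lawful_basis = LawfulBasis.EXPLICIT_CONSENT
print(may_proceed(pay_change))   # True: explicit consent has been recorded

Whether a condition genuinely applies is a legal judgement; the sketch only shows that the decision should not proceed until that judgement has been made and recorded.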

You must also ensure that you do not disadvantage workers who ask for human intervention in decision-making compared to those who are subject to automated decision-making.

Example - where Article 22 applies

An organisation pays workers based entirely on automated monitoring of their productivity. This decision is solely automated and has a significant effect, since it affects how much a worker is paid. Therefore, the additional rules under Article 22 apply.


Example – where Article 22 doesn’t apply

A courier service uses an automated vehicle tracking device to determine if its workers are making deliveries on time and to the correct address.

A worker is issued a warning about failing to make deliveries on time. The warning was based on complaints received from customers about not receiving their orders. These complaints were checked by the courier service’s HR manager who reviewed the vehicle’s tracking device data. This showed that the vehicle only made a small proportion of journeys it was expected to make. The manager also discussed the issue with the worker to ask about the delays and complaints before deciding to issue the warning.

Therefore, the additional rules under Article 22 do not apply, because the courier service’s HR manager took the decision to issue the warning after reviewing the information. This is the case even though the warning was issued on the basis of information collected by the automated tracking device.
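The difference between the two examples can be sketched in code. The following hypothetical Python fragment uses invented names, thresholds and data (DeliveryRecord, flag_for_review, issue_warning); the point is that the tracking data only flags a case, while the warning itself depends on the manager's review and the discussion with the worker.

from dataclasses import dataclass


@dataclass
class DeliveryRecord:
    expected_journeys: int
    completed_journeys: int
    customer_complaints: int


def flag_for_review(record: DeliveryRecord) -> bool:
    # Decision-support step: the tracking data only flags a possible issue.
    # No warning is issued here.
    completion_rate = record.completed_journeys / record.expected_journeys
    return completion_rate < 0.5 or record.customer_complaints > 3


def issue_warning(record: DeliveryRecord, manager_reviewed: bool,
                  worker_consulted: bool, manager_agrees: bool) -> bool:
    # The warning depends on meaningful human involvement: the manager reviews
    # the data, discusses it with the worker and then takes the decision.
    return (flag_for_review(record)
            and manager_reviewed
            and worker_consulted
            and manager_agrees)


record = DeliveryRecord(expected_journeys=40, completed_journeys=9, customer_complaints=6)
print(flag_for_review(record))   # True: a case is raised for the HR manager
print(issue_warning(record, manager_reviewed=True, worker_consulted=True,
                    manager_agrees=True))   # True: the human took the final decision

If a warning were issued directly whenever flag_for_review returned True, with no human inputs, the decision would become solely automated and Article 22 would apply.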



What should we tell workers about solely automated decision-making?

The right to be informed means you must tell workers whose information you are processing that you are doing so for solely automated decision-making. You must give them “meaningful information about the logic involved, as well as the significance and the envisaged consequences” of the processing for them. You must also tell them about this if they submit a subject access request (SAR).

You must:

  • give workers information about the processing;
  • introduce simple ways for them to request human intervention or challenge a decision where the processing falls under Article 22 (see the sketch after this list); and
  • carry out regular checks to make sure your systems are working as intended.
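As a purely illustrative example of the second point, a simple way to let workers request human intervention could be a logged queue that routes challenges to a named human reviewer. The Python sketch below is hypothetical; InterventionRequest, InterventionQueue and the field names are invented and do not describe any real system.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class InterventionRequest:
    worker_id: str
    decision_id: str
    reason: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: Optional[str] = None   # filled in once a human has looked at the case
    outcome: Optional[str] = None


class InterventionQueue:
    # Routes challenges to a human reviewer instead of back to the automated system.

    def __init__(self) -> None:
        self._requests: list[InterventionRequest] = []

    def submit(self, request: InterventionRequest) -> None:
        self._requests.append(request)

    def pending_review(self) -> list[InterventionRequest]:
        return [r for r in self._requests if r.reviewed_by is None]


queue = InterventionQueue()
queue.submit(InterventionRequest("w-002", "d-117",
                                 "I dispute the recorded productivity figures"))
print(len(queue.pending_review()))   # 1: the challenge is waiting for a human reviewer

Keeping such a log also supports the regular checks in the third point, because the volume and outcomes of challenges help show whether the system is working as intended.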

What is the role of human oversight?

When you use automated decision-making to make decisions with legal or similarly significant effects on workers, there is a risk that you might make them without appropriate human oversight. For example, you might reduce a worker’s pay because an automated system identifies poor performance, without anyone reviewing that decision. Doing this without meaningful human involvement infringes Article 22 of the UK GDPR. You should ensure that people assigned to provide human oversight remain engaged, critical and able to challenge the system’s outputs wherever appropriate.

If you plan to use automated systems as a decision-supporting tool (which will therefore be outside the scope of Article 22), you should ensure that the people making the decision are (see the sketch after this list):

  • involved in checking the system’s recommendation, rather than just routinely applying the automated recommendation to workers;
  • actively involved, not just a token gesture. They should have ‘meaningful’ influence on the decision, including the ‘authority and competence’ to go against the recommendation; and
  • ‘weighing up’ and ‘interpreting’ the recommendation, considering all available input information and taking into account additional factors.
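As a hypothetical illustration of what meaningful involvement might look like in a decision-support workflow, the Python sketch below records the human reviewer's own decision separately from the system's recommendation. Recommendation, HumanDecision and decide are invented names, not part of any real tool.

from dataclasses import dataclass


@dataclass
class Recommendation:
    worker_id: str
    suggested_action: str
    supporting_data: dict


@dataclass
class HumanDecision:
    recommendation: Recommendation
    reviewer: str
    accepted: bool            # the reviewer may reject the recommendation outright
    additional_factors: str   # anything weighed up beyond the system's inputs
    final_action: str


def decide(recommendation: Recommendation, reviewer: str, accepted: bool,
           additional_factors: str, final_action: str) -> HumanDecision:
    # The recorded outcome is whatever the reviewer decides, not the system's output.
    return HumanDecision(recommendation, reviewer, accepted, additional_factors, final_action)


rec = Recommendation("w-003", "issue attendance warning",
                     {"days_absent": 6, "period": "last quarter"})
decision = decide(rec, reviewer="hr_manager_01", accepted=False,
                  additional_factors="absences covered by agreed medical leave",
                  final_action="no action")
print(decision.final_action)   # "no action": the reviewer overrode the recommendation

The design point is that the recorded final action is whatever the reviewer decides after weighing up the recommendation and any additional factors, and rejecting the recommendation outright is a normal, supported outcome.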

Checklist

□ If we use the personal information from monitoring workers for automated decision making (including profiling), we have checked that we comply with Article 22.

□ We offer alternatives to workers who ask for human intervention in decision making.

□ We do not disadvantage workers who ask for human intervention in decision making, compared to those who are subject to automated decision making.

□ Where we use automation with human involvement, we ensure the involvement is meaningful.

□ We carry out regular checks to make sure the systems are working as intended.

You can also view and print off this checklist and all the checklists of this guidance on our checklists page.