What do we need to do if we use monitoring tools that use solely automated processes?
In detail
- Why is this important?
- What do we mean by solely automated decision-making and profiling?
- What do we need to consider if we are planning to make solely automated decisions with legal or similar effect on workers?
- What should we tell workers about solely automated decision-making?
- What is the role of human oversight?
- Checklist
Why is this important?
Tools for monitoring workers have become increasingly sophisticated, with automated processes (sometimes known as people analytics) often used for:
- security purposes;
- managing workers’ performance; and
- monitoring sickness and attendance (including if a worker is away from their workstation).
There are business benefits to people analytics. They can contribute to improving organisational performance and can demonstrate compliance with HR policies. Such tools have the capacity to process large amounts of workers’ information by monitoring in real time. This can be used to make predictions, inferences and decisions about workers on both an individual and a collective level. The UK GDPR has provisions on solely automated decision-making with legal or similarly significant effects, including profiling. We cover them here in the context of monitoring workers.
What do we mean by solely automated decision-making and profiling?
Solely automated decision-making is a decision made by automated means without any meaningful human involvement. Solely automated decision-making may involve profiling too. In a work context, this could be where employers use workers’ information from a number of sources to make inferences about future behaviour or make decisions about them.
Solely automated decision-making and profiling could pose risks to the rights and freedoms of workers.
- For more information on what we mean by Artificial Intelligence, see the section ‘What do you mean by AI?’ in our guidance on AI and data protection.
- For more on AI outputs and AI-assisted decisions, see: What is an AI output or an AI-assisted decision?
What do we need to consider if we are planning to make solely automated decisions with legal or similar effect on workers?
Article 22 of the UK GDPR restricts you from carrying out solely automated decision-making that has legal or similarly significant effects on people.
A legal effect is something that affects someone’s legal rights (eg a right to work). Similarly significant effects are more difficult to define, but are likely to include decisions that:
- significantly affect someone’s financial circumstances (eg increasing or decreasing a worker’s pay based on their performance at work); or
- affect a worker’s employment opportunities (eg dismissing someone).
You can only carry out this type of decision-making where the decision is:
- necessary for the entry into or performance of a contract with the person;
- authorised by law that applies to you (eg if you have a statutory or common law obligation to do something and automated decision-making is the most appropriate way to achieve your purpose); or
- based on a person’s explicit consent.
You must also ensure that you do not disadvantage workers who ask for human intervention in decision-making compared to those who are subject to automated decision-making.
Example – where Article 22 applies
An organisation pays workers based entirely on automated monitoring of their productivity. This decision is solely automated and has a significant effect, since it affects how much a worker is paid. Therefore, the additional rules under Article 22 apply.
Example – where Article 22 doesn’t apply
A courier service uses an automated vehicle tracking device to determine if its workers are making deliveries on time and to the correct address.
A worker is issued a warning about failing to make deliveries on time. The warning was based on complaints received from customers about not receiving their orders. These complaints were checked by the courier service’s HR manager, who reviewed the vehicle’s tracking device data. This showed that the vehicle had made only a small proportion of the journeys it was expected to make. The manager also discussed the issue with the worker to ask about the delays and complaints before deciding to issue the warning.
Therefore, the additional rules under Article 22 do not apply, as the courier service’s HR manager took the decision to issue the warning after reviewing the information. This is the case even though the warning was issued on the basis of information collected by the automated tracking device.
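For illustration only, the following minimal Python sketch shows the structural difference between the two examples above. Everything in it (the function names, the 0.5 thresholds, the output labels) is a hypothetical assumption made for this sketch, not something specified by the UK GDPR or this guidance.

```python
# Hypothetical sketch: the structural difference between the two
# examples above. All names, thresholds and outputs are illustrative
# assumptions, not requirements from the UK GDPR or this guidance.
from typing import Optional


def solely_automated_pay(productivity_score: float) -> str:
    """First example: the outcome takes effect with no human
    involvement and changes how much a worker is paid, so the
    additional rules under Article 22 apply."""
    return "reduce_pay" if productivity_score < 0.5 else "standard_pay"


def flag_for_manager_review(on_time_rate: float) -> Optional[str]:
    """Second example: the tracking system only surfaces an issue.
    An HR manager reviews the data and speaks to the worker before
    any warning is issued, so Article 22 does not apply."""
    if on_time_rate < 0.5:
        return "refer_to_hr_manager"  # a person makes the actual decision
    return None


print(solely_automated_pay(0.4))     # reduce_pay (decision already taken)
print(flag_for_manager_review(0.4))  # refer_to_hr_manager (a human decides)
```

The contrast is about where the decision is taken: in the first function the output is the decision itself, while in the second it is only an input to a decision made by a person.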
Further reading
- For more on issues encountered in AI decision-making, see our guidance: How do we ensure fairness in AI?
- For a deeper look at Article 22 in general, see: What is the impact of Article 22 of the UK GDPR on fairness?
- For further clarification on Article 22 and the use of automated decision-making, see: What does the UK GDPR say about automated decision-making and profiling?
- For more on the use of a lawful basis and its implementation in AI, see: How do we ensure lawfulness in AI?
- For more information on consent, see our guidance on consent.
What should we tell workers about solely automated decision-making?
The right to be informed means you must tell workers whose information you are processing that you are doing so for solely automated decision-making. You must give them “meaningful information about the logic involved, as well as the significance and the envisaged consequences” of the processing for them. You must also tell them about this if they submit a subject access request (SAR).
You must:
- give workers information about the processing;
- introduce simple ways for them to request human intervention or challenge a decision where the processing falls under Article 22; and
- carry out regular checks to make sure your systems are working as intended.
What is the role of human oversight?
When you use automated decision-making to make decisions with legal or similarly significant effects on workers, there is a risk that you might make them without appropriate human oversight. For example, you might reduce a worker’s pay because an automated system identifies poor performance. Making such a decision without meaningful human involvement infringes Article 22 of the UK GDPR, unless one of the conditions above applies. You should ensure that people assigned to provide human oversight remain engaged, critical and able to challenge the system’s outputs, wherever appropriate.
If you plan to use automated systems as a decision-supporting tool (which will therefore be outside the scope of Article 22), you should ensure that the people making the decision:
- check the system’s recommendation, rather than just routinely applying it to workers;
- are actively involved, not just as a token gesture. They should have ‘meaningful’ influence on the decision, including the ‘authority and competence’ to go against the recommendation; and
- weigh up and interpret the recommendation, considering all available input information and taking into account additional factors, as sketched below.
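To make this concrete, here is a hypothetical Python sketch of one possible shape for a decision-supporting workflow, in which the reviewer, not the system, takes the final decision. The names, the override rule and the recorded fields are all assumptions made for illustration; they are not prescribed by this guidance.

```python
# Hypothetical sketch of a decision-supporting workflow: the automated
# system produces a recommendation, and a reviewer with the authority
# and competence to depart from it takes the final decision. All names
# and the override rule are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Recommendation:
    worker_id: str
    action: str       # eg "issue_warning"
    rationale: str    # the logic behind the recommendation


@dataclass
class Decision:
    worker_id: str
    action: str
    reviewed_by: str
    overridden: bool  # did the reviewer depart from the recommendation?


def decide_with_human_oversight(rec: Recommendation, reviewer: str,
                                worker_explanation: str) -> Decision:
    """The reviewer weighs up the recommendation against all available
    input information, including the worker's own account."""
    # Illustrative override: if the worker's explanation accounts for
    # the issue, the reviewer rejects the automated recommendation.
    if "vehicle breakdown" in worker_explanation.lower():
        final_action = "no_action"
    else:
        final_action = rec.action
    return Decision(rec.worker_id, final_action, reviewer,
                    overridden=(final_action != rec.action))


# Recording whether reviewers ever override the system supports the
# regular checks described above: an override rate of zero may be a
# sign that human involvement has become a token gesture.
rec = Recommendation("W-042", "issue_warning",
                     "Tracking data shows repeated late deliveries")
print(decide_with_human_oversight(rec, "HR manager",
                                  "Two rounds missed due to vehicle breakdown"))
```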
Checklist
□ If we use the personal information from monitoring workers for automated decision-making (including profiling), we have checked that we comply with Article 22.
□ We offer alternatives to workers who ask for human intervention in decision-making.
□ We do not disadvantage workers who ask for human intervention in decision-making, compared to those who are subject to automated decision-making.
□ Where we use automation with human involvement, we ensure the involvement is meaningful.
□ We carry out regular checks to make sure the systems are working as intended.
You can also view and print off this checklist, and all the checklists in this guidance, on our checklists page.