Automated decision-making and profiling for recruitment and selection
Our consultation on this draft guidance is now closed. The final version will be published in due course.
In detail
- What is the scope of this chapter?
- How can we use solely or partly automated decision-making and profiling for recruitment purposes?
- What are the risks of using solely or partly automated decision-making and profiling for recruitment?
- How can we address the risks of using solely or partly automated decision-making and profiling for recruitment?
- What do we need to consider when using partly automated decision-making and profiling for recruitment?
- What do we need to consider when using solely automated decision-making and profiling for recruitment?
- What do we need to tell candidates about solely automated decision-making and profiling?
- Do people have the right to challenge the decision?
- Can our use of third-party AI service providers for recruitment purposes affect controllership?
- What else do we need to consider?
What is the scope of this chapter?
Solely automated decision-making refers to decisions fully made by automated means, without meaningful human involvement. It often involves profiling too. Profiling analyses aspects of a candidate’s personality, behaviour, interests and habits to make decisions about them. In a recruitment context, this can mean using candidates’ information from a number of sources to make inferences about potential future behaviour or make decisions about whether they are suitable for a particular role.
In this guidance, any reference we make to ‘automated decision-making and profiling’ or ‘automated decisions and profiling’ means automated decision-making for recruitment purposes which may or may not include, or be based on, profiling.
Data protection law restricts the use of solely automated decision-making and profiling that has legal or similarly significant effects on people. This includes a decision about whether to shortlist a candidate, recommend them for interview, reject them or promote them. In this guidance, any reference we make to ‘recruitment decisions’ means decisions about candidates that have legal or similarly significant effects on them.
We explain the circumstances in which you may be able to use solely automated decision-making and profiling to make recruitment decisions in the section What do we need to consider when using solely automated decision-making and profiling for recruitment?
You can use automated systems to assist you with recruitment decisions, provided that they are not solely automated, and there has been meaningful human involvement in the decision. We refer to this as ‘partly automated decision-making and profiling’ throughout this guidance. We explain what we mean by ‘meaningful human involvement’ in the section What do we need to consider when using partly automated decision-making and profiling for recruitment?
How can we use solely or partly automated decision-making and profiling for recruitment purposes?
Solely or partly automated decisions and profiling can be made using Artificial Intelligence (AI) and its subset, machine learning. These two terms are not defined in data protection law, but they generally refer to methods that enable machines to simulate human intelligence and perform tasks which usually require it.
Organisations are increasingly relying on solely or partly automated decision-making and profiling, which may involve using AI, at various stages of the recruitment process. This is often seen as a more cost-effective way to screen applications, particularly where organisations receive a high volume of applications. It may help improve the efficiency of the recruitment process, but only if it is used responsibly.
For example, you can use solely or partly automated decision-making and profiling to:
- advertise using algorithms; or
- screen CVs using AI (eg to remove potential points of bias, such as location or ethnic background).
However, you must use these methods in a way that complies with data protection law.
What are the risks of using solely or partly automated decision-making and profiling for recruitment?
Solely and partly automated decision-making and profiling presents risks to candidates’ rights and freedoms, including their information rights. It can result in unfair discrimination or impact their labour rights. For example:
- algorithms may target certain candidates in unfair or discriminatory ways on the basis of protected characteristics;
- profiling is often invisible to candidates;
- candidates may not expect their information to be used in this way;
- controllers might not fully understand how the process works, or be able to clearly explain the process to candidates who are affected;
- candidates might not fully understand how the process works or how it affects them, even if this is explained to them;
- the decisions may lead to significant adverse effects for some candidates; and
- there is always likely to be a margin of error or a risk of inaccuracy.
Solely or partly automated decision-making processes may also introduce biases or inaccuracies that can lead to unfair results. This may include software that:
- excludes candidates who live a certain distance away from the place of work, even if they intend to move to the area;
- incorrectly eliminates candidates who meet the job criteria;
- incorrectly decides that a person has no legal right to work in the UK; or
- eliminates candidates with gaps in their CV, even where the gap was due to a serious illness.
Example
A call centre decides to invest in AI for the purpose of recruiting call handlers. The software it uses has been trained using test CVs from mostly male applicants. As a result, it tends to approve applications from males and reject applications from similarly qualified females. As the software discriminates against people, the call centre is not using candidates’ information fairly.
An organisation may not intend to discriminate against people. However, this may be an unintended consequence of its use of an AI system and how that system develops. As controller, you are responsible for the processing, and for ensuring that your AI system only uses information in ways you plan for or expect.
Partly automated decision-making and profiling carries similar risks to solely automated methods, although meaningful human involvement can act as a safeguard and may reduce these risks. However, you must assess the risks of using any solely or partly automated systems for making recruitment decisions by doing a data protection impact assessment (DPIA).
How can we address the risks of using solely or partly automated decision-making and profiling for recruitment?
You must do a DPIA if you plan to use solely or partly automated decision-making and profiling for recruitment purposes as both activities are high risk. In particular, you must:
- consider whether the automated method is necessary and proportionate in the circumstances;
- consider whether you can use less privacy-intrusive alternatives instead;
- be selective about when to use these methods and to what extent;
- ensure your software does not introduce biases, in particular those that target or discriminate against candidates based on their protected characteristics;
- ensure you don’t discriminate against someone on the basis of their special category information; and
- carefully monitor and assess the operation of any software you plan to use.
You must manage risks of bias and discrimination in any system you use and be able to mitigate the risks before you use automated decision-making and profiling for recruitment purposes. You should take measures to:
- prevent candidates from being discriminated against on the basis of their special category information;
- ensure that you are able to correct inaccuracies and minimise errors; and
- keep the information secure.
Even if you have not developed the software yourself, you must still understand the data protection implications of its use, and whether it presents a risk to candidates. You must cover this in your DPIA. If necessary, you should obtain technical or legal advice about software you plan to use to ensure that it complies with data protection law. If you cannot mitigate the risk, you should not use the software.
Example
A financial services organisation runs an annual scheme to recruit graduates. Due to its popularity, it’s not practical for the organisation to sift applications without using an automated process. The organisation uses software designed to eliminate irrelevant applications and make a shortlist for the organisation to consider. The organisation is confident that the software is fit for purpose and operates with a high level of statistical accuracy. However, as there is a small margin of error, it must have safeguards in place, in the event that the software incorrectly eliminates someone.
If anyone who meets the criteria is not selected, the automated system has failed, as it was designed only to eliminate irrelevant applications. As a safeguard, the organisation puts in place an appeal process for candidates who meet the minimum criteria but have not been selected. It also tells candidates to follow this process if they meet the criteria but have not been shortlisted.
You must also make reasonable adjustments for people who have a disability. Where there is a risk of bias or unfair treatment, you must use alternatives to automated processing, or mitigate these risks.
What do we need to consider when using partly automated decision-making and profiling for recruitment?
You may want to rely on partly automated decisions and profiling for recruitment purposes, for example by using AI to assist you in reaching your decisions. In this case, you must build meaningful human involvement into each stage of the process in which recruitment decisions are being made about candidates. This means that any decisions about whether to progress a candidate to the next stage are made by a human. You should ensure that:
- a human reviews any solely automated outputs that you may use to determine whether a candidate is selected or eliminated from the recruitment process;
- a human has the power to disagree with the AI recommendations or predictions, and can overturn them;
- where there are a number of candidates with similar qualifications and experience, the decision about who to interview is made by a human, although you can consider the recommendations made by the software;
- you don’t attach disproportionate weight to the AI recommendations; and
- you have trained staff on how to consider AI-driven or solely automated decisions, without attaching undue weight to them, and they are able to reach their own conclusions.
Example
An IT company receives a large volume of applications for a software developer vacancy. Candidates sit an aptitude test as part of the recruitment process, which shows whether they can use specific programs and methodologies. The test requires them to answer a number of multiple choice questions, and it is marked using a fully automated system. The scores are automatically attributed to the candidates based on the number of correct answers. The system then ranks candidates according to their scores. Each candidate is also required to attend an interview.
Good practice: the company attaches some weight to the test scores for each candidate but does not eliminate anyone on the basis of their test scores alone. It only takes the test scores into account as part of a wider recruitment exercise, but these are not decisive. Each candidate is also required to participate in a group exercise and attend an interview, both scored by a human. The overall decision about whether to recruit someone is made by a human, taking into account all available information about their performance (including the test scores).
Bad practice: the company uses the test scores to eliminate half of the candidates, without a human having considered their overall performance. This is a solely automated decision with significant effects, so it is unlawful unless the company is able to rely on an exception allowing the solely automated processing (and has suitable safeguards in place).
If a human has no power to overturn the AI recommendations, the recruitment decision has been made by solely automated means, and there has been no meaningful human involvement. This is the case even if the human has reviewed the information.
You should keep a record of each time a human reviewer overrides an automated recruitment decision. This will help you evaluate both the system you use and the effectiveness of the human involvement. Remember that human reviewers can potentially introduce unwanted bias into the recruitment system through their choices, and you should keep both under review.
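For illustration only, the minimal sketch below (in Python, with hypothetical field and file names) shows one way to keep such a record: each time a human reviews an automated recommendation, the outcome is written to a simple audit log, including whether the recommendation was overridden and why.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ReviewRecord:
    """One human review of an automated recruitment recommendation (hypothetical structure)."""
    application_id: str     # internal reference, not the candidate's name
    ai_recommendation: str  # eg "reject" or "progress"
    human_decision: str     # the reviewer's final decision
    overridden: bool        # True if the reviewer disagreed with the automated output
    reviewer_id: str        # who carried out the review
    reason: str             # why the reviewer agreed with or overrode the recommendation
    reviewed_at: str        # timestamp of the review

def log_review(record: ReviewRecord, path: str = "review_log.jsonl") -> None:
    """Append the review record to a simple audit log (one JSON object per line)."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_review(ReviewRecord(
    application_id="APP-00042",
    ai_recommendation="reject",
    human_decision="progress",
    overridden=True,
    reviewer_id="recruiter-07",
    reason="CV gap explained by a period of illness; candidate meets the job criteria",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
))
```

Reviewing how often recommendations are overridden, by whom and for what reasons can help you evaluate both the system and the consistency of the human reviewers themselves.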
What do we need to consider when using solely automated decision-making and profiling for recruitment?
Candidates have the right not to be subject to solely automated decision-making and profiling for recruitment purposes, as this may have legal or similarly significant effects on them. If a decision is based ‘solely’ on automated processing (whether or not this also includes profiling), it means that there has been no meaningful human involvement in the decision-making process.
Data protection law restricts you from making solely automated decisions, including those based on profiling, that have a legal or similarly significant effect on candidates, unless you can rely on an exception.
Therefore, you must not shortlist candidates using solely automated decision-making and profiling unless you’re able to rely on an exception and you have safeguards in place. There are three exceptions you can use if the decision is:
- necessary for entering into or performance of a contract between you and the candidate (see the section on using contract as a lawful basis for recruitment);
- authorised by law; or
- based on the candidate’s explicit consent (see the earlier section on explicit consent).
In addition to relying on one of these exceptions, you must also have suitable safeguards in place to protect the person’s rights, freedoms and legitimate interests. At the very least, these must include the right for people to:
- obtain human intervention;
- express their point of view; and
- challenge the decision.
If you’re using special category personal information you can only use solely automated decision-making and profiling if you can rely on one of the exceptions listed above and:
- you also have the candidate’s explicit consent; or
- the processing is necessary for reasons of substantial public interest.
In either case, you must have safeguards in place to protect the candidate’s rights, freedoms and legitimate interests.
Further reading
- Automated decision-making and profiling – When can we carry out this type of processing?
- See earlier section Substantial public interest
- What are the substantial public interest conditions?
- What is explicit consent?
- What are the appropriate safeguards – Fairness in the AI lifecycle
- See the earlier sections of this guidance, What lawful bases might apply if we want to process candidates’ information? for further information about contract and Explicit consent
What do we need to tell candidates about solely automated decision-making and profiling?
If you are using solely automated decision-making and profiling to make recruitment decisions about candidates, there are additional rules about what to tell them. You must provide meaningful details about the logic involved and the significance and likely consequences for the candidate. This means that you should explain to candidates:
- what information you will ask them to provide and why this is relevant to the recruitment process;
- how the system uses their information, including how it makes decisions about them (eg the criteria it uses to select or reject them);
- how you will store their information and for how long;
- what risks there are and how you will mitigate these risks (eg by regularly testing the software to ensure the methods remain fair, effective and non-discriminatory);
- the level of any human involvement in the decision-making; and
- their right not to be subject to solely automated decision-making.
For example, if you are confident in the statistical accuracy of your software, but there is a margin of error, you should explain:
- what the risks are and how this can impact the recruitment process;
- what safeguards you have in place to reduce these risks; and
- how people can challenge a decision and request human intervention.
You may have concerns that explaining your automated decision-making processes could disclose commercially sensitive material about how your system works. However, it is unlikely that providing this information creates such a risk. You must give people an explanation of how your automated system uses their information, but you are not required to disclose your source code or any algorithmic trade secrets. If you consider it necessary to limit certain details (eg feature weightings or importance), you should be able to justify and document your reasons for this.
You must have in place straightforward ways for people to request human intervention or challenge a decision where the processing falls within Article 22.
Do people have the right to challenge the decision?
People have the right to challenge any solely automated recruitment decision which significantly affects them and to request human intervention. You should have simple ways in place to allow them to do this. When you decide to make solely automated decisions (which may include profiling), you must consider a person’s ability to understand and contest those decisions. Remember that each candidate has the right not to be subject to solely automated decision-making and profiling for recruitment purposes. This means that even software which demonstrates sufficient statistical accuracy may not be appropriate to use for recruitment purposes in the absence of safeguards.
Example
A financial services organisation consistently receives high volumes of applications for graduate roles every year. It purchases AI software to help it eliminate candidates who do not meet the minimum criteria specified in the job application.
After making enquiries about the accuracy of the software, the organisation is reassured that the system is 99.9% accurate. The organisation receives 20,000 applications from candidates. Based on these figures, the software is likely to eliminate around 20 suitable candidates (0.1% of 20,000) from the process. However, the organisation considers this to be an acceptable error rate, provided it has suitable safeguards in place and ways for candidates to challenge the decision.
Each of these candidates has the right to challenge the solely automated decision. The organisation must apply safeguards, including the right to obtain human intervention, so that it can use the software lawfully, fairly and transparently.
In doing so, the organisation must be able to identify when the system has made an error. It can do this by informing candidates that it is using AI software to shortlist them and explaining what the software will be used for. If candidates know that the software only removes those who do not meet the criteria, anyone who is eliminated can challenge the decision on the basis that they do meet the criteria.
The organisation must have processes in place that enable candidates to easily challenge the decision and that allow a human to review the AI decision.
You should record and monitor the number of challenges made by candidates on the grounds of fairness. This helps you assess how effectively your AI software enables you to comply with data protection law. You should address unfair outcomes in a timely way. In the context of recruitment, you must be able to address errors quickly enough that you do not unfairly eliminate the affected candidates from the recruitment process.
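As an illustration only (using the figures from the example above and hypothetical function names), the sketch below shows how you might estimate the expected number of incorrect decisions from a quoted accuracy figure, and keep a simple tally of fairness challenges to compare against it.

```python
def expected_errors(applications: int, accuracy: float) -> float:
    """Expected number of incorrect decisions for a quoted statistical accuracy."""
    return applications * (1 - accuracy)

# Figures from the example above: 20,000 applications at 99.9% accuracy.
print(round(expected_errors(20_000, 0.999)))  # about 20 candidates wrongly eliminated

# A simple running tally of challenges raised on fairness grounds.
challenges: list[dict] = []

def record_challenge(application_id: str, upheld: bool) -> None:
    """Record one candidate challenge and whether human review upheld it."""
    challenges.append({"application_id": application_id, "upheld": upheld})

record_challenge("APP-00042", upheld=True)
upheld_count = sum(1 for c in challenges if c["upheld"])
print(f"{len(challenges)} challenges received, {upheld_count} upheld on review")
```

If the number of upheld challenges consistently exceeds what the quoted accuracy would suggest, that is a signal to revisit both the software and your safeguards.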
Can our use of third-party AI service providers for recruitment purposes affect controllership?
Yes. If you decide to use someone else’s AI systems for your recruitment purposes, you must be clear about their role. This is because the development and deployment of AI systems which process personal information often involves multiple organisations. For example, it is possible for an AI service provider to be a controller or joint controller for some AI processing phases and a processor for others.
The decision to use an AI service provider is one you take as a controller. Being aware of the status of your service providers will help you to establish accountability - both from the outset of any AI processing activities in recruitment and during the course of its use. It will also enable everyone to understand their respective obligations under data protection law.
When using AI service providers for recruitment purposes, you should consider the types of decisions which may impact their status as a controller for each processing activity. These may include decisions about:
- the source and nature of information you use to train an AI model (both before its use and throughout its development);
- what your use of an AI model is intended to predict or classify; or
- how the AI model you use will be continuously tested and updated (both before its use and throughout its development).
Your AI service provider can also make some technical decisions about how they process candidate information for you and still be a processor.
Depending on the terms of their contract with you, these decisions may include:
- how machine learning algorithms are specifically implemented (eg the programming language they are written in);
- how candidate information and AI models are stored;
- the measures used to optimise learning algorithms and models; and
- how AI models will be deployed in practice.
If your AI service provider processes personal information for any purpose other than those you have instructed, then they are a controller or joint controller for that processing.
Further reading
- What are the accountability and governance implications of AI?
- For further examples, please see indicative example scenarios.
You should regularly review any AI services you have outsourced to a third party for recruitment and selection purposes and be able to modify them or switch to another service provider, if required.
You must conduct a DPIA for your use of AI in recruitment as this is likely to result in a high risk to the rights and freedoms of candidates. You should clearly document in your DPIA the relationships between you and your AI service providers, alongside the roles and obligations of each party. You should also cover who you share personal information with and when (eg via a data flow diagram or process map).
If you decide that you and your AI service provider are joint controllers, you should collaborate in the DPIA process as necessary.
Where your supplier is a processor, they must assist you in completing the DPIA.
Further reading
- Broader guidance on controller and processor relationships in the context of AI can be found in our main Guidance on AI and data protection and more specifically in the section: How should we understand controller / processor relationships in AI?
- Controller and processors
- Data Protection Impact Assessments (DPIAs)
- Data protection by design and default
What else do we need to consider?
As controller, you are responsible for the software you use to process personal information, even if you did not build the software or create the algorithm. When you plan to use software, it is your responsibility to ask how the system operates, and to ensure that it uses personal information in a way that complies with data protection law. For example, you could ask for information on:
- the demographic groups a model was trained on;
- whether any underlying bias has been detected or may emerge; and
- any algorithmic fairness testing that has been conducted.
You should regularly review your AI system and have measures in place to check for bias or discrimination. In particular, you should regularly review the effectiveness and algorithmic fairness of the software in relation to people with protected characteristics and special category information.
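For illustration only, the sketch below (in Python, with made-up monitoring data and a hypothetical helper function) shows one simple check of this kind: comparing shortlisting rates across demographic groups. It is not a complete algorithmic fairness assessment, just an example of the routine monitoring described above.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Return the shortlisting rate per demographic group.

    `outcomes` is a list of (group, shortlisted) pairs, where `group` is a
    demographic label held only for fairness monitoring.
    """
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

# Made-up monitoring data for illustration only.
outcomes = [("group_a", True), ("group_a", False), ("group_a", True),
            ("group_b", False), ("group_b", False), ("group_b", True)]

rates = selection_rates(outcomes)
print(rates)  # eg group_a selected at about 0.67, group_b at about 0.33

# A large gap between groups is a prompt to investigate the system for bias.
if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
    print("Selection rates differ noticeably between groups; review the system")
```

In practice you would run this kind of check on much larger samples, alongside the other measures in your DPIA, before drawing any conclusions.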
If your software has been trained on particular information, it may be less statistically accurate if circumstances change.
Example
An organisation’s head office is based in a particular town in the UK. The organisation uses AI software to conduct interviews. As most of the candidates are local, the software has been trained using primarily local accents. It demonstrates a preference for local accents, but is still sufficiently statistically accurate, provided there are safeguards in place. This includes reviewing the decisions of AI where bias is likely or has been identified.
The organisation’s working model has changed in recent years and it decides to recruit remote workers based in other parts of the UK and worldwide. The bias in favour of particular accents has reduced the statistical accuracy of the AI to a level that is no longer adequate or fair. This means that the organisation should consider retiring the software, as it is no longer fit for purpose. Alternatively, it could retrain the software with new representative data.