
At a glance

  • You must comply with data protection law when you use biometric data, as it is a type of personal data.
  • You must take a data protection by design approach when using biometric data.
  • You should do a DPIA before you use a biometric recognition system. This is because using special category biometric data is likely to result in a high risk.
  • Explicit consent is likely to be the only valid condition available to you for processing special category biometric data.
  • Other conditions may apply, but these will depend on the specifics of your proposal and your justification for using special category biometric data.
  • If you can’t identify a valid condition, you must not use special category biometric data.

In detail

You must comply with data protection law when you use biometric data, as it is a type of personal data. This means you must demonstrate how you comply with the data protection principles.

You must adopt a data protection by design approach. This means you consider data protection and privacy issues upfront at the design stage and throughout the lifecycle of your system.

Aside from anticipating any risks that might arise, you must:

  • protect biometric data in any system that you use; and
  • only use processors who can provide sufficient guarantees of the measures they will use for data protection by design.

When thinking about how your plans to use biometric data will comply with data protection law, you should ask yourself the following questions at the initial planning stage.

  • Will our use of biometric data be a targeted and effective way to meet our needs?
  • What alternatives to biometric data have we considered?
  • Could any of these reasonably meet our needs in a less intrusive way?


Do we need to do a DPIA?

You must complete a DPIA for any processing likely to result in a high risk to people’s rights and freedoms. It is highly likely that you will trigger this requirement by using any biometric recognition system.

This is because data protection law says that you must do a DPIA if you plan to:

  • process special category data on a large scale; or
  • undertake systematic monitoring of a publicly accessible area on a large scale.

Most uses of biometric recognition systems meet one of these criteria.

Even if your system won’t trigger these criteria, you must do a DPIA if your processing matches one of the scenarios on our published list of high risk processing operations.

This includes several scenarios where biometric data is used for the purpose of uniquely identifying someone.

And, even if you don’t use special category biometric data, you may assess that your proposal to use biometric data is still likely to result in high risk, given the context and purpose.

You must consider the likelihood and potential impact of specific risks that may occur, and the potential for harm that may result.

To do this effectively, you should ensure that you understand how the system works and what its capabilities are. You may require specialist expertise, including from any providers you’re considering.

You should also consider whether the system you intend to use involves privacy enhancing technologies (PETs), or whether you can deploy these alongside it. PETs can support you in meeting your data protection obligations, for example by limiting the amount of personal data you use, or by providing appropriate technical and organisational measures to protect it.

Who is the controller for our biometric recognition system?

Your use of a biometric system may involve several different organisations.

You must be clear about:

  • when you are a controller (with the system provider as your processor); and
  • whether at any stage you might be a joint controller with another organisation.

If you and another organisation are acting as joint controllers, both of you must ensure that people are able to exercise their rights and understand what you are doing with their information.

You may have contracts with organisations who act as your processor. You must specify in your contract that the processor only uses the biometric data it collects under your instruction.

If the processor uses this biometric data outside of your instruction, it will be using this data for its own purposes. In this case, a processor would be liable for any harm resulting from this processing, including potential regulatory action.

It can be common for providers of AI solutions to want to use the data generated by customers to further develop their models.

You should:

  • establish whether any provider you intend to use wants to do this;
  • confirm that the system provider would be acting as a controller for this particular use of data; and
  • establish how you or the provider would inform people about the use of their data for these purposes.

This could include taking steps to amend your contract or choosing a different provider altogether.

You should regularly review any services you outsource and be able to modify or switch them to another provider if their use is no longer compliant in your circumstances.

Do we need explicit consent when we process special category biometric data?

In most cases, explicit consent is likely to be the only valid condition for processing special category biometric data.

Where there is an imbalance of power between you and the person, you should carefully consider whether relying on explicit consent is appropriate.

This is because anyone who depends on your services, or fears adverse consequences if they refuse, may feel they have no choice but to agree. This means people may not freely give their consent. This is particularly an issue for public authorities and employers.

You must offer a suitable alternative to people who choose not to consent and ensure they do not feel under pressure to consent.

Example

A gym chain plans to change its swipe card access control system to one that uses facial recognition technology. The gym wants to do this to enhance the customer experience by making it easier and quicker for people to access its facilities.

The new system will use biometric recognition to process customers’ biometric data for the purpose of uniquely identifying them. The gym must identify a valid condition for processing this special category data.

Given its intended objective, the gym can only rely on explicit consent for the processing, as no other conditions in Article 9 apply. To ensure the consent it obtains is valid, the gym asks its customers to indicate their agreement by completing a clear and specific statement about the processing.

It also gives customers the option of entering a unique PIN code if they don’t want their biometric data processed. For these customers, biometric data is not collected or used at any stage.

The gym documents this in its DPIA, describing the non-biometric option offered to customers who prefer not to have their biometric data used. The DPIA also details the gym’s wider compliance with data protection law, and how it has taken a data protection by design approach to the planning and roll-out of the new system.

What other conditions might apply for biometric recognition?

You may be able to identify an alternative Article 9 condition to rely on, if explicit consent isn’t appropriate in your circumstances.

Whatever condition you consider, you must ensure that you can meet all of its requirements. If you are unsure, you should seek independent legal advice.

Prevention and detection of unlawful acts

This condition applies if:

  • you need to use biometric data for crime prevention or detection purposes; and
  • asking for people’s consent means you wouldn’t achieve those purposes.

You must be able to show that using special category biometric data is “necessary” both for the prevention and detection of crime and for reasons of substantial public interest.

To satisfy this condition, you should demonstrate that you are using biometric data in a targeted and proportionate way to deliver the specific purposes set out in the condition, and that you cannot achieve them in a less intrusive way.

You must also have an appropriate policy document in place at the time your processing starts.

Research

This condition applies if you intend to use special category biometric data for one of the research purposes.

You must be able to show that using special category biometric data is “necessary” for the research purpose. This means that your use of special category biometric data is a reasonable and proportionate way to achieve your purpose.

In order to rely on this condition, you must also comply with further safeguards, including demonstrating that your use of special category biometric data is:

  • not likely to cause someone substantial damage or substantial distress; and
  • in the public interest.

Example

A company uses a dataset of biometric profiles to assess and address the risks of discrimination in its AI system. The dataset comprises a diverse range of ages and genders.

Some of the dataset is used to train the model so that it learns from a diverse range of inputs.

The remaining data is used to test the performance of the model and confirm comparable accuracy for all ages and genders. This helps the company avoid potential discriminatory effects when it launches the system.
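As a loose illustration of this kind of check, the sketch below computes recognition accuracy separately for each group in a held-out test set and flags a large gap. The group labels, results and the 5-point tolerance are hypothetical placeholders; a real assessment would use a much larger, representative dataset and the provider’s test protocols.

    from collections import defaultdict

    # Each record: (group, whether the system's decision was correct).
    # Hypothetical held-out test results for illustration only.
    test_results = [
        ("age_18_30", True), ("age_18_30", True), ("age_18_30", False),
        ("age_60_plus", True), ("age_60_plus", False), ("age_60_plus", False),
    ]

    counts = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in test_results:
        counts[group][0] += int(correct)
        counts[group][1] += 1

    accuracies = {group: c / t for group, (c, t) in counts.items()}
    for group, acc in accuracies.items():
        print(f"{group}: accuracy {acc:.0%}")

    # Flag a notable gap between the best- and worst-served groups
    if max(accuracies.values()) - min(accuracies.values()) > 0.05:
        print("Warning: accuracy differs notably between groups")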


What if we don’t have consent and no other condition applies?

If you cannot obtain explicit consent, and no other condition is appropriate, then processing special category biometric data will be unlawful and you will infringe data protection law.

The first data protection principle requires any processing of personal data to be lawful, fair and transparent.

If you cannot identify a valid condition, then you won’t be able to comply with this principle. You must therefore consider other options to achieve your purpose and must not use a biometric recognition system.

Can we use a biometric recognition system to make automated decisions about someone?

Many uses of biometric recognition systems inherently involve making solely automated decisions about people.

Depending on the specifics of your deployment, these decisions may have legal or similarly significant effects on those people (eg denial of a service).

Data protection law restricts the circumstances in which you can make these sorts of decisions. It does this by specifying the:

  • conditions you can rely on; and
  • safeguards you must have in place (eg the ability for someone to request human review of a decision).

To determine whether your biometric recognition system makes these kinds of decisions, you should ask the following:

  • What decisions do you intend the system to make?
  • Who (or what) determines these decisions?
  • Is the decision solely automated, or is there any meaningful human involvement?
  • What are the potential impacts of the decisions on someone? Do the decisions affect people’s legal rights or have a similarly significant effect on their circumstances or choices?


What else do we need to consider?  

Using biometric recognition systems can raise several potential risks, including:

  • accuracy – where the system generates errors because it does not correctly identify people;
  • discrimination – where people or groups are treated unjustly on the grounds of protected characteristics; and
  • security – where unauthorised people can access the biometric data, or the system can be tricked (spoofed) into allowing access when it shouldn’t.

If you don’t address these risks, you could contravene data protection law and equalities legislation. This may expose you to legal claims, as well as regulatory action.

How do we deal with accuracy risks?

Biometric recognition systems use probabilistic matching to determine whether two values are ‘sufficiently similar’ to each other. Their statistical accuracy is a measure of how often they correctly match the observed input data to the stored data; rather than making exact comparisons, they make statistically informed guesses.

Example

Traditional verification methods (eg a password) make a simple comparison between an input value (what you type) and a stored value (the password). If the input exactly matches the stored value, then access is granted.

This is a binary outcome – either the input value will match the stored password, or it won’t.

Biometric recognition systems work in a different way.

Although their objective is the same, a range of factors mean that no two captures of biometric data are ever truly identical, in the way that two entries of the same password are.

Differences in environmental conditions (such as the amount of light or glare) may mean that an input value (ie an image of a face) doesn’t precisely match the stored value (ie the image captured when the person first enrolled on the system).

Other issues can complicate the matching process, such as the passage of time between the original enrolment and the later re-presentation.
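To make the contrast concrete, here is a minimal sketch of the two approaches. It is illustrative only: the biometric side assumes the system has already converted each capture into a numeric feature vector (a template), and uses cosine similarity with an example threshold as a stand-in for a real, typically proprietary, matching algorithm.

    import math

    def password_match(entered: str, stored: str) -> bool:
        # Exact comparison: a binary outcome
        return entered == stored

    def biometric_match(input_template, enrolled_template, threshold=0.9) -> bool:
        # Probabilistic comparison: no two captures are identical, so the
        # system asks whether they are "sufficiently similar"
        dot = sum(a * b for a, b in zip(input_template, enrolled_template))
        norms = (math.sqrt(sum(a * a for a in input_template))
                 * math.sqrt(sum(b * b for b in enrolled_template)))
        return dot / norms >= threshold

    print(password_match("hunter2", "hunter2"))                   # exact match: True
    print(biometric_match([0.9, 0.1, 0.4], [0.88, 0.13, 0.42]))   # similar enough: True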

Probabilistic matching introduces the potential for the following types of errors, illustrated in the sketch after this list:

  • false positive errors (also known as type I errors) occur when a system incorrectly observes a case as positive when it shouldn’t (ie a match is suggested, but the image does not match the person); and
  • false negative errors (also known as type II errors) occur when a system incorrectly observes a case as negative when it should be positive (ie the image does match the person, but the system did not recognise them).
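The sketch below shows how both error rates can be estimated from match scores, and how they pull against each other: raising the decision threshold cuts false positives but increases false negatives. The scores and thresholds are invented for illustration.

    # Hypothetical similarity scores from test comparisons:
    # "genuine" pairs are the same person; "impostor" pairs are different people
    genuine_scores = [0.91, 0.88, 0.72, 0.95, 0.69]
    impostor_scores = [0.40, 0.83, 0.35, 0.55, 0.61]

    def error_rates(threshold):
        # Type I: an impostor is accepted; type II: a genuine person is rejected
        false_positives = sum(s >= threshold for s in impostor_scores)
        false_negatives = sum(s < threshold for s in genuine_scores)
        return (false_positives / len(impostor_scores),
                false_negatives / len(genuine_scores))

    for threshold in (0.6, 0.75, 0.9):
        fpr, fnr = error_rates(threshold)
        print(f"threshold {threshold}: {fpr:.0%} false positives, {fnr:.0%} false negatives")

Where a provider lets you configure the matching threshold locally, this trade-off is what you are tuning.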

You should ensure that you use well-developed systems that minimise the number of errors that could occur, even if they can’t eliminate them entirely.

Before you deploy a biometric recognition system, you should understand the potential implications of these sorts of errors. You should consider this in terms of the possible impact on both the people who will rely on the system, as well as your organisation as a whole.

The nature of error rates means that some solutions may be more appropriate for some uses than others. You should assure yourself of the statistical accuracy of any solution, including where any published information on the performance of a solution came from (ie lab testing versus ‘real-life’ scenarios similar to yours).

You should also understand whether you can configure your system locally (eg in order to optimise performance and reduce errors to an acceptable level).

Further reading

Guidance on AI and data protection – including the specific section on statistical accuracy and the trade-offs involved in considering terms like the precision and recall of AI systems.


How do we deal with risks of discrimination?

If you intend to use a biometric recognition system, then as part of complying with the fairness principle, you must assess whether it is likely to have a discriminatory impact on people.

Biometric recognition systems have the potential to discriminate in a number of ways.

Like other technologies which use AI, biometric technologies are susceptible to bias and discrimination. This is because systems designed to detect physical characteristics (eg fingerprints) need to manage a far greater range of variables than systems detecting a uniform object (eg a swipe card).

Some people may be unable to interact with a biometric solution (eg a fingerprint scanner) due to physical disability. This could mean that they are at a significant disadvantage when accessing a specific service, benefit or product, compared to those who can use the biometric solution.

Differences exist between people and groups. If your system detects the characteristics of certain groups less well than others, it is likely to have a biased outcome. This could mean that it discriminates against a particular group.

If this happens, you are likely to find it challenging to demonstrate how your system complies with the fairness principle.

Example

Fingerprint recognition is less accurate for adults over 70 and children under 12. This is because older adults’ fingerprints are less distinct, and young children’s fingerprints are still developing and so change rapidly.

As a result, a technology that may superficially appear fair can have unfair impacts, as it will systematically perform worse for older adults and young children.

How do we deal with security risks?

Biometric data includes some of the most sensitive information about people. It captures key, intrinsic features that can’t easily be changed (eg facial features, eye shape, or the sound of someone’s voice).

You must apply appropriate security measures when you use biometric data. You should determine these by carrying out a risk analysis that considers:

  • the circumstances of your processing and the likely security threats you may face;
  • the damage or distress that may be caused if the biometric data is compromised; and
  • what forms of attack your system might be vulnerable to.

You must also conduct regular testing and reviews of your security measures to ensure they remain effective.

You must also encrypt any biometric data that you use.
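As a minimal sketch of encryption at rest, the example below uses Fernet (authenticated symmetric encryption) from the widely used open-source cryptography package for Python. The template bytes are placeholders, and key management details are omitted: in practice, the key must be stored securely and separately from the data (eg in a dedicated key management service).

    from cryptography.fernet import Fernet

    # Generate a key once and store it securely, away from the data
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # A serialised biometric template (placeholder bytes for illustration)
    template = b"example-serialised-biometric-template"

    encrypted = fernet.encrypt(template)   # safe to store at rest

    # Later, decrypt only at the point of comparison
    decrypted = fernet.decrypt(encrypted)
    assert decrypted == template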