
At a glance

  • PETs can help you demonstrate a ‘data protection by design and by default’ approach to your processing.
  • PETs can help you comply with the data minimisation principle, by ensuring you only process the information you need for your purposes.
  • PETs can also provide an appropriate level of security for your processing.
  • You can use PETs to give people access to datasets that would otherwise be too sensitive to share, while ensuring people’s information is protected.
  • Most PETs involve processing personal information. Your processing still needs to be lawful, fair and transparent.
  • You should identify the risks to people by performing a case-by-case assessment of the processing (eg through a data protection impact assessment (DPIA)). This will determine if PETs are appropriate to mitigate those risks.
  • You should not regard all PETs as a way to anonymise personal information. Not all PETs result in effective anonymisation, and you can achieve anonymisation without using them.

In detail

What are privacy-enhancing technologies (PETs)?

PETs are technologies that embody fundamental data protection principles by:

  • minimising the use of personal information (ie information covered by the UK GDPR definition of personal data);
  • maximising information security; or
  • empowering people.

Data protection law does not define PETs. The concept covers many different technologies and techniques. The European Union Agency for Cybersecurity (ENISA) refers to PETs as:

‘Software and hardware solutions, ie systems encompassing technical processes, methods or knowledge to achieve specific privacy or data protection functionality or to protect against risks of privacy of an individual or a group of natural persons.’

How do PETs relate to data protection law?

PETs are linked to the concept of ‘data protection by design’ and are therefore relevant to the technical and organisational measures you put in place. They can help you implement the data protection principles effectively and integrate necessary safeguards into your processing.

PETs can help you demonstrate a ‘data protection by design and by default’ approach by:

  • complying with the data minimisation principle, by ensuring you only process the information you need for your purposes;
  • providing an appropriate level of security;
  • implementing robust anonymisation or pseudonymisation solutions (see the sketch after this list); and
  • minimising the risk that arises from personal data breaches, by making the personal information unintelligible to anyone not authorised to access it.
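
For example, pseudonymisation can be implemented by replacing direct identifiers with tokens derived under a secret key that you hold separately from the dataset. Below is a minimal Python sketch of this idea, not a recommended design; the field names, key handling and record are illustrative assumptions:

```python
import hashlib
import hmac
import secrets

# Illustrative only. The key must be generated, stored and rotated under
# strict access controls, separately from the pseudonymised dataset,
# otherwise the protection is undermined.
PSEUDONYMISATION_KEY = secrets.token_bytes(32)  # in practice, from a key vault

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, deterministic token."""
    return hmac.new(PSEUDONYMISATION_KEY,
                    identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record: the token supports linkage across records without
# exposing the identifier itself.
record = {"patient_id": "943 476 5919", "diagnosis": "J45.9"}
record["patient_id"] = pseudonymise(record["patient_id"])
print(record)
```

Anyone holding the key can re-link tokens to known identifiers by recomputing them, so the output remains pseudonymised personal data rather than anonymous information.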

What are the benefits of PETs?

PETs can help reduce the risk to people, while enabling you to further analyse the personal information. The ability to share, link and analyse personal information in this way can give you valuable insights while ensuring you comply with the data protection principles.

By using PETs, you can obtain insights from datasets without compromising the privacy of the people whose data is in the dataset. Appropriate PETs can make it possible to give access to datasets that would otherwise be too sensitive to share.

What are the risks of using PETs?

You should not regard PETs as a silver bullet to meet all of your data protection requirements. Your processing must still be lawful, fair and transparent. Before considering PETs, you should:

  • assess the impact of your processing;
  • be clear about your purpose;
  • understand and document how PETs can help you to comply with the data protection principles; and
  • understand and address the issues PETs may pose to complying with the data protection principles (eg issues with accuracy and accountability).

Lack of maturity

Some PETs may not be sufficiently mature in terms of their scalability, the availability of standards and their robustness to attacks. We list some factors you should consider when assessing the maturity of PETs in the section ‘How do we determine the maturity of a PET?’.

Lack of expertise

PETs can require significant expertise to set up and use appropriately. Insufficient expertise can lead to mistakes in implementation and a poor understanding of how to configure the PET to deliver the appropriate balance of privacy and utility. If you do not have the required expertise, you should consider using an off-the-shelf product or service that provides an appropriate level of support.

Mistakes in implementation

With insufficient expertise comes the risk of inappropriate implementation (eg poor key management when using technologies underpinned by encryption). This can mean the PETs do not actually protect people in the way you intended, leaving risks to their rights and freedoms unaddressed. You should also monitor attacks and vulnerabilities regularly, so that you can put appropriate mitigation measures in place.

Lack of appropriate organisational measures

A lack of appropriate organisational measures can lower, or even completely undermine, the effectiveness of a PET. Depending on the threat model, some PETs assume the involvement of a trusted third party (ie an organisation trusted not to act in a malicious or negligent manner). In this case, assurances derive mainly from organisational controls, including legal obligations (such as contractual controls) and monitoring and auditing processes.

What are the different types of PETs?

This guidance introduces some PETs that you can use to help you comply with your data protection by design obligations. They help you minimise the personal information you collect and integrate safeguards into the processing. Many aspects of PETs are also relevant for the public. However, this guidance focuses on PETs that organisations can use.

PETs that provide input privacy can significantly reduce the number of parties with access to the personal information you are processing. Input privacy means that the party carrying out the processing cannot:

  • access the personal information you are processing;
  • access intermediate values or statistical results during processing (unless the value has been specifically selected for sharing); or
  • derive inputs by using techniques such as side-channel attacks that use observable changes during processing (eg query timings or power usage) to obtain the input.

These types of PETs can also help you comply with the security, purpose limitation, storage limitation and data minimisation principles of the UK GDPR.

PETs that provide output privacy reduce the risk that personal information can be obtained or inferred from the result of a processing activity, regardless of whether the computation itself provides input privacy. Using a PET that provides output privacy is useful if you plan to:

  • make anonymous statistics publicly available; or
  • share the results of an analysis with a large group of recipients.

These types of PETs also help you comply with the storage limitation and data minimisation principles of the UK GDPR.

The summary below gives further information on how the PETs covered in this guidance provide input or output privacy. Where this is compatible with your purposes, you should consider combining PETs so that the processing satisfies both input and output privacy (a sketch of one such combination follows the summary).

Homomorphic encryption (HE)

  • Input privacy? In some cases. If processing depends on the encrypted input of two or more parties, HE is not guaranteed to protect these inputs from the owner of the secret key.
  • Output privacy? No. The output may contain personal information. It can be combined with output privacy approaches such as differential privacy.

Secure multi-party computation (SMPC)

  • Input privacy? Yes.
  • Output privacy? No. The output may contain personal information. It can be combined with output privacy approaches such as differential privacy.

Zero-knowledge proofs (ZKPs)

  • Input privacy? Yes.
  • Output privacy? No. It may be possible to learn something about a person, depending on the nature of the query.

Federated learning (FL)

  • Input privacy? No. FL can be combined with other input privacy techniques, such as SMPC and HE.
  • Output privacy? No. FL does not prevent personal information from being shared through the output. It can be combined with output privacy approaches such as differential privacy.

Synthetic data (SD)

  • Input privacy? No. Using synthetic data does not address security risks when managing the input data processed as part of the synthetic data generation process.
  • Output privacy? No. Synthetic data does not inherently provide output privacy, but it can be combined with output privacy approaches such as differential privacy.

Trusted execution environments (TEEs)

  • Input privacy? Yes.
  • Output privacy? No. TEEs can also deploy output privacy techniques, provided the code executed within them includes specific computations that provide those features.

Differential privacy (DP)

  • Input privacy? No. DP does not address security risks when processing input personal information that is highly identifiable (ie before noise is applied to make it less identifiable). Global DP can be combined with input privacy PETs, such as SMPC, to protect the input personal information between the input sources and the party adding the noise.
  • Output privacy? Yes.
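
To illustrate how PETs can be combined to achieve both properties, here is a minimal, hypothetical Python sketch: additive secret sharing (a building block of SMPC) provides input privacy for computing a total, and Laplace noise (differential privacy) provides output privacy for the figure that is released. The three-party setup, counts and privacy budget are illustrative assumptions, not a deployable protocol:

```python
import secrets
import numpy as np

rng = np.random.default_rng(seed=2)
PRIME = 2_147_483_647        # field modulus for the additive shares

def share(value: int, n_parties: int = 3) -> list:
    """Split a value into additive shares modulo PRIME. Fewer than all
    n_parties shares reveal nothing about the value."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Input privacy: three hypothetical hospitals share their patient counts,
# so no single aggregation server ever sees an individual hospital's figure.
counts = [120, 85, 210]
all_shares = [share(c) for c in counts]

# Each server sums the share it received from every hospital ...
server_sums = [sum(column) % PRIME for column in zip(*all_shares)]
# ... and combining the server totals reconstructs only the overall sum.
exact_total = sum(server_sums) % PRIME      # 415

# Output privacy: add Laplace noise (differential privacy) before release,
# since one person joining or leaving changes the total by at most 1.
epsilon = 1.0
released_total = exact_total + rng.laplace(scale=1.0 / epsilon)
print(f"exact={exact_total}, released={released_total:.1f}")
```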

Several categories of PETs can help achieve data protection compliance, including ‘data protection by design and default’. These include PETs that:

  • reduce the identifiability of the people whose information you are processing. These can help you to fulfil the principle of data minimisation;
  • focus on hiding and shielding information. These can help you achieve the requirements of the security principle; and
  • split datasets. These can help you to fulfil both the data minimisation and security principles, depending on the nature of the processing.

PETs that derive or generate information that reduces or removes people’s identifiability

These aim to weaken or break the connection between someone in the original personal information and the derived information. Examples include:

  • differential privacy; and
  • synthetic data.

These PETs can effectively reduce risk to people. However, the resulting information may be less useful than the original. Noise randomly alters information in a dataset so that values such as people’s direct or indirect identifiers are harder to reveal; as a result, the randomised answers to queries are less close to the real ones (ie those without noise applied). The results may not be suitable if you need the actual personal information about people, or datasets with higher utility (ie which contain more useful information that you can extract).
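
This trade-off can be seen in a small, hypothetical Python sketch of the Laplace mechanism (a common way of implementing differential privacy): the same counting query is answered at several privacy budgets (ε), and smaller budgets give stronger privacy but noisier answers. The dataset and ε values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical dataset: ages of 1,000 people.
ages = rng.integers(18, 90, size=1_000)
true_count = int(np.sum(ages >= 65))

# Laplace mechanism: adding or removing one person changes a count by at
# most 1 (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
# epsilon-differential privacy for the released answer.
def dp_count(count: int, epsilon: float) -> float:
    return count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

for epsilon in (0.01, 0.1, 1.0):
    answers = [round(dp_count(true_count, epsilon), 1) for _ in range(5)]
    print(f"epsilon={epsilon}: true={true_count}, noisy={answers}")
# Smaller epsilon -> stronger privacy, but answers stray further from the
# true count, reducing the utility of the released statistics.
```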

Example

A hospital needs to ensure that patients with underlying health conditions receive appropriate treatment. To achieve this purpose, it must process their health information.

This means the hospital cannot use a PET that reduces the identifiability of the patients (eg synthetic data), as it cannot then make the right decisions about their treatment.

Separately, the hospital also shares information with researchers studying regional trends of COVID-19 cases. In this case, the hospital generates synthetic data for the researchers, possibly in combination with differential privacy to achieve effective anonymisation.

PETs that focus on hiding, or shielding, data

These aim to protect people’s privacy while not affecting the utility and accuracy of the information. For example:

  • homomorphic encryption - this allows computation to be performed on encrypted data without revealing the plaintext (see the sketch after this list);
  • zero-knowledge proofs - these allow one party to prove to another party that something is true, without revealing what that something is or indeed anything else (such as the underlying data); and
  • trusted execution environments (TEEs) - these protect the information from external operating systems and applications.
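
To make the first of these concrete, below is a deliberately insecure, toy Python sketch of an additively homomorphic scheme (textbook Paillier). It uses tiny primes and no hardening, so it only demonstrates the principle that ciphertexts can be combined without decrypting the inputs; a real system should use a vetted cryptographic library:

```python
import math
import secrets

# Toy Paillier cryptosystem. INSECURE demo parameters: real deployments
# use vetted libraries and moduli of 2048 bits or more.
p, q = 65003, 65011
n, n_sq = p * q, (p * q) ** 2
g = n + 1                               # standard choice of generator
lam = math.lcm(p - 1, q - 1)            # Python 3.9+

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:          # r must be invertible mod n
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds their plaintexts,
# so a processor can total encrypted values it cannot read.
c1, c2 = encrypt(12), encrypt(30)
assert decrypt((c1 * c2) % n_sq) == 12 + 30
```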

PETs that split datasets

These PETs aim to minimise the amount of personal information shared and to ensure confidentiality and integrity, while not affecting the utility and accuracy of the information.

This group of technologies defines how you collect, distribute, store, query and secure personal information, and how the components of the system communicate with each other. They may split information for computation or storage, or provide dedicated hardware that prevents the operating system or other applications from accessing the personal information. This reduces the risk of information from different datasets being linked.

Examples include:

  • secure multi-party computation (SMPC), including private-set intersection (PSI); and
  • federated learning (see the sketch after this list).
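
Below is a simplified, hypothetical Python sketch of the federated learning pattern: each party fits a model on data that never leaves it, and only the fitted parameters are shared and averaged by a coordinator. The linear model, simulated data and single aggregation round are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical: three parties each hold local (x, y) data following the
# same linear relationship y ~ w*x + b. Raw records never leave a party.
def local_fit(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares fit on local data; returns only the parameters [w, b]."""
    A = np.column_stack([x, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params

parties = []
for _ in range(3):
    x = rng.uniform(0, 10, size=100)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)
    parties.append((x, y))

# One round of federated averaging: the coordinator sees parameters only.
local_params = [local_fit(x, y) for x, y in parties]
global_params = np.mean(local_params, axis=0)
print("aggregated [w, b]:", np.round(global_params, 2))  # ~[2.0, 1.0]
# Note: shared parameters can still leak information about training data,
# which is why FL is often combined with SMPC, HE or differential privacy.
```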

Are PETs anonymisation techniques?

PETs and anonymisation are separate but related concepts. Not all PETs result in effective anonymisation, and you could achieve anonymisation without using them.

At the same time, PETs can play a role in anonymisation, depending on the circumstances. For example, you could configure differential privacy methods to prevent information about specific people being revealed or inferences about them being made.

However, the purpose of many PETs is to enhance privacy and protect the personal information you process, rather than to anonymise it. This means that:

  • many PET use-cases still involve personal information; and
  • when you deploy such techniques, you must still meet your data protection obligations.

Further reading

See the sections of our draft anonymisation guidance on identifiability and pseudonymisation for more information.

When should we consider using PETs?

Whether a specific PET, or combination of PETs, is appropriate for your processing depends on your particular circumstances. You should consider implementing PETs at the design phase of your project, particularly for data-intensive projects that involve potentially risky uses of personal information. You must consider how you will comply with each of the data protection principles if you choose to use a PET.

If you are doing a data protection impact assessment (DPIA), and you have identified risks to people, then you should consider at this point whether PETs can mitigate those risks.

Which types of processing can benefit from using PETs?

PETs can help you reduce the risks to rights and freedoms that your processing may pose. For example, they can be suitable technical and organisational measures for types of processing likely to result in a high risk to people, in particular processing that involves large-scale collection and analysis of personal information (eg artificial intelligence applications, Internet of Things (IoT) applications and cloud computing services).

Below is a non-exhaustive list of processing activities that may pose risks to people’s rights and freedoms, and of how PETs can aid your compliance by mitigating those risks. If the processing is likely to result in a high risk to people, you must complete a DPIA. However, if your DPIA identifies a high risk but you can apply PETs to reduce the residual risk so that it is no longer high, you do not need to consult us.

Processing involving artificial intelligence (AI), machine learning and deep learning applications

Possible risks to people: model inversion, membership inference and attribute inference attacks against people in the training dataset. These can reveal people’s identities, or may result in learning sensitive information about them.

PETs which may aid compliance:

  • homomorphic encryption ensures that only parties with the decryption key can access the information, protecting the information being processed (eg to train the AI model);
  • SMPC can protect information sent to the global model;
  • differential privacy adds random noise during training so that the final model does not memorise information unique to a particular person;
  • federated learning can minimise the amount of centrally held personal information and reduce the transfer of personal information between parties; and
  • synthetic data can be used at the training stage to reduce the amount of personal information used to train the AI.

Processing activities involving AI may require you to complete a DPIA. For more information, see our DPIA guidance.

Processing involving data matching, ie combining, comparing or matching personal information obtained from multiple sources (eg sharing financial transactions to prevent fraud and money laundering)

Possible risks to people: collecting more information than is required for the purposes, and security threats during the transfer of personal information.

PETs which may aid compliance: SMPC and PSI can minimise the information shared and protect it during computation.

Processing activities involving matching information or combining datasets from different sources mean that you must complete a DPIA. For more information, see our DPIA guidance.

Processing involving IoT applications, eg smart technologies (including wearables)

Possible risks to people: collecting more information than is required for the purposes; security threats due to data breaches; and identifying people or learning about their activities through the collection of sensitive attributes.

PETs which may aid compliance: federated learning can be used to train machine learning models on a large number of decentralised IoT devices (eg wearable devices, autonomous vehicles). Depending on the circumstances of the processing, you can also use other PETs, such as SMPC, HE and DP, when you process personal information collected from IoT devices.

Processing activities involving IoT may require you to complete a DPIA (eg large-scale processing of health information from wearables). For more information, see our DPIA guidance.

Processing involving data sharing between organisations, particularly data sharing likely to result in a high risk to people

Possible risks to people: sharing more information than the receiving party needs for their purposes, and security threats (eg data breaches).

PETs which may aid compliance: SMPC, PSI and FL (when used with other PETs) can minimise the information transferred between parties. HE can enhance security by preventing parties from accessing the input information, without affecting utility.

You should carry out a DPIA for data sharing operations, and you must do so if the sharing is likely to result in a high risk to people. See our data sharing code for further guidance.

Processing involving cloud computing applications

Possible risks to people: increased risk of security threats from attackers due to performing computations in untrusted environments.

PETs which may aid compliance: HE, TEEs and SMPC can be used for cloud computing processing tasks to provide enhanced security.

Processing involving anonymisation of personal information

Possible risks to people: re-identification of people in information that has not been effectively anonymised.

PETs which may aid compliance: DP can prevent people from being identified in published information, or limit the amount of personal information released from queries. You must ensure that the risk of re-identification is sufficiently remote. Read our draft guidance on anonymisation for further information.

How should we decide whether or not to use PETs?

If you are considering using PETs to address privacy risks, you should do a DPIA to understand how your use of the PET will impact your data processing. Your assessment must consider:

  • the nature, scope, context and purposes of your processing;
  • the risks your processing poses to people’s rights and freedoms;
  • whether you are using a PET to address a recognised data protection risk, and how it does so; and
  • the state-of-the-art and costs of implementation of any PETs.

The nature of the processing is what you plan to do with the personal information.

The scope of the processing is what the processing covers.

The context of the processing is the wider picture, including internal and external factors that might affect expectations or impact of the processing.

The purpose of the processing is the reason why you want to process the personal information.

You must consider the state-of-the-art to understand whether a PET is sufficiently mature for your purposes, and keep yourself informed about the PETs available as the market changes. You are not required to implement the newest technologies available.

You must consider the cost of a technique as a factor in deciding which PET to implement, rather than as a reason for not implementing any privacy-enhancing measure.

Further reading

See our DPIA guidance for more information on nature, scope, context and purpose of the processing.

For further guidance, read the section on data protection by design and security in our draft guidance on pseudonymisation.

How do we determine the maturity of a PET?

There are different ways to determine a PET’s maturity. Technology readiness levels (TRLs) are a common approach. These place PETs on a discrete scale of maturity, ranging from conceptual work to market-ready products. TRLs are based on actual usage, integration and tests with existing systems and use cases.

Some models (eg ENISA’s PETs maturity assessment) combine TRLs with various quality measures including:

  • scalability;
  • quantified assumptions about how trustworthy the entities involved in the processing are;
  • security measures in place; and
  • versatility for different purposes.

These are used to generate a rating based on market maturity and the PET’s quality.

Other approaches to assessing PET suitability focus more on:

  • the protections the PET provides;
  • the risks of personal information leakage for a given threat model used; and
  • scalability and complexity issues.

Some PETs may be theoretical, immature or unscalable. These can be challenging to implement. Just because something exists at the cutting edge does not mean you have to implement it to comply with data protection law – particularly if it is not yet practical to do so.

Some PETs are newer or more theoretical than others, and standardisation can therefore be at an early stage. Where standards do exist, you should take them into account when designing and implementing data protection measures. You must ensure that appropriate technical and organisational measures are in place to mitigate the risks of a given threat model, as defined by relevant standards (eg ISO and IETF standards).

Standards can provide further detail and guidance about:

  • specific attacks and how these can be mitigated;
  • technical and organisational measures required for a given threat model (eg contractual controls and security measures such as access control); and
  • technical and organisational measures required to ensure the security properties are maintained (eg management of cryptographic keys, and tuning of security parameters).

We have produced a table on the availability of industry standards for PETs.

Further reading – ICO guidance

Read our guidance on data protection by design and by default.


Further reading

For more information on methodologies for assessing the maturity of PETs, see guidance from the European Union Agency for Cybersecurity (ENISA), including its PETs maturity assessment methodology.

For more information on PETs:

  • The Royal Society’s 2019 report Protecting privacy in practice (external link, PDF) provides information about the current use, development and limits of PETs.
  • The Royal Society’s 2023 report From privacy to partnership (external link, PDF) provides further information about PET use cases and relevant standards.
  • The CDEI’s PETs adoption guide provides a question-based flowchart to aid decision-makers in thinking through which PETs may be useful in their projects.