
At a glance

This section explains how AI systems can exacerbate known security risks and make them more difficult to manage. It also sets out the challenges AI poses for compliance with the data minimisation principle, and describes a number of techniques that support both data minimisation and effective AI development and deployment.

Who is this section for?

This section is aimed at technical specialists, who are best placed to assess the security of an AI system and what personal data is required. It will also be useful for those in compliance-focused roles to understand the risks associated with security and data minimisation in AI. 

In detail           

What security risks does AI introduce?

You must process personal data in a manner that ensures appropriate levels of security against its unauthorised or unlawful processing, accidental loss, destruction or damage. In this section we focus on the way AI can adversely affect security by making known risks worse and more challenging to control.

What are our security requirements?

There is no ‘one-size-fits-all’ approach to security. The appropriate security measures you should adopt depend on the level and type of risks that arise from specific processing activities.

Using AI to process any personal data has important implications for your security risk profile, and you need to assess and manage these carefully.

Some implications may be triggered by the introduction of new types of risks, eg adversarial attacks on machine learning models (see section ’What types of privacy attacks apply to AI models?’).

Further reading outside this guidance

Read our guidance on security in the Guide to the UK GDPR, and the ICO/NCSC Security Outcomes, for general information about security under data protection law.

Information security is a key component of our AI auditing framework but is also central to our work as the information rights regulator. The ICO is planning to expand its general security guidance to take into account the additional requirements set out in the new UK GDPR.

While this guidance will not be AI-specific, it will cover a range of topics that are relevant for organisations using AI, including software supply chain security and increasing use of open-source software.

What is different about security in AI compared to ‘traditional’ technologies?

Some of the unique characteristics of AI mean compliance with data protection law’s security requirements can be more challenging than with other, more established technologies, both from a technological and human perspective.

From a technological perspective, AI systems introduce new kinds of complexity not found in the more traditional IT systems that you may be used to using. Depending on the circumstances, your use of AI systems is also likely to rely heavily on third-party code, relationships with suppliers, or both. Also, your existing systems need to be integrated with several other new and existing IT components, which are themselves intricately connected. Since AI systems operate as part of a larger chain of software components, data flows, organisational workflows and business processes, you should take a holistic approach to security. This complexity may make it more difficult to identify and manage some security risks, and may increase others, such as the risk of outages.

From a human perspective, the people involved in building and deploying AI systems are likely to have a wider range of backgrounds than usual, including traditional software engineering, systems administration, data science and statistics, as well as domain expertise.

Security practices and expectations may vary significantly, and for some there may be less understanding of broader security compliance requirements, as well as those of data protection law more specifically. Security of personal data may not always have been a key priority, especially if someone was previously building AI applications with non-personal data or in a research capacity.

Further complications arise because common practices about how to process personal data securely in data science and AI engineering are still under development. As part of your compliance with the security principle, you should ensure that you actively monitor and take into account the state-of-the-art security practices when using personal data in an AI context.

It is not possible to list all known security risks that might be exacerbated when you use AI to process personal data. The impact of AI on security depends on:

  • the way the technology is built and deployed;
  • the complexity of the organisation deploying it;
  • the strength and maturity of the existing risk management capabilities; and
  • the nature, scope, context and purposes of the processing of personal data by the AI system, and the risks posed to individuals as a result.

The following hypothetical scenarios are intended to raise awareness of some of the known security risks and challenges that AI can exacerbate. They contain some technical detail, so understanding how they may apply to your organisation may require the attention of staff in both compliance and technical roles.

Our key message is that you should review your risk management practices to ensure personal data is secure in an AI context.

How should we ensure training data is secure?

ML systems require large sets of training and testing data to be copied and imported from their original context of processing, shared and stored in a variety of formats and places, including with third parties. This can make them more difficult to keep track of and manage.

Your technical teams should record and document all movements and storing of personal data from one location to another. This will help you apply appropriate security risk controls and monitor their effectiveness. Clear audit trails are also necessary to satisfy accountability and documentation requirements.

In addition, you should delete any intermediate files containing personal data as soon as they are no longer required, eg compressed versions of files created to transfer data between systems.

Depending on the likelihood and severity of the risk to individuals, you may also need to apply de-identification techniques to training data before it is extracted from its source and shared internally or externally.

For example, you may need to remove certain features from the data, or apply privacy enhancing technologies (PETs), before sharing it with another organisation.
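
As an illustration only, the sketch below shows one way of removing direct identifiers and pseudonymising a training data extract before it is shared. It assumes the data is held in a pandas DataFrame; the column names, the choice of fields to drop and the handling of the secret salt are all hypothetical and would need to reflect your own data and risk assessment.

```
import hashlib

import pandas as pd

# Hypothetical training data extract (column names are illustrative only).
df = pd.DataFrame({
    "customer_id": ["C001", "C002"],
    "name": ["Alice Smith", "Bob Jones"],
    "postcode": ["AB1 2CD", "EF3 4GH"],
    "income": [32000, 45000],
    "defaulted": [0, 1],
})

# Drop direct identifiers that are not needed to train the model.
shared = df.drop(columns=["name", "postcode"])

# Replace the remaining identifier with a keyed pseudonym, so records can
# still be linked internally but are not directly identifying when shared.
SECRET_SALT = "replace-with-a-securely-stored-secret"
shared["customer_id"] = shared["customer_id"].map(
    lambda cid: hashlib.sha256((SECRET_SALT + cid).encode()).hexdigest()[:16]
)

print(shared.head())
```

Note that pseudonymised data of this kind is still personal data, so data protection law continues to apply to it.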

How should we ensure security of externally maintained software used to build AI systems?

Very few organisations build AI systems entirely in-house. In most cases, the design, building, and running of AI systems will be provided, at least in part, by third parties that you may not always have a contractual relationship with.

Even if you hire your own ML engineers, you may still rely significantly on third-party frameworks and code libraries. Many of the most popular ML development frameworks are open source.

Using third-party and open source code is a valid option. Developing all software components of an AI system from scratch requires a large investment of time and resources that many organisations cannot afford, and, unlike using established open source tools, it would not benefit from the rich ecosystem of contributors and services built up around existing frameworks.

However, one important drawback is that these standard ML frameworks often depend on other pieces of software being already installed on an IT system. To give a sense of the risks involved, a recent study found the most popular ML development frameworks include up to 887,000 lines of code and rely on 137 external dependencies. Therefore, implementing AI will require changes to an organisation’s software stack (and possibly hardware) that may introduce additional security risks.

Example

The recruiter hires an ML engineer to build the automated CV filtering system using a Python-based ML framework. The ML framework depends on a number of specialist open-source programming libraries, which need to be downloaded onto the recruiter’s IT system.

One of these libraries contains a software function to convert the raw training data into the format required to train the ML model. It is later discovered that the function has a security vulnerability: due to an unsafe default configuration, an attacker was able to introduce and execute malicious code remotely on the system by disguising it as training data.

This is not a far-fetched example: in January 2019, such a vulnerability was discovered in ‘NumPy’, a popular Python library used by many machine learning developers.

What should we do in this situation?

Whether AI systems are built in-house, externally, or a combination of both, you will need to assess them for security risks. As well as ensuring the security of any code developed in-house, you need to assess the security of any externally maintained code and frameworks.

In many respects, the standard requirements for maintaining code and managing security risks will apply to AI applications. For example:

  • your external code security measures should include subscribing to security advisories to be notified of vulnerabilities; or
  • your internal code security measures should include adhering to coding standards and instituting source code review processes.

Whatever your approach, you should ensure that your staff have appropriate skills and knowledge to address these security risks.

Having a secure pipeline from development to deployment will further mitigate security risks associated with third party code by separating the ML development environment from the rest of your IT infrastructure where possible. Using ‘virtual machines’ or ‘containers’ (emulations of a computer system that run inside, but are isolated from, the rest of the IT system) may help here; these can be pre-configured specifically for ML tasks. In addition, it is possible to train an ML model using a programming language and framework suitable for exploratory development, but then convert the model into another more secure format for deployment.
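
As a hypothetical sketch of that last point, a model trained during exploratory development could be exported as plain data and rebuilt in the deployment environment, rather than shipped as a pickled Python object (which can execute arbitrary code when loaded). The model type, file name and rebuilt scoring function below are assumptions for illustration, not a recommended deployment format.

```
import json

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Development environment: train a model on (illustrative) data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Export only the learned parameters as plain data, not a pickled object.
with open("model_params.json", "w") as f:
    json.dump({"coef": model.coef_.tolist(),
               "intercept": model.intercept_.tolist()}, f)

# Deployment environment: reload the parameters and score new inputs
# without unpickling any executable object.
with open("model_params.json") as f:
    params = json.load(f)

def predict_proba(x):
    """Logistic regression score rebuilt from the exported parameters."""
    z = np.dot(x, np.array(params["coef"]).T) + np.array(params["intercept"])
    return 1 / (1 + np.exp(-z))

print(predict_proba(X[:3]))
```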

Further reading outside this guidance

Read our report on Protecting personal data in online services: learning from the mistakes of others (PDF) for more information. Although written in 2014, the report’s content in this area may still assist you.

The ICO is developing further security guidance, which will include additional recommendations for the oversight and review of externally maintained source code from a data protection perspective, as well as its implications for security and data protection by design.

National Cyber Security Centre (NCSC) guidance on maintaining code repositories.

What types of privacy attacks apply to AI models?

The personal data of the people whose data an AI system was trained on might be inadvertently revealed by the outputs of the system itself.

It is normally assumed that the personal data of the individuals whose data was used to train an AI system cannot be inferred by simply observing the predictions the system returns in response to new inputs. However, new types of privacy attacks on ML models suggest that this is sometimes possible.

In this section, we focus on two kinds of these privacy attacks – ‘model inversion’ and ‘membership inference’.

What are model inversion attacks?

In a model inversion attack, if attackers already have access to some personal data belonging to specific individuals included in the training data, they can infer further personal information about those same individuals by observing the inputs and outputs of the ML model. The information attackers can learn in this way goes beyond generic inferences about individuals with similar characteristics.

Example one – model inversion attack

An early demonstration of this kind of attack concerned a medical model designed to predict the correct dosage of an anticoagulant, using patient data including genetic biomarkers. The demonstration showed that an attacker with access to some demographic information about the individuals included in the training data could infer their genetic biomarkers from the model, despite not having access to the underlying training data.

Further reading outside this guidance

For further details of a model inversion attack, see ‘Algorithms that remember: model inversion attacks and data protection law’.

 

Example two – model inversion attack

Another recent example demonstrates that attackers could reconstruct images of faces that a Facial Recognition Technology (FRT) system has been trained to recognise. FRT systems are often designed to allow third parties to query the model. When the model is given the image of a person whose face it recognises, the model returns its best guess as to the name of the person, and the associated confidence rate.

Attackers could probe the model by submitting many different, randomly generated face images. By observing the names and the confidence scores returned by the model, they could reconstruct the face images associated with the individuals included in the training data. While the reconstructed face images were imperfect, researchers found that they could be matched (by human reviewers) to the individuals in the training data with 95% accuracy (see Figure 2).

Figure 2. A face image recovered using a model inversion attack (left) and the corresponding training set image (right), from Fredrikson et al., ‘Model Inversion Attacks that Exploit Confidence Information’.

What are membership inference attacks?

Membership inference attacks allow malicious actors to deduce whether a given individual was present in the training data of an ML model. However, unlike in model inversion, they don’t necessarily learn any additional personal data about the individual.

For example, if hospital records are used to train a model which predicts when a patient will be discharged, attackers could use that model in combination with other data about a particular individual (that they already have) to work out if they were part of the training data. This would not reveal any individual’s data from the training data set itself, but in practice it would reveal that they had visited one of the hospitals that generated the training data during the period the data was collected.

Similar to the earlier FRT example, membership inference attacks can exploit confidence scores provided alongside a model’s prediction. If an individual was in the training data, then the model will be disproportionately confident in a prediction about that person because it has seen them before. This allows the attacker to infer that the person was in the training data.
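
To illustrate the intuition only (this is not a real attack tool), the sketch below trains a deliberately overfitted model on synthetic data and compares its confidence on records it was trained on with its confidence on unseen records; a naive attacker could exploit the gap with a simple threshold. The model, threshold and data are all illustrative assumptions.

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a training set of personal data.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# A deliberately overfitted model (deep, unpruned trees).
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Confidence of the model's top prediction for members vs non-members.
conf_members = model.predict_proba(X_train).max(axis=1)
conf_non_members = model.predict_proba(X_out).max(axis=1)
print("mean confidence on training records:", conf_members.mean())
print("mean confidence on unseen records:  ", conf_non_members.mean())

# A naive membership inference: guess 'member' when confidence is very high.
threshold = 0.9
print("flagged as members (training records):", (conf_members > threshold).mean())
print("flagged as members (unseen records):  ", (conf_non_members > threshold).mean())
```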

The gravity of the consequences of models’ vulnerability to membership inference will depend on how sensitive or revealing membership might be. If a model is trained on a large number of people drawn from the general population, then membership inference attacks pose less risk. But if the model is trained on a vulnerable or sensitive population (eg patients with dementia, or HIV), then merely revealing that someone is part of that population may be a serious privacy risk.

What are black box and white box attacks?

There is an important distinction between ‘black box’ and ‘white box’ attacks on models. The distinction reflects the different ways in which a model can be made available to, and accessed by, third parties.

In white box attacks, the attacker has complete access to the model itself, and can inspect its underlying code and properties (although not the training data). For example, some AI providers give third parties an entire pre-trained model and allow them to run it locally. White box attacks enable additional information to be gathered, such as the type of model and parameters used, which could help an attacker infer personal data from the model.

In black box attacks, the attacker only has the ability to query the model and observe the relationships between inputs and outputs. For example, many AI providers enable third parties to access the functionality of an ML model online to send queries containing input data and receive the model’s response. The examples we have highlighted above are both black box attacks.

White and black box attacks can be performed by providers’ customers, or by anyone else with authorised or unauthorised access to the model itself or to its query and response functionality.

What about models that include training data by design?

Model inversion and membership inference attacks show that AI models can inadvertently contain personal data. You should also note that there are certain kinds of ML model which contain parts of the training data in raw form within them by design. For example, ‘support vector machines’ (SVMs) and ‘k-nearest neighbours’ (KNN) models contain some of the training data in the model itself.

In these cases, if the training data is personal data, access to the model by itself means that the organisation purchasing the model will already have access to a subset of the personal data contained in the training data, without having to exert any further effort. Providers of such ML models, and any third parties procuring them, should be aware that they may contain personal data in this way.

Unlike model inversion and membership inference, personal data contained in models like this is not an attack vector. Any personal data contained in these models would be there by design and easily retrievable by the third party. Storing and using these models therefore constitutes processing of personal data and as such, the standard data protection provisions apply.

Further reading outside this guidance

See scikit-learn’s module on ‘Support Vector Machines’.

See scikit-learn’s module on ‘K-nearest Neighbours’.

What steps should we take to manage the risks of privacy attacks on AI models?

If you train models and provide them to others, you should assess whether those models may contain personal data or are at risk of revealing it if attacked, and take appropriate steps to mitigate these risks.

You should assess whether individuals are identified or identifiable from the training data, either directly or by those who may have access to the model. You should also assess the means that are reasonably likely to be used, in light of the vulnerabilities described above. As this is a rapidly developing area, you should stay up to date with the state of the art in both methods of attack and mitigation.

Security and ML researchers are still working to understand what factors make ML models more or less vulnerable to these kinds of attacks, and how to design effective protections and mitigation strategies.

One possible cause of ML models being vulnerable to privacy attacks is known as ‘overfitting’. This is where the model pays too much attention to the details of the training data, effectively almost remembering particular examples from the training data rather than just the general patterns. Overfitting can happen where there are too many features included or where there are too few examples in the training data (or both). Model inversion and membership inference attacks can exploit this.

Avoiding overfitting will help, both in mitigating the risk of privacy attacks and also in ensuring that the model is able to make good inferences on new examples it hasn’t seen before. However, avoiding overfitting will not completely eliminate the risks. Even models which are not overfitted to the training data can still be vulnerable to privacy attacks.

In cases where confidence information provided by an ML system can be exploited, as in the FRT example above, the risk could be mitigated by not providing it to the end user. This would need to be balanced against the need for genuine end users to know whether or not to rely on the system’s output, and will depend on the particular use case and context.

If you provide others with access to a model only via an Application Programming Interface (API), rather than providing the whole model, you will not be subject to white box attacks in this way, because the API’s users will not have direct access to the model itself. However, you might still be subject to black box attacks.

To mitigate this risk, you could monitor queries from the API’s users, in order to detect whether the model is being used suspiciously. Suspicious use may indicate a privacy attack and would require prompt investigation, and potentially the suspension or blocking of a particular user account. Such measures may become part of common real-time monitoring techniques used to protect against other security threats, such as ‘rate-limiting’ (restricting the number of queries a particular user can perform in a given time period).
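
A minimal sketch of the kind of rate-limiting check that could be applied to incoming queries is shown below; the time window, per-user limit and in-memory storage are illustrative assumptions rather than a recommended configuration.

```
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60            # illustrative time window
MAX_QUERIES_PER_WINDOW = 100   # illustrative per-user limit

_recent_queries = defaultdict(deque)  # user_id -> timestamps of recent queries

def allow_query(user_id: str) -> bool:
    """Return True if the user is within the rate limit, False otherwise."""
    now = time.time()
    timestamps = _recent_queries[user_id]
    # Discard queries that fall outside the current window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_QUERIES_PER_WINDOW:
        # Over the limit: reject and flag the account for investigation.
        return False
    timestamps.append(now)
    return True

# Usage: check every incoming model query before it reaches the model.
if not allow_query("user-123"):
    print("Query rejected: rate limit exceeded; flag account for review")
```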

If your model is going to be provided in whole to a third party, rather than being merely accessible to them via an API, then you will need to consider the risk of ‘white box’ attacks. As the model provider, you will be less easily able to monitor the model during deployment and thereby assess and mitigate the risk of privacy attacks on it.

However, you remain responsible for assessing and mitigating the risk that personal data used to train your models may be exposed as a result of the way your clients have deployed the model. You may not be able to fully assess this risk without collaborating with your clients to understand the particular deployment contexts and associated threat models.

Your procurement policy should provide for sufficient information sharing between the parties so that each can perform its respective assessments as necessary. In some cases, ML model providers and clients will be joint controllers and will therefore need to perform a joint risk assessment.

In cases where the model actually contains examples from the training data by design (as in SVMs and KNNs), providing the model involves a transfer of personal data, and you should treat it as such.

What about AI security risks raised by explainable AI?

Recent research has demonstrated how some proposed methods to make ML models explainable can unintentionally make it easier to conduct privacy attacks on models. In addition, when providing an explanation to individuals, there may be a risk that doing so reveals proprietary information about how the AI model works. However, you must take care not to conflate commercial interests with data protection requirements (eg commercial security and data protection security), and instead you should consider the extent to which such a trade-off genuinely exists.

Given that the kind of explanations you may need to provide to data subjects about AI need to be ‘in a concise, transparent, intelligible and easily accessible form, using clear and plain language’, they will not normally risk revealing commercially sensitive information. However, there may be cases where you need to balance the right of individuals to receive an explanation against (for example) the interests of businesses in maintaining trade secrets, noting that data protection compliance cannot be ‘traded away’.

Both of these risks are active areas of research, and their likelihood and severity are the subject of debate and investigation. We will continue to monitor and review these risks and may update this guidance accordingly.

Further reading outside this guidance

ICO and The Alan Turing Institute guidance on ‘Explaining decisions made with artificial intelligence’.

What about adversarial examples?

While the main data protection concerns about AI involve accidentally revealing personal data, there are other potential novel AI security risks, such as ‘adversarial examples’.

These are examples fed to an ML model which have been deliberately modified so that they are reliably misclassified. They can be images which have been manipulated, or even real-world modifications such as stickers placed on the surface of an item. Examples include pictures of turtles which are classified as guns, or road signs with stickers on them which a human would instantly recognise as a ‘STOP’ sign, but an image recognition model does not.

While such adversarial examples are concerning from a security perspective, they might not raise data protection concerns if they don’t involve personal data. The security principle refers to security of the personal data – protecting it against unauthorised processing. However, adversarial attacks don’t necessarily involve unauthorised processing of personal data, only a compromise to the system.

However, there may be cases in which adversarial examples can be a risk to the rights and freedoms of individuals. For example, some attacks have been demonstrated on facial recognition systems. By slightly distorting the face image of one individual, an adversary can trick the facial recognition system into misclassifying them as another (even though a human would still recognise the distorted image as the correct individual). This would raise concerns about the system’s statistical accuracy, especially if the system is used to make legal or similarly significant decisions about individuals.

You may also need to consider the risk of adversarial examples as part of your obligations under the Network and Information Systems Regulations 2018 (NIS). The ICO is the competent authority for ‘relevant digital service providers’ under NIS. These include online search engines, online marketplaces and cloud computing services. A ‘NIS incident’ includes incidents which compromise the data stored by network and information systems and the related services they provide. This is likely to include AI cloud computing services. So, even if an adversarial attack does not involve personal data, it may still be a NIS incident and therefore within the ICO’s remit.

Further reading outside this guidance

Read our Guide to NIS.

For further information on adversarial attacks on facial recognition systems, see ‘Efficient decision-based black-box adversarial attacks on face recognition’.

What data minimisation and privacy-preserving techniques are available for AI systems?

What considerations about the data minimisation principle do we need to make?

The data minimisation principle requires you to identify the minimum amount of personal data you need to fulfil your purpose, and to only process that information, and no more. For example, Article 5(1)(c) of the UK GDPR says:

‘1. Personal data shall be:

adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (data minimisation)’

However, AI systems generally require large amounts of data. At first glance it may therefore be difficult to see how AI systems can comply with the data minimisation principle, yet if you are using AI as part of your processing, you are still required to do so.

Whilst it may appear challenging, in practice this may not be the case. The data minimisation principle does not mean either ‘process no personal data’ or ‘if we process more, we’re going to break the law’. The key is that you only process the personal data you need for your purpose.

How you go about determining what is ‘adequate, relevant and limited’ is therefore going to be specific to your circumstances, and our existing guidance on data minimisation details the steps you should take.

In the context of AI systems, what is ‘adequate, relevant and limited’ is therefore also case specific. However, there are a number of techniques that you can adopt in order to develop AI systems that process only the data you need, while still remaining functional.

In this section, we explore some of the most relevant techniques for supervised Machine Learning (ML) systems, which are currently the most common type of AI in use.

Within your organisation, the individuals accountable for the risk management and compliance of AI systems need to be aware that such techniques exist, and be able to discuss and assess different approaches with your technical staff. For example, the default approach of data scientists in designing and building AI systems might involve collecting and using as much data as possible, without thinking about ways they could achieve the same purposes with less data.

You must therefore implement risk management practices designed to ensure that data minimisation, and all relevant minimisation techniques, are fully considered from the design phase. Similarly, if you buy in AI systems or implement systems operated by third parties (or both), these considerations should form part of the procurement process due diligence.

You should also be aware that, while they may help you comply with the principle of data minimisation, the techniques described here do not eliminate other kinds of risk.

Also, while some techniques will not require any compromise to comply with data minimisation requirements, others may require you to balance data minimisation against other compliance or utility objectives, such as making ML models more statistically accurate and non-discriminatory.

The first step you should take towards compliance with data minimisation is to understand and map out all the ML processes in which personal data might be used.

 

Further reading outside this guidance

Read our guidance on the data minimisation principle

How should we process personal data in supervised ML models?

Supervised ML algorithms can be trained to identify patterns and create models from datasets (‘training data’) which include past examples of the type of instances the model will be asked to classify or predict. Specifically, the training data contains both the ‘target’ variable (ie the thing that the model is aiming to predict or classify), and several ‘predictor’ variables (ie the input used to make the prediction).

For example, in the training data for a bank’s credit risk ML model, the predictor variables might include the age, income, occupation, and location of previous customers, while the target variable will be whether or not the customers repaid their loan.

Once trained, ML systems can then classify and make predictions based on new data containing examples that the system has never seen before. A query is sent to the ML model, containing the predictor variables for a new instance (eg a new customer’s age, income, occupation). The model responds with its best guess as to the target variable for this new instance (eg whether or not the new customer will default on a loan).

Supervised ML approaches therefore use data in two main phases:

  1. the training phase, when training data is used to develop models based on past examples; and
  2. the inference phase, when the model is used to make a prediction or classification about new instances.

If the model is used to make predictions or classifications about individual people, then it is very likely that personal data will be used at both the training and inference phases.
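
As an illustration of these two phases, the minimal sketch below trains a model on a handful of invented records and then queries it about a new instance. The feature names, values and choice of model are assumptions for illustration only, not a description of any real credit risk system.

```
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Training phase: past examples with predictor variables and the target.
training_data = pd.DataFrame({
    "age":         [25, 40, 35, 52, 29, 61],
    "income":      [22000, 54000, 31000, 78000, 27000, 45000],
    "loan_amount": [5000, 12000, 8000, 20000, 6000, 10000],
    "repaid":      [0, 1, 1, 1, 0, 1],   # target variable
})
X_train = training_data[["age", "income", "loan_amount"]]
y_train = training_data["repaid"]
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Inference phase: a query containing the predictor variables for a new customer.
new_customer = pd.DataFrame({"age": [33], "income": [36000], "loan_amount": [9000]})
print(model.predict(new_customer))        # best guess at the target variable
print(model.predict_proba(new_customer))  # associated confidence
```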

What techniques should we use to minimise personal data when designing ML applications?

When designing and building ML applications, data scientists will generally assume that all data used in training, testing and operating the system will be aggregated in a centralised way, and held in its full and original form by a single entity in multiple places throughout the AI system’s lifecycle.

However, where this is personal data, you need to consider whether it is necessary to process it for your purpose(s). If you can achieve the same outcome by processing less personal data then, by definition, the data minimisation principle requires you to do so.

A number of techniques exist which can help you to minimise the amount of personal data you need to process.

How should we minimise personal data in the training stage?

As we have explained, the training phase involves applying a learning algorithm to a dataset containing a set of features for each individual which are used to generate the prediction or classification.

However, not all features included in a dataset will necessarily be relevant to your purpose. For example, not all financial and demographic features will be useful to predict credit risk. Therefore, you need to assess which features – and therefore what data – are relevant for your purpose, and only process that data.

There are a variety of standard feature selection methods used by data scientists to select features which will be useful for inclusion in a model. These methods are good practice in data science, but they also go some way towards meeting the data minimisation principle.
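
For illustration, the sketch below applies one such standard method (univariate feature selection, here via scikit-learn) to keep only the features that carry the most information about the target. The synthetic data and the choice to keep five features are assumptions; in practice you would combine this kind of method with domain knowledge and your own assessment of relevance.

```
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in for a wide training set with many candidate features.
X, y = make_classification(n_samples=300, n_features=25, n_informative=5, random_state=0)

# Keep only the k features most informative about the target, and drop the
# rest before any further processing or sharing.
selector = SelectKBest(score_func=mutual_info_classif, k=5).fit(X, y)
X_reduced = selector.transform(X)

print("features before selection:", X.shape[1])
print("features after selection: ", X_reduced.shape[1])
print("indices of retained features:", selector.get_support(indices=True))
```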

Also, as discussed in the ICO’s previous report on AI and Big Data, the fact that some data might later in the process be found to be useful for making predictions is not enough to establish why you need to keep it for this purpose, nor does it retroactively justify its collection, use, or retention. You must not collect personal data on the off-chance that it might be useful in the future, although you may be able to hold information for a foreseeable event that may not occur, but only if you are able to justify it.

How should we balance data minimisation and statistical accuracy?

In general, when an AI system learns from data (as is the case with ML models), the more data it is trained on, the more statistically accurate it will be. That is, the more likely it will capture any underlying, statistically useful relationships between the features in the datasets. As explained in the section on ‘What do we need to do about statistical accuracy?’, the fairness principle means that your AI system needs to be sufficiently statistically accurate for your purposes.

For example, a model for predicting future purchases based on customers’ purchase history would tend to be more statistically accurate the more customers are included in the training data. And any new features added to an existing dataset may be relevant to what the model is trying to predict. For example, purchase histories augmented with additional demographic data might further improve the statistical accuracy of the model.

However, generally speaking, the more data points collected about each person, and the more people whose data is included in the data set, the greater the risks to those individuals, even if the data is collected for a specific purpose. The principle of data minimisation requires you not to use more data than is necessary for your purposes. So if you can achieve sufficient accuracy with fewer data points or fewer individuals being included (or both), you should do so.
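
One practical way of checking whether fewer individuals would still give sufficient statistical accuracy is to examine a learning curve, as in the hedged sketch below; the synthetic data, model and training sizes are illustrative assumptions.

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a training set of personal data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Measure validation accuracy as the number of training examples grows.
sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{int(n):5d} training examples -> mean validation accuracy {score:.3f}")

# If accuracy plateaus well before the full dataset is used, the additional
# records may not be necessary for the purpose.
```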

Further reading outside this guidance

Read our report on Big data, artificial intelligence, machine learning and data protection

What privacy-enhancing methods should we consider?

There are also a range of techniques for enhancing privacy which you can use to minimise the personal data being processed at the training phase, including:

  • perturbation or adding ‘noise’;
  • synthetic data; and
  • federated learning.

Some of these techniques involve modifying the training data to reduce the extent to which it can be traced back to specific individuals, while retaining its use for the purposes of training well-performing models.

You can apply these types of privacy-enhancing techniques to the training data after you have already collected it. Where possible, however, you should apply them before collecting any personal data, as a part of mitigating the risks to individuals that large datasets can pose.

You can mathematically measure the effectiveness of these privacy-enhancing techniques in balancing the privacy of individuals and the utility of a ML system, using methods such as differential privacy.

Differential privacy is a way to measure whether a model created by an ML algorithm significantly depends on the data of any particular individual used to train it. While mathematically rigorous in theory, meaningfully implementing differential privacy in practice is still challenging.

You should monitor developments in these methods and assess whether they can provide meaningful data minimisation before attempting to implement them. They may not be appropriate or sufficiently mature to deploy in your particular context.

  • Perturbation

Modification could involve changing the values of data points belonging to individuals at random (known as ‘perturbing’ or adding ‘noise’ to the data) in a way that preserves some of the statistical properties of those features.

Generally speaking, you can choose how much noise to inject, with obvious consequences for how much you can still learn from the ‘noisy data’.

For example, smartphone predictive text systems are based on the words that users have previously typed. Rather than always collecting a user’s actual keystrokes, the system could be designed to create ‘noisy’ (ie false) words at random. This makes it substantially less certain which words were ‘noise’ and which words were actually typed by a specific user.

Although data would be less accurate at individual level, provided the system has enough users, you could still observe patterns, and use these to train your ML model at an aggregate level. The more noise you inject, the less you can learn from the data, but in some cases you may be able to inject sufficient noise to render the data pseudonymous in a way which provides a meaningful level of protection.
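
As a purely illustrative sketch of perturbation, the code below adds Laplace noise (the mechanism commonly associated with differential privacy) to a set of attribute values. The values, the assumed sensitivity and the privacy parameter are hypothetical; choosing them properly requires careful analysis of your own data and purpose.

```
import numpy as np

rng = np.random.default_rng(0)

# Illustrative attribute values for a group of individuals (eg ages).
true_values = np.array([34, 29, 41, 52, 38, 45, 27, 60], dtype=float)

# Laplace mechanism: noise scale = sensitivity / epsilon. A smaller epsilon
# means more noise, and less that can be learned about any one individual.
sensitivity = 1.0   # assumed maximum influence of one individual
epsilon = 0.5       # illustrative privacy parameter
noisy_values = true_values + rng.laplace(
    loc=0.0, scale=sensitivity / epsilon, size=true_values.shape
)

# Individual records become unreliable, but aggregate patterns survive.
print("true mean: ", true_values.mean())
print("noisy mean:", noisy_values.mean())
```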

  • Synthetic data

In some cases, you may be able to develop models using ‘synthetic’ data. This is data which does not relate to real people, but has been generated artificially. To the extent that synthetic data cannot be related to identified or identifiable living individuals, it is not personal data and therefore data protection obligations do not apply when you process it.

However, you will generally need to process some real data in order to determine realistic parameters for the synthetic data. Where that real data can be related to identified or identifiable individuals, then the processing of such data must comply with data protection laws.

Furthermore, in some cases, it may be possible to infer information about the real data which was used to estimate those realistic parameters, by analysing the synthetic data. For example, if the real data contains a single individual who is unusually tall, rich, and old, and your synthetic data contains a similar individual (in order to make the overall dataset statistically realistic), it may be possible to infer that the individual was in the real dataset by analysing the synthetic dataset. Avoiding such re-identification may require you to change your synthetic data to the extent that it would be too unrealistic to be useful for machine learning purposes.
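
The hedged sketch below shows the general idea: a small amount of real data (which remains personal data) is used to estimate distribution parameters, and artificial records are then sampled from that distribution. The attributes, values and choice of a simple multivariate normal model are assumptions for illustration; real synthetic data generation is usually more sophisticated.

```
import numpy as np

rng = np.random.default_rng(0)

# Real records (still personal data): two numeric attributes per person.
real_data = np.array([
    [34, 31000], [29, 24000], [41, 52000], [52, 61000],
    [38, 40000], [45, 47000], [27, 22000], [60, 58000],
], dtype=float)

# Estimate realistic parameters from the real data...
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# ...then generate artificial records that follow the same joint distribution.
synthetic_data = rng.multivariate_normal(mean, cov, size=1000)
print(synthetic_data[:3])

# Note: outliers in the real data can still leak through; the synthetic set
# should be checked for records that closely resemble real individuals.
```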

  • Federated learning

A related privacy-preserving technique is federated learning. This allows multiple different parties to train models on their own data (‘local’ models). They then combine some of the patterns that those models have identified (known as ‘gradients’) into a single, more accurate ‘global’ model, without having to share any training data with each other.

Federated learning is relatively new but has several large-scale applications. These include auto-correction and predictive text models on smartphones, as well as medical research involving analysis across multiple patient databases.

While sharing the gradient derived from a locally trained model presents a lower privacy risk than sharing the training data itself, a gradient can still reveal some personal information about the individuals it was derived from, especially if the model is complex with a lot of fine-grained variables. You therefore still need to assess the risk of re-identification. In the case of federated learning, participating organisations may be considered joint controllers even though they don’t have access to each other’s data.
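
The sketch below is a highly simplified, single-round illustration of the federated idea: each party trains a ‘local’ model on its own data and only the learned parameters are shared and averaged into a ‘global’ model. The data, model type and averaging scheme are assumptions; real federated learning frameworks involve repeated rounds, secure aggregation and more careful weighting.

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Each party holds its own data locally (synthetic stand-ins here).
parties = [make_classification(n_samples=200, n_features=5, random_state=s)
           for s in range(3)]

# Each party trains a 'local' model on its own data.
local_models = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in parties]

# Only the learned parameters are shared and averaged into a 'global' model;
# the underlying training data never leaves each party.
global_model = LogisticRegression(max_iter=1000)
global_model.coef_ = np.mean([m.coef_ for m in local_models], axis=0)
global_model.intercept_ = np.mean([m.intercept_ for m in local_models], axis=0)
global_model.classes_ = local_models[0].classes_
global_model.n_features_in_ = parties[0][0].shape[1]

X_new, _ = make_classification(n_samples=10, n_features=5, random_state=99)
print(global_model.predict(X_new))
```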

Further reading inside this guidance

For more information on controllership in AI, read the section on controller/processor relationships.

 

Further reading outside this guidance

See ‘Rappor (randomised aggregatable privacy preserving ordinal responses)’ for an example of perturbation.

For an introduction to differential privacy, see ‘Differential privacy: an introduction for statistical agencies’.

How should we minimise personal data at the inference stage?

To make a prediction or classification about an individual, ML models usually require the full set of predictor variables for that person to be included in the query. As in the training phase, there are a number of techniques which you can use to minimise personal data, or mitigate risks posed to that data, at the inference stage, including:

  • converting personal data into less ‘human readable’ formats;
  • making inferences locally; and
  • privacy-preserving query approaches.

We consider these approaches below.

  • Converting personal data into less ‘human readable’ formats

In many cases the process of converting data into a format that allows it to be classified by a model can go some way towards minimising it. Raw personal data will usually first have to be converted into a more abstract format for the purposes of prediction. For example, human-readable words are normally translated into a series of numbers (called a ‘feature vector’).

This means that if you deploy an AI model you may not need to process the human-interpretable version of the personal data contained in the query, for example if the conversion happens on the user’s device.
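
As a small, hypothetical example of such a conversion, the sketch below turns human-readable text into a fixed-length numeric feature vector on the device, before anything is sent to a remote model; the text and vector size are invented for illustration.

```
from sklearn.feature_extraction.text import HashingVectorizer

# On the user's device: convert raw, human-readable text into a numeric
# feature vector before anything is sent to the remote model.
vectoriser = HashingVectorizer(n_features=64, alternate_sign=False)
raw_text = ["example message typed by the user"]
feature_vector = vectoriser.transform(raw_text)

# Only the vector is transmitted for querying; it is far less readable to a
# human, but may still be personal data in the context of the model.
print(feature_vector.toarray()[0][:10])
```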

However, the fact that it is no longer easily human-interpretable does not imply that the converted data is no longer personal. Consider Facial Recognition Technology (FRT), for example. In order for a facial recognition model to work, digital images of the faces being classified have to be converted into ‘faceprints’. These are mathematical representations of the geometric properties of the underlying faces (eg the distance between a person’s nose and upper lip).

Rather than sending the facial images themselves to your servers, you could convert photos to faceprints directly on the individual’s device that captures them, before sending the faceprints to the model for querying. These faceprints would be less easily identifiable to a human than face photos.

However, faceprints are still personal (indeed, biometric) data, and are very much identifiable within the context of the specific facial recognition models that they are created for. Also, when used for the purpose of uniquely identifying an individual, they are special category data under data protection law.

  • Making inferences locally

Another way to minimise the personal data involved in prediction is to host the ML model on the device from which the query is generated and which already collects and stores the individual’s personal data. For example, an ML model could be installed on the user’s own device and make inferences ‘locally’, rather than being hosted on a cloud server.

For example, models for predicting what news content a user might be interested in could be run locally on their smartphone. When the user opens the news app, the day’s news is sent to the phone and the local model selects the most relevant stories to show to the user, based on the user’s personal habits or profile information, which are tracked and stored on the device itself and are not shared with the content provider or app store.

The constraint is that ML models need to be sufficiently small and computationally efficient to run on the user’s own hardware. However, recent advances in purpose-built hardware for smartphones and embedded devices mean that this is an increasingly viable option.

It is important to note that local processing is not necessarily out of scope of data protection law. Even if the personal data involved is processed entirely on the user’s device, the organisation which creates and distributes the model is still a controller in so far as it determines the means and purposes of processing.

Similarly, if personal data on the user’s device is subsequently accessed by a third party, this activity would constitute ‘processing’ of that data.

  • Privacy-preserving query approaches

If it is not feasible to deploy the model locally, other privacy-enhancing techniques exist to minimise the data that is revealed in a query sent to a ML model. These allow one party to retrieve a prediction or classification without revealing all of this information to the party running the model; in simple terms, they allow you to get an answer without having to fully reveal the question.

Further reading outside this guidance

See ‘Privad: practical privacy in online advertising’ and ‘Targeted advertising on the handset: privacy and security challenges’  for proof of concept examples for making inferences locally.

See ‘TAPAS: trustworthy privacy-aware participatory sensing’ for an example of privacy-preserving query approaches.

Does anonymisation have a role?

There are conceptual and technical similarities between data minimisation and anonymisation. In some cases, applying privacy-preserving techniques means that certain data used in ML systems is rendered pseudonymous or anonymous.

However, you should note that pseudonymisation is essentially a security and risk reduction technique, and data protection law still applies to personal data that has undergone pseudonymisation. In contrast, ‘anonymous information’ means that the information in question is no longer personal data and data protection law does not apply to it.

What should we do about storing and limiting training data?

Sometimes it may be necessary to retain training data in order to re-train the model, for example when new modelling approaches become available, or for debugging. However, where a model is established and unlikely to be re-trained or modified, the training data may no longer be needed. For example, if the model is designed to use only the last 12 months’ worth of data, your data retention policy should specify that data older than 12 months is deleted.
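
A minimal sketch of applying such a retention rule with pandas is shown below; the column names and the 12-month window are illustrative assumptions taken from the example above, and deletion would also need to cover any copies and backups.

```
import pandas as pd

# Illustrative training data with a timestamp recording when each record
# was collected.
training_data = pd.DataFrame({
    "customer_id":  ["C001", "C002", "C003"],
    "collected_at": pd.to_datetime(["2022-01-15", "2023-03-10", "2023-06-01"]),
    "income":       [32000, 45000, 28000],
})

# Retention rule from the example: keep only the last 12 months of data;
# anything older than the cut-off is dropped.
cutoff = pd.Timestamp.now() - pd.DateOffset(months=12)
training_data = training_data[training_data["collected_at"] >= cutoff]

print(training_data)
```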

Further reading outside this guidance

The European Union Agency for Cybersecurity (ENISA) has a number of publications about PETs, including research reports.