
Controller/Processor – next steps

We have guidance on controller/processor relationships, and on contracts and liabilities, in the Guide to the GDPR, which will help you understand your responsibilities.

For a discussion of why understanding controller and processor relationships matters, read the section on understanding controller/processor relationships in our guidance on the AI auditing framework.

For more information about what explaining AI means for your organisation, read our guidance on explaining AI decisions.

Conducting a DPIA – next steps

Carrying out a DPIA is the responsibility of the controller, not the DPO. However, the DPO can play a very important and useful role in assisting the controller. If the controller disagrees with the DPO’s advice, the DPIA documentation should specifically justify, in writing, why the advice has not been taken into account.

In addition to conducting a DPIA, you may also be required to undertake other kinds of impact assessments, or do so voluntarily. For instance, public sector organisations must undertake equality impact assessments, while other organisations voluntarily undertake ‘algorithm impact assessments’ or ‘ethical impact assessments’. There is no reason why you cannot combine these exercises, so long as the assessment encompasses all the requirements of a DPIA.

We have published more information about how to conduct a DPIA.

High risk in DPIA – next steps

If you have carried out a DPIA that identifies a high risk, which you cannot sufficiently mitigate, you must consult the ICO. You cannot go ahead with the processing until you have done so.

The focus is on the ‘residual risk’ after taking any mitigating measures. If your DPIA identified a high risk, but you have taken sufficient measures to mitigate it so that it is no longer a high risk, you do not need to consult the ICO.

The Information Commissioner will respond within six weeks. This timescale may be extended by a further month, depending on the complexity of the processing you intend to carry out. The ICO will inform the controller and, where applicable, the processor of any such extension, together with the reasons for the delay, within one month of receiving the request for consultation.

An organisation specialising in car insurance is considering using data analytics in order to provide customised insurance premiums to their customers. They identify that the processing is likely to result in a high risk to individuals and decide to complete a DPIA.

During the DPIA, the organisation identifies a risk that customers are given quotes based on their sex rather than their driving competence. The organisation recognises that this is a high risk because it gives rise to discrimination, but it is unable to find any way to sufficiently mitigate the risk. It must therefore consult the ICO before it starts processing.
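
To make this kind of risk concrete, the sketch below checks generated quotes for a disparity between groups. It is a minimal illustration only: the quotes DataFrame, column names and tolerance are all hypothetical, and a gap alone does not establish discrimination.

```python
# Hypothetical check for sex-based disparity in generated quotes.
# The DataFrame, column names and 5% tolerance are illustrative
# assumptions, not prescribed values.
import pandas as pd

quotes = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M"],
    "premium": [520.0, 610.0, 745.0, 688.0, 575.0, 702.0],
})

group_means = quotes.groupby("sex")["premium"].mean()
gap = group_means.max() - group_means.min()
print(group_means)
print(f"Gap between group averages: £{gap:.2f}")

# A large gap does not prove discrimination on its own, but it is a
# signal to investigate whether sex (or a proxy for it) drives quotes.
if gap / group_means.mean() > 0.05:  # illustrative 5% tolerance
    print("Warning: quotes differ substantially by sex - investigate.")
```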

Read our list of processing operations ‘likely to result in high risk’ for examples of operations that require a DPIA, and further detail on which criteria are high risk in combination with others.

Data protection by design – next steps

For more information, read the section on data protection by design and default in the Guide to the GDPR.

Read the section on understanding design choices in our guidance on the AI auditing framework, so you can design high-performing systems whilst still protecting the rights and freedoms of individuals.

Competing interests – next steps

If you are using data analytics, you need to identify and assess any competing interests, and strike an appropriate balance between them in your context, whilst continuing to meet your legal obligations. You should also document the decisions you make.

Privacy vs statistical accuracy

If you are using AI in your data analytics solution, you will find that, in general, the more data your AI system is trained on, the more statistically accurate it will be. That is, the more likely it will be to capture any underlying, statistically useful relationships between the features in the datasets.

However, generally speaking, the more data points you collect about each person, and the more people whose data is included in the dataset, the greater the risks to those individuals.
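
To illustrate the accuracy side of this trade-off, here is a minimal sketch using scikit-learn: it trains the same model on progressively larger samples and reports test accuracy. The synthetic dataset, model choice and sample sizes are all assumptions for illustration.

```python
# Minimal sketch of the accuracy side of the trade-off: test accuracy
# as a function of training set size, on synthetic data. The dataset,
# model and sample sizes are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in [50, 200, 1000, len(X_train)]:
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>5} records: test accuracy = "
          f"{model.score(X_test, y_test):.3f}")

# Accuracy typically rises with more training data - but every extra
# record is another person whose data is processed: the privacy cost.
```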

Some examples of competing interests, and how to manage them, are presented in our guidance on the AI auditing framework.

Lawful bases – next steps

You should determine your lawful basis before you begin processing, and you should document it, along with a justification for why you believe it applies. This will help you comply with your accountability obligations, and will also help you when writing your privacy notices.

To find out more about the six options and which one will be most appropriate for your processing, read our guidance on lawful basis for processing.

Consent – next steps

For consent to apply, individuals must have a genuine choice about whether or not you can use their data. This may have implications depending on what you intend to do with the data – it can be difficult to ensure you collect valid consent for more complicated processing operations. For example, the more things you want to do with the data, the more difficult it is to ensure that consent is genuinely specific and informed.

An online news service decides to use a data analytics solution to provide their users with more personalised content. Before they use the solution to process personal data, they ask their users whether they would like to use this new service. They provide transparent information about what personal information they will process and for what purposes. They also inform their users that if they refuse to provide consent, they can still use all the other services on the site. Users can also withdraw consent they have previously given just as easily as they provided it in the first place. Withdrawing consent also means that all of a user’s data used as part of the new service is removed. In this case, consent is likely to be valid.
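
As a rough sketch of the withdrawal mechanics in this example, personalisation data could be keyed to a consent record so that withdrawing consent also deletes the data collected for the service. The class and method names below are hypothetical.

```python
# Rough sketch: personalisation data keyed to a consent flag, so that
# withdrawing consent removes the data collected for the new service.
# Class and method names are hypothetical illustrations.
class PersonalisationService:
    def __init__(self):
        self.consented = set()   # user ids who opted in
        self.profile_data = {}   # user id -> data held for personalisation

    def give_consent(self, user_id):
        self.consented.add(user_id)
        self.profile_data[user_id] = []

    def withdraw_consent(self, user_id):
        # withdrawal must be as easy as giving consent; here it also
        # deletes the data used for the personalised service
        self.consented.discard(user_id)
        self.profile_data.pop(user_id, None)

    def record_interaction(self, user_id, item):
        if user_id in self.consented:  # only process with valid consent
            self.profile_data[user_id].append(item)

svc = PersonalisationService()
svc.give_consent("u1")
svc.record_interaction("u1", "read: technology article")
svc.withdraw_consent("u1")
assert "u1" not in svc.profile_data  # data removed on withdrawal
```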

Read our guidance on consent in the Guide to the GDPR for more information.

Contract – next steps

Read our guidance on contracts as a lawful basis in the Guide to the GDPR for more information.

Legal obligation, public task, vital interests – next steps

Read our guidance on legal obligation, public task and vital interests in the Guide to the GDPR for more information.

Legitimate interests – next steps

While legitimate interests is the most flexible lawful basis for processing, it is not always the most appropriate. Relying on it also means you are taking on additional responsibility for considering and protecting people’s rights and interests.

There are three elements to the legitimate interests basis. It helps to think of this as a three-part test. You need to:

  • identify a legitimate interest (the ‘purpose test’);
  • show that the processing is necessary to achieve it (the ‘necessity test’); and
  • balance it against the individual’s interests, rights and freedoms (the ‘balancing test’).

You should address and document these considerations as part of your legitimate interests assessment (LIA).
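
If you keep assessment records in a structured form, one illustrative way to make the three tests explicit is a simple record type. The field names below are an assumption for this sketch, not an ICO-mandated schema.

```python
# Illustrative structure for documenting a legitimate interests
# assessment (LIA); field names are assumptions for this sketch,
# not an ICO-mandated schema.
from dataclasses import dataclass

@dataclass
class LegitimateInterestsAssessment:
    purpose_test: str    # what legitimate interest is being pursued?
    necessity_test: str  # why is the processing necessary to achieve it?
    balancing_test: str  # why don't individuals' interests override it?
    outcome: str         # e.g. proceed / proceed with safeguards / stop

lia = LegitimateInterestsAssessment(
    purpose_test="Detecting fraudulent claims to protect customers.",
    necessity_test="Fraud patterns cannot be found without analysing claims data.",
    balancing_test="Minimal data used; customers would reasonably expect checks.",
    outcome="Proceed with safeguards.",
)
print(lia)
```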

Read our guidance on legitimate interests in the Guide to the GDPR.

We have also published a lawful basis assessment tool which you can use to help you decide which basis is appropriate for you, as well as a legitimate interests assessment template.

No lawful basis identified – next steps

You must have a valid lawful basis in order to process personal data. You must determine your lawful basis before you begin processing, and you should document it.

There are six available lawful bases for processing. No single basis is ‘better’ or more important than the others – which basis is most appropriate to use will depend on your purpose and relationship with the individual. To find out more about the six options and which one will be most appropriate for your processing, read our guidance on lawful basis for processing.

We also have an interactive tool to help you decide which lawful basis is likely to be the most appropriate.

Special category data – next steps

To process special category data, you need a lawful basis under Article 6, as well as a separate condition for processing under Article 9, although these do not have to be linked. You should document both your lawful basis for processing and your special category condition so that you can demonstrate compliance and accountability.

You can find more information in our guidance on special category data.

In some cases, applying data analytics to raw data can lead to special category data being inferred. Recent advances in data analytics have made it easier for systems to detect patterns in the world that are reflected in seemingly unrelated data.

When conducting data analysis on personal data, you should proactively assess the chances that your system might be inferring special category data to make predictions, and actively monitor this possibility throughout the lifecycle of the system. If the potentially inferred characteristics are special category data, you should ensure that you have an appropriate Article 9 condition for processing.
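
One rough way to assess this, sketched below, is a ‘probe’ model: test how well the features your system uses can predict a known special category attribute on a sample where you lawfully hold it. High probe accuracy suggests the features effectively encode that attribute. The data and the threshold here are hypothetical.

```python
# Rough sketch of an inferability check: if a simple probe model can
# predict a special category attribute from your system's input
# features, the system may effectively be inferring it. Data and
# threshold are hypothetical assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                  # stand-in input features
y_health = (X[:, 3] + X[:, 7] > 0).astype(int)   # stand-in health attribute

probe = LogisticRegression(max_iter=1000)
score = cross_val_score(probe, X, y_health, cv=5).mean()
print(f"Probe accuracy predicting the attribute: {score:.2f}")

# Accuracy well above the base rate suggests the features carry the
# special category information, so an Article 9 condition may be needed.
if score > 0.75:  # illustrative threshold
    print("Features appear to encode the special category attribute.")
```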

Read our guidance on the AI auditing framework for more information.

Solely automated decisions or profiling – next steps

Read our guidance on rights related to automated decision-making including profiling in the Guide to the GDPR.

Read the ICO’s and the Alan Turing Institute’s guidance on ‘Explaining decisions made with AI’ for information on how to provide meaningful information about the logic involved in an AI system.

Statistical accuracy – next steps

Read the section on what you need to do about statistical accuracy in our guidance on the AI auditing framework.

Preventing bias – next steps

You should determine and document your approach to bias and discrimination mitigation from the very beginning of any data analysis lifecycle, so that you can identify and put in place appropriate safeguards and technical measures during the design and build phase.

You should be satisfied that the data is representative of the population you are conducting data analysis on. For example, for a high street bank operating in the UK, the training data could be compared against the most recent Census.
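
As a rough illustration of such a comparison, you could test the demographic mix of your training data against published population proportions with a goodness-of-fit test. All figures below are invented; in practice you would use real Census statistics.

```python
# Rough sketch: compare the age-band mix of training data against
# reference population proportions using a chi-square goodness-of-fit
# test. All figures are invented for illustration - use real Census
# statistics in practice.
from scipy.stats import chisquare

observed_counts = [180, 420, 260, 140]        # training records per age band
population_props = [0.20, 0.35, 0.28, 0.17]   # hypothetical Census shares

total = sum(observed_counts)
expected_counts = [p * total for p in population_props]

stat, p_value = chisquare(observed_counts, expected_counts)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Training data mix differs significantly from the population.")
```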

For more information about addressing bias, read our guidance on the AI auditing framework.

Monitoring bias – next steps

Bias can emerge or worsen over time as the statistical patterns in the underlying data change (known as ‘concept drift’ or ‘model drift’), and various measures exist for detecting it. For instance, you can measure the distance between classification errors over time; increasingly frequent errors may suggest drift. You should regularly assess drift and adjust the algorithm in light of new data where necessary.
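
A minimal sketch of this kind of monitoring: track the error rate over a sliding window of recent decisions and flag when it rises well above the baseline measured at deployment. The window size and tolerance are illustrative assumptions.

```python
# Minimal sketch of error-rate drift monitoring: compare a recent
# window's error rate with a baseline and flag a sustained rise.
# Window size and tolerance are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_error, window=200, tolerance=2.0):
        self.baseline = baseline_error
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, prediction, actual):
        self.recent.append(prediction != actual)
        return self.drifting()

    def drifting(self):
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        rate = sum(self.recent) / len(self.recent)
        return rate > self.baseline * self.tolerance

monitor = DriftMonitor(baseline_error=0.05)
# in production: if monitor.record(model_output, observed_outcome):
#     trigger review / retraining on newer data
```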

Read our guidance on the AI auditing framework for more information.

Privacy notice – next steps

Read the ICO’s and the Alan Turing Institute’s guidance on explaining AI decisions for more information about explainability.

Security – next steps

Read our guidance on security in the Guide to the GDPR, and the ICO/National Cyber Security Centre (NCSC) Security Outcomes, for general information about security under data protection law.

Read our guidance on the AI auditing framework for more specific information about security of AI systems under data protection law.

Data minimisation – next steps

Read our guidance on the data minimisation principle in the Guide to the GDPR.

Before you start processing any personal data, you must be clear about what your purposes for using data analytics are, and what personal data is necessary for you to achieve those purposes.

There are a range of techniques for enhancing privacy which you can use to minimise the personal data being processed at the training phase, including:

  • perturbation, or adding ‘noise’ (see the sketch after this list); and
  • federated learning.
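
As a minimal sketch of the first technique, numeric training features can be perturbed with random noise so that individual values are obscured while aggregate patterns largely survive. The noise scale below is an illustrative parameter; formal privacy guarantees would require a proper differential privacy framework.

```python
# Minimal sketch of perturbation: add random noise to numeric training
# features so individual values are obscured while aggregate patterns
# largely survive. The Laplace scale is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
ages = np.array([34.0, 51.0, 27.0, 45.0, 62.0])  # hypothetical feature

scale = 3.0  # larger scale = more privacy, less statistical accuracy
noisy_ages = ages + rng.laplace(loc=0.0, scale=scale, size=ages.shape)

print("original: ", ages)
print("perturbed:", noisy_ages.round(1))
print(f"mean shift: {abs(ages.mean() - noisy_ages.mean()):.2f}")
```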

Read our guidance on the AI auditing framework for more information.

Individual rights – next steps

Read our guidance on individual rights in the Guide to the GDPR and on how individual rights apply to different stages of the AI lifecycle in the guidance on the AI auditing framework.

Individual rights and automated decisions or profiling – next steps

Read the ICO’s and the Alan Turing Institute’s guidance on explaining decisions made with AI for practical advice on explaining the processes, services and decisions delivered or assisted by AI to the individuals affected by them.

Meaningful human intervention – next steps

To ensure that human intervention is meaningful, you should:

  • consider, from the design phase, the system requirements necessary to support meaningful human review – particularly the interpretability requirements and effective user-interface design needed to support human reviews and interventions;
  • design and deliver appropriate training and support for human reviewers; and
  • give staff the appropriate authority, incentives and support to address or escalate individuals’ concerns and, if necessary, override the AI system’s decision (a rough sketch of such an override flow follows this list).
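
In that sketch, decisions are routed to a reviewer who has explicit authority to confirm or change the AI system’s output. All names and the confidence threshold are hypothetical illustrations.

```python
# Rough sketch of a human-in-the-loop override flow: the decision is
# only final once a reviewer with override authority has confirmed or
# changed it. Names and the threshold are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    ai_outcome: str
    ai_confidence: float
    final_outcome: str = ""
    reviewed_by: str = ""

def needs_review(decision, threshold=0.9):
    # route low-confidence (or contested) decisions to a human
    return decision.ai_confidence < threshold

def human_review(decision, reviewer, outcome, reason):
    # the reviewer has authority to confirm or override the AI output
    decision.final_outcome = outcome
    decision.reviewed_by = reviewer
    print(f"{reviewer} set outcome '{outcome}' ({reason})")

d = Decision("subject-42", ai_outcome="decline", ai_confidence=0.62)
if needs_review(d):
    human_review(d, "reviewer-7", "approve",
                 "AI relied on a feature irrelevant to this case")
else:
    d.final_outcome = d.ai_outcome
```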

Read the ICO’s and the Alan Turing Institute’s guidance on explaining decisions made with AI on how, and to what extent, complex data analytics might affect your ability to provide meaningful explanations to individuals. The guidance will also be useful for you if you are undertaking less complex data analytics.