The ICO exists to empower you through information.

You have now answered all the questions in this toolkit. Below, you will find tailored advice based on the responses you provided. You may find it helpful to download a Word copy of this report, in which you can make notes on the advice given to help you track your progress and record your decision-making. If you have any problems downloading the report as a Word document, please let us know.

You should not view the completion of this toolkit or completion of all recommended actions as verification of data protection compliance. Many aspects of data protection compliance are specific to the unique circumstances of the processing. As such, this toolkit and the advice provided are intended for use as a guide only.

The report should help you start to think about your data protection obligations but it should not be used as a substitute for consulting your data protection officer (DPO). If you have not consulted your DPO, you can use this report as a guide on what to ask them.

Remember, this toolkit does not cover every factor to be considered when implementing a data analytics system. You should ensure you consider other aspects, including equality and human rights law. Other tools, such as the ALGO-CARE framework, may assist you in this.


4 December 2023

Ensure you have documented clearly why you have identified the processing as ‘necessary’

If you believe that the processing is necessary for a law enforcement purpose, you should ensure that you clearly document your reasoning. This should include considering whether any less intrusive methods of processing could reasonably achieve the same aim, in order to meet the ‘necessity’ test.

The DPA defines law enforcement purposes as ‘the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security.’

We have published guidance on fair and lawful processing under Part 3 of the DPA18.

Identify and document your lawful basis for sensitive processing

In some cases, analysis of raw data can infer sensitive personal data. Recent advances in data analytics have made it easier for systems to detect patterns in the world that are reflected in seemingly unrelated data. In doing so, new personal data can be created.  You should be mindful of this, and ensure that appropriate safeguards are in place.

Data created by analytics systems is a statistically informed guess rather than a fact. You should ensure that it is recorded and treated as such.

When conducting analysis on personal data, you should consider the likelihood that your system might be inferring sensitive data in order to make predictions, and actively monitor this possibility throughout the lifecycle of the system. If the potentially inferred characteristics are sensitive data, you should ensure that you have met the requirements outlined above.

Read ‘What if we accidentally infer special category data through our use of AI?’ in our guidance on AI and data protection for more information on inferred data.

Ensure your DPIA is regularly reviewed and kept up to date throughout the lifecycle of the project

A DPIA should be a ‘living document’ and should be kept under regular review.

If you are processing children’s data, we would encourage you to review your DPIA and consider whether there are any additional risks specific to children you may need to consider. Risks you may have already identified can be heightened in the context of processing children’s personal data. Children will be less aware of their information rights and how they can exercise them. Additionally, some risks arising from the processing might disproportionately impact children.

The ICO has published detailed guidance on DPIAs, including a checklist to help you review whether you have written a good DPIA.

Our guidance on AI and data protection contains specific guidance on DPIAs in the context of AI.

Continuously review the risks identified in your DPIA to ensure you have considered the impact of processing on the individuals concerned

There is no explicit definition of ‘high risk’ in the DPA18, but the various provisions on DPIAs make clear that this is about the risks to individuals’ rights and freedoms. This can refer to data subjects’ rights under data protection law, but also to fundamental rights and freedoms such as those protected by human rights and equality legislation.

The concept of potential harm or damage to individuals is linked to risk.

Examples of risks are where processing may lead to physical, material or non-material damage, in particular where the processing may give rise to:

  • discrimination, identity theft or fraud;
  • financial loss;
  • damage to reputation;
  • loss of confidentiality of personal data protected by professional secrecy;
  • unauthorised reversal of pseudonymisation; or
  • any other significant economic or social disadvantage.

You should consider whether you have identified and mitigated the above risks in your DPIA. If you have identified a high risk which you cannot mitigate, you are required to consult the ICO before you can begin the processing activity.

Our guidance on AI and data protection contains specific guidance on DPIAs in the context of AI.

Familiarise yourself with the responsibilities of your role as a controller, processor or joint controller

You should be mindful of the role and responsibilities of a controller, and ensure that you have a sufficient understanding of the technologies to be used in order to be able to exercise genuine control over the processing.

If you are a police force, you should be aware that your Chief Constable is the controller and ultimately has the responsibility to ensure that personal data is processed in a compliant manner.

The importance of understanding controller/processor relationships is set out in our guidance on the AI auditing framework and is relevant to processing taking place in the context of data analytics.

Ensure that logging functionality is operational for your system before it is deployed

If you operate automated processing systems (such as an IT database), section 62 of the DPA18 requires you to keep logs for at least the following processing operations:

  • Collection
  • Alteration
  • Consultation
  • Disclosure (including transfers)
  • Combination
  • Erasure                        

These logs must also record the justification for, and the date and time of, any access to or disclosure of the data, as well as the identities of the persons who have consulted, disclosed or received the data.
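To make the requirements above concrete, here is a minimal sketch of how a section 62-style audit log entry might be structured. This is an illustrative example only: the function name, field names and JSON format are our own assumptions, not a prescribed format.

```python
import json
from datetime import datetime, timezone

# The processing operations that section 62 of the DPA18 requires to be logged.
LOGGED_OPERATIONS = {
    "collection", "alteration", "consultation",
    "disclosure", "combination", "erasure",
}

def make_log_entry(operation, user_id, record_id, justification=None, recipient=None):
    """Build an illustrative section 62-style audit log entry.

    Consultation and disclosure entries must carry a justification;
    disclosures should also identify the recipient.
    """
    if operation not in LOGGED_OPERATIONS:
        raise ValueError(f"operation must be one of {sorted(LOGGED_OPERATIONS)}")
    if operation in {"consultation", "disclosure"} and not justification:
        raise ValueError(f"{operation} entries must record a justification")
    entry = {
        "operation": operation,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date and time
        "user_id": user_id,        # identity of the person acting on the data
        "record_id": record_id,
        "justification": justification,
        "recipient": recipient,    # identity of any recipient of a disclosure
    }
    return json.dumps(entry)
```

A real system would also need tamper-evident storage and retention controls; the sketch only shows the minimum fields the text above describes.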

There are, however, limitations on what you can use logs for. Any logs that you keep for the above processing operations may only be used for one or more of the following purposes:

  • to verify the lawfulness of processing;
  • to assist with self-monitoring by the controller or the processor, including the conduct of internal disciplinary proceedings;
  • to ensure the integrity and security of personal data; or
  • the purposes of criminal proceedings.


If an officer or member of police staff is suspected of inappropriately accessing your data analytics system to look at information relating to an acquaintance, the logging should show what was available to them at the time, which will assist with any potential internal investigations.

You, and any associated processors, are required to make these logs available to the Commissioner on request.

Review your contracts to ensure that they clearly delineate roles and responsibilities, and will enable you to meet your obligations under data protection law

Where ongoing relationships with processors or other third parties exist, it is crucial that the controller maintains oversight of operations. Data breaches by third-party processors will have a direct impact on controllers. Even where a controller does not appear to be directly responsible for a particular incident, these types of breaches can erode public trust in the data analytics system – particularly in a law enforcement context.

You, not the provider, are responsible for ensuring your data analytics system complies with data protection law. If you procure an off-the-shelf system from a third-party supplier and it does not offer inherent explainability, you may need to deploy a supplementary model alongside it to provide explanations.

Section 59 of the DPA18 sets out a number of terms that must be included in the contract between a controller and any processor.

Review your data sharing agreement to ensure that it clearly delineates roles and responsibilities and supports you in meeting your data protection obligations

The data sharing agreement must set out each controller’s respective responsibilities (for example, for personal data breaches), and must designate which controller is to be the contact point for data subjects.

You should make clear in the agreement how any data shared will be secured, and how any requests from data subjects to exercise their rights will be considered. You should also consider how accuracy will be maintained.

Any data sharing agreement should be reviewed and signed by your organisation’s DPO.

Consider the following advice on data protection by design and default

Controllers should consider that if the information of living persons is being used, even during development, then personal data is being processed. You should ensure that risks around this use of data are documented and mitigated. You may wish to consider the use of synthetic data in the development of your system.

For more information, read our guidance on data protection by design and default in our Guide to Law Enforcement Processing.

Read our guidance on the AI auditing framework on understanding design choices better, so you can design high-performing systems whilst still protecting the rights and freedoms of individuals.

Ensure that any competing interests identified are documented and monitored

Competing interests in a data analytics solution should be monitored throughout the project lifecycle to guard against drift and to manage associated risk. You may wish to include this information as part of your DPIA.

Some examples of competing interests and how to manage them are presented in the guidance on the AI auditing framework.

Ensure that risks around the creation of new personal data and any mitigating measures are documented in your DPIA

Your DPIA should identify any risks associated with data creation, and outline how you will meet your data protection responsibilities in respect of new data you have created. Key points to consider include transparency, accuracy and data subjects’ rights.

Consider the following additional advice on individual impact, and document any adverse impact identified

Aside from the direct impact of acting on a system’s recommendation, you should consider other, more subtle impacts. For example:

  • What is the long-term effect of a person becoming a data point in an analytics system?

  • Are they effectively tagged with certain characteristics moving forward?
  • What impact could this have on their life?

You should be clear what the purpose of the processing is, what the intended outcomes are and monitor the deployment of the system for any unintended consequences of the processing or function creep.

Considering the impact of the processing on an individual should form part of your assessment of the necessity and proportionality of processing. You should consider impact on individuals as part of your DPIA.

Ensure that measures are in place to identify, monitor and mitigate the risk of bias and discrimination throughout the project lifecycle

Bias may present itself at any stage of a project lifecycle: from initiation and design, through testing, to deployment, sale or repurposing. Data protection law is intended to balance the right to the protection of personal data with its function in society.

Processing of personal data which leads to discrimination and bias will impact on the fairness of that processing. This poses compliance issues with the fairness principle as well as risks to individuals’ rights and freedoms – including the right to non-discrimination.

From the very beginning of any data analysis lifecycle, you should determine and document your approach to bias and discrimination mitigation. This is so that you identify and put in place the appropriate safeguards and technical measures during the design and build phase. You should satisfy yourself that the data used to train your system is representative of the population you are conducting data analysis on.
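As an illustration of checking whether training data is representative, the sketch below compares each group's share of the training records against its expected share in the population being analysed. The function name, the tolerance value and the idea of comparing absolute proportion differences are our own illustrative assumptions, not a prescribed method.

```python
from collections import Counter

def representation_gaps(training_labels, population_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from their
    population share by more than `tolerance` (absolute difference).

    training_labels: iterable of group labels, one per training record.
    population_shares: dict mapping group label -> expected share (0 to 1).
    Returns a dict of flagged groups with observed vs expected shares.
    """
    counts = Counter(training_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps
```

In practice you would run a check like this for each protected characteristic you can lawfully measure, and investigate any flagged gap before training.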

Measuring bias and discrimination is not a one-off exercise. While you may measure on static test data during the design phase, in real-life situations your solution will be applied to new and changing populations. Just because a system is not biased or discriminatory initially does not mean it will remain so if the characteristics of the population the solution is applied to change in future.

This phenomenon is sometimes referred to as ‘concept drift’ or ‘model drift’, and various measures exist for detecting it. For instance, you can measure the frequency of classification errors over time; increasingly frequent errors may suggest drift. You should regularly assess drift and adjust the algorithm according to new data where necessary.
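The error-frequency check described above can be sketched as a simple sliding-window monitor that compares the recent error rate against the rate observed at deployment. This is an illustrative sketch only; the class name, window size and ratio threshold are assumptions that would need tuning for a real system.

```python
from collections import deque

class DriftMonitor:
    """Flag possible concept/model drift when the recent error rate rises
    well above the baseline error rate measured at deployment."""

    def __init__(self, baseline_error_rate, window_size=100, ratio_threshold=1.5):
        self.baseline = baseline_error_rate
        self.window = deque(maxlen=window_size)  # rolling record of recent errors
        self.ratio_threshold = ratio_threshold

    def record(self, prediction, actual):
        # Store True for a misclassification, False for a correct prediction.
        self.window.append(prediction != actual)

    def drift_suspected(self):
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent evidence yet
        recent_rate = sum(self.window) / len(self.window)
        return recent_rate > self.baseline * self.ratio_threshold
```

More formal tests (such as the Drift Detection Method family) exist; the point of the sketch is simply that drift monitoring needs ground-truth outcomes fed back to it continuously, not a one-off evaluation.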

For more information about addressing bias, read ‘How should we address risks of bias and discrimination?’ in our guidance on AI and data protection.

Review our guidance on security to help you ensure that you have reasonable assurance of the technical and organisational security measures in place

Compliance with data protection security requirements may be more challenging for data analysis systems than for older, more established technologies. For example, your use of data analytics may rely heavily on third-party code or relationships with suppliers. 

As well as the technical security of your data analytics system, you should consider which organisational security measures you will need to implement, such as system-specific staff training and internal guidance.

Read our guidance on security in the Guide to the GDPR, and the ICO/NCSC Security Outcomes, for general information about security under data protection law. Though our security guidance focuses on general processing rather than law enforcement, the principles remain relevant across both regimes.

Read our guidance on the AI auditing framework for more specific information about security of AI systems under data protection law.

Consider whether further measures can be implemented to reduce the amount of personal data collected, or to enhance privacy

Modern data analytics can uncover previously undetected patterns, which can tempt you to collect more data in order to gain additional insights. However, before you start processing any personal data you must be clear about your purposes for using data analytics and what personal data is necessary for you to achieve those purposes. 

There are a range of techniques for enhancing privacy which you can use to minimise the personal data being processed at the training phase, including:

  • perturbation, or adding ‘noise’;
  • synthetic data; and
  • federated learning.
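As a minimal illustration of the first technique, the sketch below perturbs an aggregate count with Laplace-distributed noise, in the style used by differential privacy. It is an assumption-laden sketch, not a complete privacy implementation: the function name and epsilon default are our own, and there is no privacy budget accounting.

```python
import random

def noisy_count(true_count, epsilon=1.0):
    """Return a count perturbed with Laplace noise (query sensitivity 1).

    Smaller epsilon -> more noise and stronger privacy protection.
    Illustrative only: a production system would also track a privacy
    budget across repeated queries.
    """
    scale = 1.0 / epsilon
    # The difference of two independent exponential samples with mean
    # `scale` is Laplace-distributed with that scale.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise
```

The noise is unbiased, so averages over many noisy queries remain close to the truth while any single released count reveals little about one individual.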

Read our guidance on the AI auditing framework for more information. 

You have considered how you can fulfil requests from individuals to exercise their rights

We have recently produced guidance on how individual rights apply to different stages of the AI lifecycle as part of our guidance on AI and data protection.

You have considered how human intervention can be meaningful, but may find the following guidance useful

When using data analytics in a law enforcement setting, human intervention and explainability are very important. Decisions informed by systems using data analytics could have a significant impact on an individual, and may be scrutinised by a court.

To ensure that human intervention is meaningful, you should:

  • consider the necessary system requirements to support a meaningful human review from the design phase, particularly the interpretability requirements and effective user-interface design to support human reviews and interventions;
  • design and deliver appropriate training and support for human reviewers; and
  • give staff the appropriate authority, incentives and support to address or escalate individuals’ concerns and, if necessary, override the AI system’s decision.

The above list is not exhaustive. You should read our guidance on explaining decisions made with AI on how, and to what extent, complex AI systems might affect your ability to provide meaningful explanations to individuals.
