The ICO exists to empower you through information.

You have now answered all the questions in this toolkit. Below, you will find tailored advice based on the responses you provided. You may find it helpful to download a Word copy of this report, in which you can make notes on the advice given to help you track your progress and record your decision-making. If you have a problem downloading the report as a Word document, please let us know.

You should not view the completion of this toolkit, or completion of all recommended actions, as verification of data protection compliance. Many aspects of data protection compliance are specific to the unique circumstances of the processing. As such, this toolkit and the advice provided are intended for use as a guide only.

The report should help you start to think about your data protection obligations but it should not be used as a substitute for consulting your data protection officer (DPO). If you have not consulted your DPO, you can use this report as a guide on what to ask them.

Remember, this toolkit does not cover every factor to be considered when implementing a data analytics system. You should ensure you consider other aspects, including equality and human rights law. Other tools, such as the ALGO-CARE framework, may assist you in this.


7 May 2024

Ensure you have documented clearly why you have identified the processing as ‘necessary’

If you believe that the processing is necessary for a law enforcement purpose, you should ensure that you clearly document your reasoning. This should include a consideration of whether any less intrusive methods of processing could reasonably achieve the same aim, in order to meet the ‘necessity’ test.

The DPA18 defines law enforcement purposes as ‘the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security’.

We have published guidance on fair and lawful processing under Part 3 of the DPA18.

Identify and document your lawful basis for sensitive processing

Processing the categories of data you have selected constitutes ‘sensitive processing’ as defined by the DPA18, and is therefore subject to additional safeguards. In order for sensitive processing to be lawful, you must meet one of the following two conditions:

  • The data subject has consented to the processing; or
  • The processing is strictly necessary for the law enforcement purpose and meets at least one of the conditions listed in Schedule 8 of the DPA18.

‘Strictly necessary’ in this context means that the processing must relate to a pressing social need, and cannot reasonably be achieved through less intrusive means. You will not meet this requirement if you can achieve the purpose for processing through other reasonable means.

In order to undertake sensitive processing, you must have an appropriate policy document in place. This document must explain:

(a) your procedures for ensuring compliance with the law enforcement data protection principles; and

(b) your policies on the retention and erasure of this data.

You must retain this policy from the time you begin sensitive processing until six months after it has ended, undertake regular reviews and keep it up to date. You must also make it available to the Information Commissioner upon request.

You can find more information about sensitive processing in our Guide to Law Enforcement Processing.

Consider whether your system could infer sensitive data

In some cases, sensitive personal data can be inferred from analysis of raw data. Recent advances in data analytics have made it easier for systems to detect patterns in the world that are reflected in seemingly unrelated data. In doing so, new personal data can be created. You should be mindful of this, and ensure that appropriate safeguards are in place.

Data created by analytics systems is a statistically informed guess rather than a fact. You should ensure that it is recorded and treated as such.

When conducting analysis on personal data, you should consider the chances that your system might be inferring sensitive data in order to make predictions and actively monitor this possibility through the lifecycle of the system. If the potentially inferred characteristics are sensitive data, you should ensure that you have met the requirements outlined above.

Read ‘What if we accidentally infer special category data through our use of AI?’ in our guidance on AI and data protection for more information on inferred data.

You must seek advice on this project from your organisation’s DPO

DPOs play a core role in an organisation’s compliance with data protection legislation. Their tasks include informing and advising on obligations under data protection law, providing advice on data protection impact assessments and monitoring compliance.

Controllers are required by law to involve their DPO in a meaningful and timely fashion in any project related to the protection of personal data. You should therefore ensure that your DPO is involved in any data analytics project from the beginning. Doing so will help you to demonstrate accountability, and will assist you in implementing data protection by design and default.

Our Guide to Law Enforcement Processing contains further information about the role of the DPO.

Conduct a DPIA

Part 3 of the DPA18 states that where a type of processing is likely to result in a high risk to the rights and freedoms of individuals, the controller must complete a DPIA prior to the processing beginning.

Processing that is likely to result in a high risk includes (but is not limited to):

  • systematic and extensive processing activities, including profiling, where decisions have legal effects, or similarly significant effects, on individuals;
  • large-scale processing of special categories of data (or ‘sensitive data’ when processing for a law enforcement purpose) or personal data relating to criminal convictions or offences; and
  • using new technologies (for example, surveillance systems).

Therefore, if you are considering using data analytics, then you are required to carry out a DPIA.

A DPIA must contain:

  • at least a general description of your processing operations and the purposes;
  • an assessment of the risks to the rights and freedoms of individuals;
  • the measures envisaged to address those risks;
  • the safeguards, security measures and mechanisms in place to ensure you protect the personal data; and
  • a demonstration of how you are complying with Part 3 of the Act, taking into account the rights and legitimate interests of the data subjects and any other people concerned.

There is no explicit definition of ‘risk’ in the DPA18, but the various provisions on DPIAs make clear that this is about the risks to individuals’ interests. The concept of potential harm or damage to individuals is linked to risk. Examples of risks are where processing may lead to physical, material or non-material damage; in particular, where the processing may give rise to discrimination, identity theft or fraud, financial loss, damage to reputation, loss of confidentiality of personal data protected by professional secrecy, unauthorised reversal of pseudonymisation, or any other significant economic or social disadvantage.


A police force is considering using a data analytics system in order to detect incidences of domestic abuse. The system will analyse police data and produce outputs which predict whether a person is a victim or perpetrator of domestic abuse. This prediction will be used to inform police decisions on intervening in situations where domestic abuse is suspected, and could therefore have a very significant impact on people’s lives.

If inaccurate data is fed into this system, a person could be erroneously marked as a perpetrator of domestic abuse, leading to unnecessary and potentially distressing intervention.

Similarly, erroneous or incomplete data could lead to the system failing to detect and safeguard victims of abuse with potentially life-threatening consequences.

Further risks can arise which are inherent to the use of an analytics system. Where human biases are present in the data used to train the system, the system may extrapolate these, resulting in discrimination against certain groups.

It is therefore clear that the risks to the individuals affected by this processing activity are high, and a DPIA is required.

You must consider the nature, scope and context of the processing when assessing the risk to individuals’ rights and freedoms. In particular, you should consider that risks you have already identified may be heightened in the context of processing children’s personal data. Children will be less aware of their information rights and how they can exercise them. Additionally, some risks arising from the processing might disproportionately impact children.


The same police force is considering using their data analytics system to identify children who are at risk of domestic violence. The risks differ from those outlined in our previous example; where a child is the victim, they are less able to speak out or seek help for themselves. If the data analytics system fails to identify them as possibly requiring intervention, there could be a serious risk of harm to that child.

The opposite is also true. If a child were incorrectly identified as a potential victim, they and their family could be subject to unnecessary intervention causing harm and distress.

Part of the role of the DPO is to provide advice on carrying out DPIAs. If you are unsure whether a DPIA is needed, you should seek input from your DPO.

Completing a DPIA is a good opportunity to evidence your accountability, and is a useful tool in implementing data protection by design and default.

We have published more information about how to conduct a DPIA. Our guidance on AI and data protection contains specific guidance on DPIAs in the context of AI.

Continuously review the risks identified in your DPIA to ensure you have considered the impact of processing on the individuals concerned

There is no explicit definition of ‘high risk’ in the DPA18, but the various provisions on DPIAs make clear that this is about the risks to individuals’ rights and freedoms. This can refer to data subjects’ rights under data protection law, but also to fundamental rights and freedoms such as those protected by human rights and equality legislation.

The concept of potential harm or damage to individuals is linked to risk.

Examples of risks are where processing may lead to physical, material or non-material damage; in particular, where the processing may give rise to:

  • discrimination, identity theft or fraud;
  • financial loss;
  • damage to reputation;
  • loss of confidentiality of personal data protected by professional secrecy;
  • unauthorised reversal of pseudonymisation; or
  • any other significant economic or social disadvantage.

You should consider whether you have identified and mitigated the above risks in your DPIA. If you have identified a high risk which you cannot mitigate, you are required to consult the ICO before you can begin the processing activity.

Our guidance on AI and data protection contains specific guidance on DPIAs in the context of AI.

Determine and document your role (as a processor, controller or joint controller) and responsibilities in relation to the processing activity

In many cases, the various processing operations involved in data analytics may be undertaken by a number of different organisations. It is therefore crucial that you determine who is a controller, joint controller or processor if multiple organisations are involved in your use of data analytics. 

If you are making decisions about how and why personal data is to be processed, you are likely to be a controller. If you are doing so jointly with one or more competent authorities, you will likely be joint controllers. If you are acting under instruction from a competent authority and do not in any way decide the manner or means of processing, you are likely to be a processor.

You can find more information on controllers, joint controllers and processors in our Guide to Law Enforcement Processing.

If you are a police force, you should be aware that the Chief Constable is the controller and ultimately has the responsibility to ensure that personal data is processed in a compliant manner.

You should be mindful of the role and responsibilities of a controller, and ensure that you have a sufficient understanding of the technologies to be used in order to be able to exercise genuine control over the processing.

The importance of understanding controller/processor relationships is set out in our guidance on the AI auditing framework and is relevant to processing taking place in the context of data analytics.

Ensure that logging functionality is operational for your system before it is deployed

If you operate automated processing systems (any IT database), section 62 of the DPA18 requires you to keep logs for at least the following processing actions:

  • Collection
  • Alteration
  • Consultation
  • Disclosure (including transfers)
  • Combination
  • Erasure                        

These logs must also record the justification for, as well as the date and time of any access to or disclosure of the data. You must also log the identities of the persons who have consulted, disclosed or received the data.
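As an illustration only, a section 62-style log entry might capture these fields as shown in the sketch below. The names (`ProcessingLogEntry`, `LOGGED_ACTIONS`) are hypothetical; the DPA18 does not prescribe any particular implementation, so treat this as a minimal outline of the information a log record needs to hold.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# The six processing actions that section 62 of the DPA18 requires logs for.
LOGGED_ACTIONS = {
    "collection", "alteration", "consultation",
    "disclosure", "combination", "erasure",
}

@dataclass(frozen=True)
class ProcessingLogEntry:
    """One immutable audit record for a logged processing action."""
    action: str                        # one of LOGGED_ACTIONS
    record_id: str                     # identifier of the data touched
    user_id: str                       # who consulted/disclosed the data
    justification: str                 # why the access or disclosure occurred
    recipient: Optional[str] = None    # populated for disclosures and transfers
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self) -> None:
        if self.action not in LOGGED_ACTIONS:
            raise ValueError(f"action not logged under section 62: {self.action}")

# Example: an analyst consults a record during an active investigation.
entry = ProcessingLogEntry(
    action="consultation",
    record_id="case-4821/subject-07",
    user_id="analyst.jsmith",
    justification="review of risk output for investigation OP-112",
)
```

In a real system the entries would be written to append-only storage so they cannot be altered after the fact, and the `recipient` field would be completed for every disclosure or transfer.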

There are however limitations to what you can use logs for. Any logs that you keep for the above processing actions may only be used for one or more of the following purposes:

  • to verify the lawfulness of processing;
  • to assist with self-monitoring by the controller or the processor, including the conduct of internal disciplinary proceedings;
  • to ensure the integrity and security of personal data; or
  • the purposes of criminal proceedings.


If an officer or member of police staff is suspected of inappropriately accessing your data analytics system to look at information relating to an acquaintance, the logging should show what was available to them at the time, which will assist with any potential internal investigations.

You, and any associated processors, are required to make these logs available to the Commissioner on request.

Ensure that contracts are in place to manage any supplier or processor relationships to enable you to meet your obligations under data protection law

Where any of these ongoing relationships exist, it is crucial that the controller maintains oversight of operations. Data breaches by third-party processors will have a direct impact on controllers. Even where a controller does not appear to be directly responsible for a particular incident, these types of breaches can erode public trust in the data analytics system – particularly in a law enforcement context.

You, not the provider, are responsible for ensuring your data analytics system complies with data protection law. If you procure an off-the-shelf system from a third-party supplier and it does not contain inherent explainability, you may need another model alongside it.

Section 59 of the DPA18 sets out a number of terms that must be included in the contract between a controller and any processor.

Review whether any data sharing will take place, and implement appropriate data sharing agreements to govern it

Data sharing agreements are likely to be required where data is to be shared with other controllers, eg if you are a joint controller, or if you are disclosing information to, or receiving information from, another organisation. A good data sharing agreement will help you to demonstrate accountability, a key principle of data protection law.

The data sharing agreement must set out each controller’s respective responsibilities (for example, for personal data breaches), and must designate which controller is to be the contact point for data subjects.

You should make clear in the agreement how any data shared will be secured, and how any requests from data subjects to exercise their rights will be considered. You should also consider how accuracy will be maintained.

Any data sharing agreement should be reviewed and signed by your organisation’s DPO.

You are legally required to integrate data protection by design and default into your project

Although data protection does not dictate how designers should do their job, if you use data analytics to process personal data, you must show how you have integrated data protection into your project. Examples of how you can mitigate risks to information rights include privacy-enhancing technologies, such as pseudonymisation or anonymisation of personal data.
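As an illustration of one such technique, pseudonymisation can be implemented by replacing direct identifiers with keyed hashes. The sketch below uses a hypothetical function name (`pseudonymise`) and an inline example key; it is an outline of the idea, not a production design. Note that pseudonymised data remains personal data under the DPA18.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is repeatable, so records about the same person can
    still be linked for analysis, but the identifier cannot be
    recovered without the key. The key must be held separately under
    strict access control; pseudonymised data is still personal data.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative only; in practice the key comes from a managed secret store.
key = b"example-key-held-separately"
token = pseudonymise("jane.doe@example.com", key)
assert token == pseudonymise("jane.doe@example.com", key)  # linkable
assert "jane" not in token                                  # identifier removed
```

Using a keyed hash rather than a plain hash matters: without the key, an attacker who guesses candidate identifiers cannot confirm them against the tokens.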

Controllers should consider that if the information of living persons is being used, even during development, then personal data is being processed. You should ensure that risks around this use of data are documented and mitigated. You may wish to consider the use of synthetic data in the development of your system.

For more information, read our guidance on data protection by design and default in our Guide to Law Enforcement Processing.

Read our guidance on AI and data protection on understanding design choices better, so you can design high-performing systems whilst still protecting the rights and freedoms of individuals. 

Review our guidance on competing interests in AI systems

Your use of data analytics must comply with the requirements of data protection law. However, there can be a number of different values of interest, which may at times pull in different directions. For example, you may find that collecting more data improves the statistical accuracy of your data analytics system but risks contravening the data minimisation principle. Collecting less data may negatively impact the statistical accuracy but risks contravening the fairness principle.

You should be conscious of competing interests whilst procuring or developing any technology, and ensure that they are documented and monitored throughout the project lifecycle.

More information on competing interests and how to manage them is presented in the guidance on the AI auditing framework.

Consider the risks associated with the creation of new personal data

You will need to ascertain whether your system is capable of creating new personal data, and document this in your DPIA alongside the measures in place to mitigate any associated risks. For example, data creation could occur as a result of data matching.

Creating personal data brings with it responsibilities. In particular, you will need processes in place to ensure that you are able to fulfil your data protection obligations in respect of any new data created. This includes obligations around transparency and data subjects’ rights.

You will also need to consider the requirement to ensure data accuracy, and what measures you can implement to provide assurance in this respect. These could include quality assuring data sources, or implementing reviews of created data. Any such measures should be documented in your DPIA.

Consider and document how the use of data analytics could directly and indirectly affect individuals, including any mitigating or justifying factors

Aside from the direct impact of acting on a recommendation made by the system, you should consider other, more subtle impacts. For example, what is the long-term effect of a person becoming a data point in an analytics system? Are they effectively tagged with certain characteristics moving forward? What impact could this have on their life? You should be clear what the purpose of the processing is, what the intended outcomes are and monitor the deployment of the system for impact on individuals or the gradual widening of the use of data outside of its intended purpose (often referred to as ‘function creep’).

Considering the impact of the processing on an individual should form part of your assessment of the necessity and proportionality of processing.

Consider the statistical accuracy of your data analytics system in order to demonstrate compliance with data protection’s fairness principle

The GDPR mentions statistical accuracy in the context of profiling and automated decision-making at Recital 71. This states that organisations should put in place ‘appropriate mathematical and statistical procedures’ for the profiling of individuals as part of their technical measures. Any factors that may result in inaccuracies in personal data should be corrected, and the risk of errors should be minimised.

Though Recital 71 is intended for interpretation alongside the GDPR, it highlights the relevance of statistical accuracy to the fairness principle; you should only handle personal data in ways that people would reasonably expect, and not use it in ways that have unjustified adverse effects on them. Implementing the measures suggested in Recital 71 can therefore help you to demonstrate compliance with this principle.

See our guidance on what you need to do about statistical accuracy in our guidance on the AI auditing framework.

Review the accuracy of data used to train your data analytics system

Data protection law requires controllers to ensure that the data they process is accurate and provides individuals with the right to rectify inaccurate data. It is therefore likely that your organisation has measures in place to ensure the accuracy of the data you process. However, you should consider reviewing these measures to ensure that they remain sufficient.

When adopting an AI system, all staff responsible for its development, testing, validation, deployment and monitoring should be adequately trained to understand accuracy requirements and measures. You should also consider whether any quality assurance measures should be taken before you process any personal data and what quality assurances will be required in the longer term. These measures could include dip-sampling data.

Ensure that measures are in place to identify, monitor and mitigate the risk of bias and discrimination throughout the project lifecycle

Bias may present itself at any stage of a project lifecycle, from initiation and design, through testing, to deployment, sale or repurposing. Data protection law is intended to balance the right to the protection of personal data against its function in society. Processing of personal data which leads to discrimination and bias will impact on the fairness of that processing. This poses compliance issues with the fairness principle as well as risks to individuals’ rights and freedoms – including the right to non-discrimination. Furthermore, the GDPR specifically notes that organisations should take measures to prevent ‘discriminatory effects on natural persons’.

You should determine and document your approach to bias and discrimination mitigation from the very beginning of any data analysis lifecycle, so that you can take into account and put in place the appropriate safeguards and technical measures during the design and build phase. You should satisfy yourself that the data used to train your system is representative of the population you are conducting data analysis on.
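One practical way to sanity-check representativeness is to compare each group's share of the training data against its share of the population the system will be applied to. The sketch below is illustrative only: the function name is hypothetical and the figures are invented, and real population shares would come from an authoritative external source.

```python
from collections import Counter

def representation_gap(training_labels, population_shares):
    """Difference between each group's share of the training data and
    its share of the population the system will be applied to.

    A large negative gap flags a group the training data
    under-represents, and whose outcomes warrant closer review.
    """
    n = len(training_labels)
    counts = Counter(training_labels)
    return {
        group: counts.get(group, 0) / n - pop_share
        for group, pop_share in population_shares.items()
    }

# Hypothetical figures for illustration only.
training = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population = {"A": 0.60, "B": 0.30, "C": 0.10}
gaps = representation_gap(training, population)
assert gaps["C"] < 0  # group C is under-represented in the training data
```

A gap on its own does not prove discrimination, but it identifies where model performance should be broken down and examined group by group.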

Bias and discrimination are not static measures. While you may measure them on static test data during the design phase, in real-life situations your solution will be applied to new and changing populations. Just because a system is not biased or discriminatory initially does not mean it will remain so if the characteristics of the population it is applied to change in future.

This phenomenon is sometimes referred to as ‘concept/model drift’, and various measures exist for detecting it. For instance, you can measure the distance between classification errors over time; increasingly frequent errors may suggest drift. You should regularly assess drift and adjust the algorithm according to new data where necessary.
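A minimal version of the error-rate comparison described above might look like the following sketch. The function name and thresholds are hypothetical, and it assumes misclassifications are recorded chronologically during deployment; real drift-detection methods are more sophisticated, but the principle is the same.

```python
def error_rate_drift(errors, window=100, threshold=0.05):
    """Flag possible concept drift from a chronological error stream.

    `errors` is a 0/1 sequence (1 = misclassification) in deployment
    order. The error rate of the most recent window is compared with a
    reference window from early deployment; a rise beyond `threshold`
    suggests the population has shifted away from the training data.
    """
    if len(errors) < 2 * window:
        return False  # not enough history to compare yet
    reference = sum(errors[:window]) / window
    recent = sum(errors[-window:]) / window
    return recent - reference > threshold

# Stable early performance (5% errors) degrading recently (30% errors):
history = [0] * 95 + [1] * 5 + [0] * 300 + [1] * 30 + [0] * 70
assert error_rate_drift(history) is True
```

When such a check fires, the appropriate response is the one the guidance describes: reassess the system against current data and retrain or adjust it where necessary.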

For more information about addressing bias, read ‘How should we address risks of bias and discrimination?’ in our guidance on AI and data protection. 

Review our guidance on security to help you ensure that you have reasonable assurance of the technical and organisational security measures in place

Compliance with data protection security requirements may be more challenging for data analysis systems than for older, more established technologies. For example, your use of data analytics may rely heavily on third-party code or relationships with suppliers. 

As well as the technical security of your data analytics system, you should consider which organisational security measures you will need to implement, such as system-specific staff training and internal guidance.

Read our guidance on security in the Guide to the GDPR, and the ICO/NCSC Security Outcomes, for general information about security under data protection law. Though our security guidance focuses on general processing rather than law enforcement, the principles remain relevant across both regimes.

Read our guidance on the AI auditing framework for more specific information about security of AI systems under data protection law.

Consider whether further measures can be implemented to reduce the amount of personal data collected, or to enhance privacy

Modern data analytics can uncover previously undetected patterns, which can tempt you to collect more data in order to gain additional insights. However, before you start processing any personal data you must be clear about your purposes for using data analytics and what personal data is necessary for you to achieve those purposes. 

There are a range of techniques for enhancing privacy which you can use to minimise the personal data being processed at the training phase, including:

  • perturbation or adding ‘noise’;
  • synthetic data; and 
  • federated learning.
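As an illustration of the first technique, perturbation can be as simple as adding Laplace noise to released statistics, which is the mechanism underpinning differential privacy. The sketch below uses hypothetical names and is not a complete differential-privacy implementation (it ignores, for example, privacy-budget accounting across repeated queries).

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution (inverse-CDF method)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count perturbed with Laplace noise of scale 1/epsilon.

    The released figure stays useful in aggregate, but no single
    individual's presence can be confidently inferred from any one
    release. Smaller epsilon means more noise and stronger privacy.
    """
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(7)  # for a reproducible illustration
released = noisy_count(100, epsilon=1.0)
```

The design choice here is the trade-off the guidance describes elsewhere: more noise (smaller epsilon) gives stronger privacy at the cost of statistical accuracy.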

Read our guidance on the AI auditing framework for more information. 

Consider how you will be able to fulfil requests from individuals to exercise their rights

You should consider how you will be able to fulfil a request from an individual to exercise any of their rights (for example, the rights of access, rectification and erasure) when implementing a data analytics product.

Read our guidance on individual rights in the Guide to Law Enforcement Processing and on how individual rights apply to different stages of the AI lifecycle in the guidance on the AI auditing framework. 

Consider how the right to be informed applies to your project, and how you can fulfil its requirements

There has been significant public attention on the use of data analytics software in law enforcement, particularly in relation to AI. You should consider the benefits of making information about how your organisation uses data analytics publicly available. Doing so can increase public trust and confidence.

Publishing information about how you are using personal data will help you to demonstrate compliance with the fairness and accountability principles of data protection law. You should consider how, in practice, data subjects will have sufficient awareness that their personal data is being processed without such information, and consequently how this may affect their ability to exercise their data protection rights.

You can find out more about the information you are required to publish and applicable exemptions in our guidance on the right to be informed.

In addition to obligations under Part 3 of the DPA18, you should consider your duties as a public authority under the Freedom of Information Act 2000 (FOIA). Given the current interest in data analytics, AI and machine learning, your organisation might receive FOIA requests for information about the use of such technologies. You may wish to consider proactively publishing information on your data analytics project as part of your organisation’s publication scheme.

Consider how you will ensure that any human intervention in the decision-making process is meaningful

When using data analytics in a law enforcement setting, human intervention and explainability is very important. Decisions informed by systems using data analytics could have a significant impact on an individual, and may be scrutinised by a court.            

To ensure that human intervention is meaningful, you should:

  • consider the necessary system requirements to support a meaningful human review from the design phase, particularly the interpretability requirements and effective user-interface design to support human reviews and interventions;
  • design and deliver appropriate training and support for human reviewers; and
  • give staff the appropriate authority, incentives and support to address or escalate individuals’ concerns and, if necessary, override the AI system’s decision.

The above list is not exhaustive. You should read our guidance on explaining decisions made with AI on how, and to what extent, complex AI systems might affect your ability to provide meaningful explanations to individuals.
