Previously asked questions
Due to the Data (Use and Access) Act coming into law on 19 June 2025, this guidance is under review and may be subject to change. The Plans for new and updated guidance page will tell you which guidance will be updated and when this will happen.
Latest updates - last updated 16 December 2025
16 December 2025 - We have added the following questions to existing sections:
- "Could information gained from a paperless parking permit look-up system be classed as personal information?" (Is it personal data?)
- "Is my organisation a controller or processor?" (Controllers and processors)
- "What are the controller-processor relationships in a multi-organisational project?" (Controllers and processors)
- "What lawful basis and Article 9 condition would be most appropriate for a AI review system that aims to identify and remove harmful content?" (Lawful basis)
- "Would legitimate interests or consent be the most suitable lawful basis for collecting information about club members?" (Lawful basis)
- "Is consent required to gather large amounts of CCTV footage to test an AI system?" (Lawful basis)
- "Can mobile phone ‘signatures’ be collected to estimate the number of people in an area without consent?" (Lawful basis)
- "How can we ensure that we have received appropriate consent from consumers to receive marketing when they comment on a social media post?" (Lawful basis)
We have also added a new section on automated decision-making, with the question "Would the assignment of a ‘Quality Indicator Score’ constitute profiling?".
11 April 2024 - We have added a new section about PECR.
21 August 2023 - We have added "Who is the controller for information entered on an external mobile application?" to the Controllers or processors? section.
- Is it personal data?
- Controllers and processors
- Lawful basis
- Is this direct marketing?
- Automated decision-making
- Generative AI
- Restricted transfers
- Special category data
- Cloud storage
- Data sharing
- Effective anonymisation
- Subject access requests
- Data clean rooms
- Biometric data
- Children’s data
- Privacy and Electronic Communications Regulations (PECR)
- Facial recognition technology
Is it personal data?
Q. Is a number plate - also known as a vehicle registration mark (VRM) - considered personal data?
Context: The business is planning to use drones and other video camera technology to monitor road traffic in order to measure rates of pollution. This would involve capturing VRMs.
Answer: Yes - a VRM is personal data if it can be combined with other information that then distinguishes and allows for the identification of an individual.
The purpose for which the VRM is collected and used will determine whether it is considered personal data. For example, if the business monitored the length of time cars spend inside and outside of defined spaces, this may require linking the VRMs with other information and thus singling out individuals. Even if the business did not have access to, for example, the DVLA registered keepers database, it is still likely to be able to identify individual vehicles, their age and where they were registered.
Next steps:
- Consider the definition of personal data under UK GDPR.
- Consider if the VRM can be combined with other information that allows for an individual to be identified.
- Consider the purpose of processing the VRM.
- Decide if VRMs are anonymous in the context of your purposes.
- Decide if you need to complete a DPIA.
Additional advice:
If the information is made available to third parties, the purpose of the processing by the third party will determine whether the VRMs are personal information.
Q. Could information gained from a paperless parking permit look-up system be classed as personal information?
Context: A local authority operates a paperless parking system. They want to also set up a system to allow residents to check whether a vehicle is permitted to park within their local parking zone.
Answer: To use the look-up system, a resident will enter the vehicle registration mark (VRM), and the system will then confirm whether or not that vehicle is permitted to park in that area. Our view is that the information provided by itself would not allow for identification of an individual. This is because, in isolation, whether or not a vehicle is currently permitted to park in a specific area does not provide any information about an individual. For example, someone could live in an area but not have applied for a permit, or there could be multiple drivers of a vehicle. The user of the system could not determine who the vehicle’s owner is from the information provided.
However, the users of the system may be able to combine this information with other information they have access to. For example, they may recognise the driver or owner of the vehicle if they see them exiting or entering it. This could allow residents to identify and/or single out someone who is parking in an area where they do not have a permit.
Organisations that want to use a system like this should consider how likely it is that an individual could be identified from other information available to them. If it is reasonably likely that the driver/owner of a car could be identified, then the system may provide information relating to an identifiable person, and as such the organisation should ensure they comply with the UK GDPR.
Additionally, organisations using similar systems should be clear about the aims of their system, and build in safeguards to ensure personal information provided is not misused. For example, if the information could be used to inform enforcement action, the organisation needs to be clear about the risks this could pose, such as whether information provided by users of the look-up system is accurate.
Next steps:
Controllers and processors
Q. Are the providers of deep learning AI solutions processors or controllers?
Context: A UK business (Business ABC) wanted to know whether a third-party supplier offering a deep learning AI solution to them would be considered a processor or a controller for the purpose of providing an artificial intelligence led service to their customers.
Answer: If the third-party supplier (the AI developer) acts under ‘Business ABC’s’ instruction they are a processor.
If the AI solution provider processes personal data for any reason other than what they have been instructed to do, or where domestic law requires them to do so, then they will be a controller or joint controller for this processing. For example, where they process personal data to build another model.
Next steps:
- Review our guidance on controllers and processors.
- Create clear roles and responsibilities for the different processing activities within the contract between you and the AI developer.
- Notify individuals of the processing activities through clear and easy-to-access privacy information to make sure transparency requirements are being met.
- Review the lawful basis being relied on.
- Review how individuals can exercise their rights.
- Consider the possibility of de-identification techniques before personal data is shared with the AI developer.
Q. Is my organisation a controller or processor?
Context: The product being developed is a new online directory service, connecting service providers with clients and prospective clients. They want to know whether they would be a controller or processor for the information collected from and on behalf of the service providers.
Answer: The directory service involves collecting information from individuals who want to contact service providers and facilitating communication between them. Whether the organisation acts as a controller, joint controller, or processor depends on who makes decisions about the collection and use of personal information. Where the provider of a directory acts on the instructions of the service providers they are listing, the directory provider is a processor. Where the service providers and directory provider make decisions jointly, they are joint controllers. If the directory creator makes decisions independently, they are a controller. These roles carry different responsibilities under data protection law.
For example, if the service providers decide what client information is collected and how it is used, the organisation creating the directory would be a processor. In this case, the organisation must ensure service providers understand their responsibilities, such as determining the lawful basis for data collection, retention periods, and usage. Similarly, for internal messaging between clients and service providers, the organisation that created the directory would likely be a processor because they only facilitate communication without using the information themselves.
However, if the organisation collaborates with the service providers on what information to collect, they would be joint controllers and must document responsibilities clearly.
If the directory creator makes decisions alone about what information to collect and how to use it, they are a controller and must comply fully with data protection obligations. This includes responding to rights requests, securing information, and reporting breaches. They would also be a controller for any information collected about the service providers for the directory creator's own purposes, or for analytics on service users.
Conversely, they are not responsible for any information exchanged directly between clients and service providers outside of their platform.
To ensure compliance, organisations should review each type of information they handle, document their reasoning, and draft contracts outlining roles and responsibilities. Finally, organisations should note that they still have legal duties under data protection law if they are a processor. They should ensure that they understand these responsibilities so that they can comply with the legislation.
Next steps:
- Review our guidance on controllers and processors.
Q: Who is the controller for information entered on an external mobile application?
Context: The organisation is in the health sector and wants to make use of external mobile applications to assist with monitoring patients’ chronic conditions and making clinical decisions about them. The organisation advises they have limited control over what personal information is entered onto the applications.
Answer: It is likely that the organisation will be a controller for the information a patient enters onto a mobile application if they are recommending the use of these to patients, or agreeing to view and use the information entered onto them for clinical purposes. The organisation has made the decision to make use of the apps, and will be using the personal information entered upon them to make decisions about the patients as a result.
The relationship between the organisation and the provider of the app could come under a number of different controller relationships. Whether an organisation is a controller, joint controller or processor will depend on the specific use of personal information.
If the app provider is not using the information for their own purposes, and is merely processing it on behalf of another organisation, then they will be a processor, even if they have determined some non-essential means of processing.
If the app provider is looking to use the information for their own purposes, this may be either a controller-to-controller relationship, or they may be joint controllers with the organisation. To be joint controllers, the app provider and the organisation will need to be processing for the same or closely linked purposes, and have jointly determined how and why they are using the personal information. If they do not have jointly determined purposes and means, but the app provider intends to use the personal information for their own purposes, then they would be a separate controller.
Organisations should ensure they have assessed the risks and benefits of the use of any external applications and will need to ensure the appropriate data sharing agreements or contracts are in place, depending on whether the app provider is a processor, controller or joint controller.
Next steps:
- Review the ICO’s guidance on controllers and processors.
- Consider whether a DPIA needs to be completed.
- Ensure users of the app are provided with adequate privacy information.
Q. Would a company that develops AI software be a controller or processor for information shared with it by a service user?
Context:
The organisation would like to use an AI service to help their clients. The AI service would like to use the information shared by the organisation to improve the AI system. The AI service believes they are joint controllers with the organisation, so they would be able to use the information in this way. However, the primary organisation believes that the AI service is a processor.
Answer:
Organisations can be controllers, joint controllers or processors depending on their role in the use of personal information.
Organisations should consider to what extent the purposes of the processing have been jointly determined, and to what extent any purposes of processing are separate and distinct. If an organisation and their AI service provider have made a joint determination as to the processing activities, they may be joint controllers.
If, however, the organisation is determining how the information shared is used, and the AI service is acting under instruction, the AI service provider is likely to be a processor.
If the AI service provider wants to use the information shared to train their model, and the original organisation is not involved in this, the AI service provider would be a separate controller. They would need to ensure this processing is lawful, and in compliance with the UK GDPR.
Whichever roles the organisation and AI service provider occupy should be set out in agreements drafted before any sharing takes place. Controllers and processors should have a written contract in place. If data sharing is taking place between two controllers, a data sharing agreement is recommended. This should set out how the AI service provider can use the information shared with them for any processing which is beyond the scope of the controller/processor relationship.
Next steps:
- Read further guidance about controllers and processors
- Review our data sharing code of practice
Q: Who is the controller for dashcam footage hosted by the vehicle manufacturer?
Context: A car manufacturer wants to add functionality to the cameras on a vehicle and will host these recordings on their servers. The manufacturer has put restrictions on how often the new functions can be used by the customer, and has decided how long they will keep the recordings before they are deleted. The manufacturer, however, does not have a purpose for accessing the footage, and wants to know whether they or their customer would be the controller for these recordings.
Answer: A controller is the entity which is making decisions about the means and purposes of the data processing. This includes decisions such as setting retention periods, deciding how information will be stored, and ensuring appropriate security measures are implemented.
As the manufacturer is setting the retention periods, and putting limitations on the use of the cameras, it is likely that they will be a controller for the recordings.
An individual who processes personal data for ‘purely personal or household activity’ will not be subject to UK GDPR, and therefore will not have controllership obligations. As such, if a customer is using the dashcam footage only for personal use, the UK GDPR will not apply, and they will not be a controller.
There are situations in which a customer could be a controller for a dashcam recording. Controllership will depend on what they do with a particular recording, and whether this goes beyond ‘purely personal’ usage. For example:
- If a customer posted the footage online so it was publicly available (available beyond their friends and family), this would no longer be purely personal use, and the UK GDPR would apply.
- If the vehicle isn’t just used privately, and is used to carry out business, the UK GDPR would apply to the business’s processing of the dashcam footage.
If a customer were to become a controller, their purposes would likely be separate and distinct from the manufacturer. This would mean they are likely to be separate controllers, not joint controllers, or controller and processor.
As the manufacturer is likely to be the controller, they should think about what safeguards they can put in place to minimise any risks that their cameras’ new functionalities pose.
Next steps
- Review our guidance on controllers and processors
- Review our guidance on surveillance in vehicles
Q. Is the developer and supplier of an AI system a controller or processor?
Context: The organisation is developing products that can be used to create transcriptions from calls. The organisation will hold call recordings within their system, create the transcripts, then delete the recordings. The transcripts will be held until the client requests they are deleted.
Answer: Controllers are organisations that determine the purposes and means of processing personal information. Processors are organisations that process personal information on behalf of a controller. They can only process personal information in line with a controller’s instructions, unless it is required to do otherwise by law. If they use personal information for their own purposes, they become a controller for that personal information.
Processors can make some decisions about certain technical aspects of the processing. In this case, the organisation will decide the retention periods for the audio files once they have been used to create the transcripts. While this organisation has established these terms, their client is likely to still be the controller, as they are choosing to use the service on this basis.
If the organisation decided to make changes to these retention periods, or other aspects of the processing, without the agreement of their client, it is likely they would become a controller for this data, and have responsibilities for compliance with UK GDPR.
The organisation should keep their controller/processor relationships under review, and consider if they are still accurate should there be any significant changes to the processing.
However, the organisation would be the controller for any information they receive to help train their AI model, as this is processing they would undertake for their own purposes. For each processing activity, the organisation should have suitable written agreements in place with their clients that fully outline the roles and responsibilities of each party.
Next steps:
- Read our guidance on controllers and processors
- Ensure contracts in place comply with our guidance on contracts
- Review our data sharing code of practice where applicable
Q. What are the controller-processor relationships in a multi-organisational project?
Context: The organisation is part of a multi-organisational team aiming to protect vulnerable service users. The organisation creates dashboards using information collated from other organisations, and then shares these to ensure help is provided where it is needed. The organisation that commissioned this work considers itself a joint controller, but the organisation carrying out the work does not agree. This is because the commissioning organisation does not receive any personal information as part of the dashboards.
Answer: It is up to the organisation to determine whether they are a controller, joint controller, or processor. Controllers (alone or jointly) make decisions about how and why personal information is used. Processors act on the instructions of controllers. They can make some "non-essential" decisions about how information is used, such as what IT systems or security to use; however, the main decisions are made by the controller.
In a situation where one organisation commissions work from another, they need to review where decisions are made, not where personal information is used. The organisations should identify between themselves where the decisions are made, and whether the commissioned organisation has freedom to determine why and how personal information is used.
The organisation should review the contract they have and discuss with the organisation that commissioned them so each party is aware of their roles and responsibilities. Even if the contract does not specifically state the controller/processor relationship, it may contain elements indicating how much freedom the organisation has to make decisions about how and why information is used to create the dashboards they produce.
Next steps:
- Review our guidance on controllers and processors.
- Although they do not apply to the UK directly, the EDPB guidelines on controllers and processors give detailed information about how to determine joint controllership
Lawful basis
Q. What lawful basis should we rely on and what privacy enhancing techniques should we use when collecting, processing and storing employee diversity data?
Context:
The organisation is planning voluntary employee diversity monitoring surveys in-house. The surveys will not collect any directly identifiable information (eg employee names), but the organisation is concerned that, if the information collected is combined with information it already holds about employees, it might identify them. As some of the data collected will be special category data, the organisation wants to ensure it uses the right lawful bases and understands how to effectively anonymise or pseudonymise the data.
Answer:
Lawful basis
If the organisation can demonstrate that they meet the conditions for valid consent, they can rely on consent as their Article 6 lawful basis, and explicit consent as the condition for the processing of special category data.
However, there is a power imbalance in the employer-employee relationship. Employees may feel obligated to agree to the processing as they’re concerned about negative impacts if they don’t. Organisations need to be conscious of this when relying on consent to process employee information and take steps to ensure that the employee does not feel any pressure to consent. They should also allay any concerns over the consequences of refusing consent.
If the organisation is unable to demonstrate valid consent, it could rely on legitimate interests as the Article 6 lawful basis and substantial public interest as the condition for processing special category data.
Choosing which lawful basis applies depends on your specific purposes and the context of the processing. For example, your purpose may relate to a legal obligation or performing your public tasks.
Privacy protections
Our draft guidance on anonymisation explains that data protection law does not require anonymisation to be completely risk-free. However, you must be able to mitigate the risk of re-identification until it is sufficiently remote that the information is ‘effectively anonymised’.
Anonymisation means that individuals are not identifiable and cannot be re-identified by any means reasonably likely to be used. Anonymisation processes should take into account the concept of identifiability in its broadest sense, and should not simply focus on removing obvious information that clearly relates to someone. The removal of direct identifiers such as a name or an identification number is insufficient to ensure effective anonymisation.
If the organisation can identify an individual by combining the survey information with other data it holds, the information is pseudonymous, not anonymous. This means the survey data is subject to data protection legislation, and the organisation should consider encrypting the information and using privacy enhancing technologies (PETs). These could include pseudonymisation techniques to reduce the risk of the processing, meaning the organisation could use the data for purposes such as statistical analysis.
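As an illustration of the kind of pseudonymisation technique mentioned above, the sketch below (in Python, with assumed field names and a hypothetical key-management arrangement) replaces a direct identifier with a keyed hash before analysis. This is only one possible approach, not a recommended or required method, and the pseudonymised data would still be personal data because the organisation retains the means to re-identify individuals.

```python
# Illustrative sketch only: one possible pseudonymisation approach (keyed hashing).
# The field names and the key-management arrangement are assumptions for the example.
import hmac
import hashlib

# The key should be generated securely and held separately from the survey data,
# with access restricted to those who genuinely need it.
SECRET_KEY = b"replace-with-a-key-held-separately-from-the-survey-data"

def pseudonymise(employee_id: str) -> str:
    """Replace a direct identifier with a consistent pseudonym.

    The same employee_id always maps to the same token, so responses can be
    linked for statistical analysis, but the token cannot be reversed without
    the secret key.
    """
    return hmac.new(SECRET_KEY, employee_id.encode("utf-8"), hashlib.sha256).hexdigest()

survey_response = {"employee_id": "E12345", "department": "Finance", "ethnicity": "Prefer not to say"}

pseudonymised_response = {
    "respondent_token": pseudonymise(survey_response["employee_id"]),
    "department": survey_response["department"],
    "ethnicity": survey_response["ethnicity"],
}
print(pseudonymised_response)
```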
Next steps
Lawful basis
- Review the ICO’s consent guidance and make sure consent and explicit consent are freely given with no pressure or consequence; or
- Review the ICO’s guidance on legitimate interest and document the reasoning, possibly through a legitimate interest assessment (LIA).
- Review our guidance on lawful basis for processing.
Privacy protections
Q. Are legitimate interests and substantial public interest an appropriate lawful basis and condition for processing special category data for biometric multi-factor authentication?
Context:
The organisation would like to change their lawful basis from consent to legitimate interests, and their condition for processing special category data from explicit consent to substantial public interest. They would like to do this to reduce the risk of fraud to their customers.
Answer: Organisations must determine their lawful basis before they start processing, and should not swap to another basis without a good reason. If there is a genuine change in circumstances, or a new and unanticipated purpose, which means there is a good reason to review the lawful basis, an organisation must be able to document and justify this.
Before using legitimate interests as a lawful basis, we recommend that organisations complete a legitimate interest assessment (LIA).
In order to rely on legitimate interests, an organisation must undertake a three-part test:
- Identify a legitimate interest;
- Demonstrate that the processing is necessary to achieve it (ie it is a targeted and proportionate way of achieving your purpose); and
- Balance the identified legitimate interest against the interests, rights and freedoms of individuals.
The organisation must be able to satisfy all three parts of the test before they start relying on legitimate interests.
In this case, the recitals of the UK GDPR state that fraud prevention could constitute a legitimate interest.
However, an organisation needs to demonstrate that the use of biometric MFA is proportionate and adequately targeted in order to pass the ‘necessity’ test.
In order to rely on substantial public interest as a condition for processing special category (in this case, biometric) data, an organisation must demonstrate its necessity and why consent is not suitable. This means making specific arguments about the wider benefits of the processing rather than a vague or generic public interest argument. An organisation could consider, for example, how the public benefits from the processing and the number of people who benefit from it.
Next steps:
- Review our guidance on legitimate interests
- Complete a legitimate interest assessment
- Identify an appropriate substantial public interest condition
Q: Would consent be an appropriate lawful basis for the use of artificial intelligence (AI) operated cameras in care home bedrooms?
Context:
An organisation wants to install cameras in the private rooms of care home residents. These cameras will use AI to alert staff to falls or potential falls. They would like to rely on consent and explicit consent in order to do this.
Answer: Valid consent has to be a freely given, specific, informed and unambiguous indication of an individual’s wishes. Therefore, consent can only be valid if people are given a genuine choice, and are not placed at a disadvantage if they refuse.
Where consent by a resident is refused, the cameras could be turned off in that particular room. However, any visitors or staff members entering private rooms would also need to agree to the use of the cameras. If staff refuse consent they may not be able to perform their duties and if visitors refuse they may not get to visit the resident. Therefore, these groups would be placed at a disadvantage if they refuse consent.
The cameras would capture information about the health of residents, as well as biometric data of individuals. Therefore, the organisation would also need to have a separate condition for processing special category data – in this case, explicit consent. In order to rely on explicit consent, they need to demonstrate that the use of the cameras to monitor residents for falls is necessary and proportionate. As with consent, explicit consent must be clear and freely given. Due to the power imbalance between the organisation and the residents, staff, and visitors, it is unlikely that explicit consent would be appropriate.
If an organisation decides not to rely on consent or explicit consent, they would need to identify a different lawful basis and condition for processing special category data. However, in order to rely on these the organisation would need to demonstrate that the use of these cameras in this way is necessary and proportionate. This means that if they can reasonably monitor their residents for falls and accidents using a less intrusive means they would not have a lawful basis for using the cameras.
For example, one alternative lawful basis could be legitimate interests. This can be used where it is in the legitimate interests of the organisation to use cameras in this way, and the interests, rights and freedoms of the residents, staff and visitors do not override those interests.
Therefore, the organisation could only rely on this if they can demonstrate that:
- the use of the cameras is a necessary and proportionate way of monitoring their residents; and
- the use does not infringe on the rights and freedoms of those captured by the cameras.
Next steps:
- Review our guidance on consent and explicit consent
- Read more about other lawful bases
- Don’t forget to identify a separate condition for processing special category data
Q. What is an appropriate lawful basis for flagging information about vulnerable individuals?
Context:
The organisation is looking to use data clean room technology to help their clients share information about vulnerable customers. This would help these organisations exercise their duty of care obligations more effectively.
Answer:
If an organisation is considering relying on consent when handling customer information, they must make sure the consent is freely given, specific and informed. An individual must understand what they are agreeing to. If they do not understand, the consent would not be valid. As this question relates to vulnerable customers, there is a chance that some customers may not be able to provide valid consent.
Where there is a statutory obligation to protect vulnerable customers, an organisation could rely on the ‘legal obligation’ lawful basis. This applies where the obligation is laid down by UK law, but does not mean it has to be an explicit statutory obligation. Where there are regulatory requirements that have a statutory basis underpinning the obligation, this would qualify as a legal obligation.
Where an organisation is able to balance their interests against those of vulnerable individuals, they could rely on legitimate interests. The organisation should perform a balancing test to demonstrate that they have considered the interests, rights and freedoms of the individuals. They should document this in a legitimate interest assessment (LIA).
Where the information includes special category data, organisations also need an additional condition under Article 9 of the UK GDPR. In the case of information about vulnerable individuals, it is likely that some special category data, such as details about health, would be used. Conditions that an organisation could use include explicit consent or substantial public interest, for example. There is, however, no equivalent to legitimate interests included in this list. For criminal offence data, a similar obligation to identify an additional condition applies under Article 10.
Next steps
- Review our guidance on lawful basis.
- Where required, complete our LIA template.
Q: We want to run a trial of age estimation technology, can we do this with legitimate interests as our lawful basis?
Context: The organisation wants to run a ‘shadow trial’ of age estimation technology in two of their premises before rolling it out more widely. The trial will run in the background, and won’t be used to make any decisions about individuals. There will still be human verification of an individual’s age during the shadow trial.
Answer: Organisations should undertake and document a three-part legitimate interests assessment before processing begins, in order to determine if it is an appropriate lawful basis. This can be done as a standalone document, or as part of a Data Protection Impact Assessment (DPIA).
Firstly, there must be a legitimate interest. This interest does not need to be very compelling and can be a commercial interest. The organisation will have a legitimate interest in trialling their age estimation software to evaluate how effective it is.
Secondly, the processing must be necessary to achieve the legitimate interest. This means the processing must be a targeted and proportionate way to meet the organisation’s interest, and one that cannot be achieved in a less intrusive way. The organisation should consider the number of locations at which they deploy the technology and the amount of time it is active, and be satisfied that this would process the minimum amount of data needed for their trial to be effective.
Finally, they must perform a balancing test, where the interests, rights and freedoms of the individuals are taken into account, and it is checked that these do not override the organisation’s interests. The processing should not be unexpected or have any unjustified impact on individuals. The organisation should thoroughly consider the possible impacts of the processing, and must be confident their interests are not overridden by the risks to individuals in order to pass the balancing test.
As age estimation technology is not commonly used, it is important to provide privacy information in a way that is clear and easily understood. The organisation could consider using a layered approach.
Should the trial progress to a live trial, or to full deployment, the organisation should consider whether they would be legally required to complete a DPIA, and whether Article 22 would apply to the processing.
Next steps
- Review our guidance on how to complete a legitimate interest assessment.
- Review our guidance on when to do a DPIA.
- Review our guidance on Article 22 of the UK GDPR: rights related to automated decision-making, including profiling.
Q. What consent would an organisation need from their customers in order to use a third party supplier to generate call transcripts?
Context: An organisation uses a third-party supplier of AI software that generates call transcripts for their clients. They want to know what consent would be required from customers in order to use the transcription service.
Answer: In order for consent to be valid, it must be “freely given, specific, informed” and the individual must make a clear affirmative action to agree to the processing.
This means that the organisation is unlikely to be able to gain consent via an agreement with their privacy policies, as to be ‘freely given’, consent generally cannot be bundled up as a condition of service unless it is necessary for that service. An individual must be able to refuse consent, and still be able to use the services. The consent request would need to be separate from the privacy policies, and the organisation would not be able to use the transcription service where an individual has not consented.
The consent the organisation requires from their customers will depend on the type of information they are collecting. If they are collecting information about criminal offences, or special category data, such as health information, then they will need an additional level of consent – explicit consent.
Explicit consent likely requires additional steps, such as:
- it must be confirmed in a clear statement (whether oral or written), rather than by any other type of affirmative action;
- it must specify the nature of the special category data; and
- it should be separate from any other consents being sought.
The organisation should ensure that their customers are aware that a third party will be processing their information and why. The customers should be made aware of the third party’s role, what their rights are, and how they can withdraw consent.
Consent is not the only lawful basis that may be available to the organisation, and may not be the most appropriate basis for the processing related to creating the call transcripts. It is the organisation’s responsibility to ensure they have considered all the lawful bases, and Article 9 conditions where necessary, and determined which would be the most appropriate before they use the transcription services.
Next steps:
- Read our guidance on consent
- Determine whether you need a separate condition for special category or criminal offence data
- Review our guidance on other lawful bases, including legitimate interests
Q. What lawful basis and Article 9 condition would be most appropriate for an AI review system that aims to identify and remove harmful content?
Context: The organisation is hoping to launch software that would help protect children from creating, viewing or sharing harmful content. Parents and children would both have to consent to the use of the software. Initial versions of the software would not allow for human review when decisions are disputed. The organisation is concerned that deletion of created content could infringe intellectual property legislation.
Answer: Under Article 22 of the UK GDPR, organisations cannot make a decision about an individual using solely automated means if it has a legal or similarly significant effect on them.
A legal or similarly significant effect relates to a decision that affects a person’s legal status or legal rights. Infringement of other legislation, such as Intellectual Property legislation, would not necessarily affect an individual’s legal status or rights as covered by the UK GDPR. However, we recommend that organisations obtain independent legal advice if they are concerned about the application of laws that we do not regulate.
Organisations should note that depending on how the child uses their device, deletion of material may reach the threshold for legal or similarly significant effect. For example, if deletion of material leads to discrimination or exclusion, it could be considered a “similarly significant effect”. Organisations should also consider factors such as whether the child would suffer any financial loss, and whether the deletion would disproportionately impact on any of the child’s rights such as freedom of expression, or ability to access information.
There are exceptions under Article 22 that would allow an organisation to make decisions based on solely automated processing. These are where the individual has explicitly consented to the processing, the processing is necessary for a contract, or it is authorised by law. However, if an organisation relies on consent or contract, they still need to offer individuals the opportunity to contest a decision and have a human review it. Where there is a risk a human reviewer could view illegal content (for example, child sexual abuse material), this might not be possible.
No matter how decisions are made, organisations must ensure they consider the best interests of the child throughout the product’s development. This means all decisions and explanations should be age appropriate, and different controls may be required, depending on the age and understanding of the user.
Next steps:
- Read our guidance on automated decision-making and profiling.
- Review our age appropriate design code.
Q. Would legitimate interests or consent be the most suitable lawful basis for collecting information about club members?
Context: The organisation is an association that runs a variety of local clubs. Each member club pays a subscription on behalf of their members to be part of the organisation. The organisation collects some contact information for members that log into their website to use additional services. They want to start collecting additional information about members, so that they can contact them directly when required.
Answer: When collecting and using personal information about members, organisations must choose an appropriate lawful basis under UK GDPR. If data is collected directly from members, this is straightforward, but if it is obtained via clubs, data sharing agreements may be necessary. Different uses of personal data can require different lawful bases, so it’s important to assess each purpose individually and document decisions to demonstrate compliance.
In this scenario, legitimate interests and consent are likely to be the most appropriate lawful bases to choose from.
Legitimate interests allows processing when it is necessary for the organisation’s aims, provided individuals’ interests, rights and freedoms do not override those aims. To rely on this basis, a legitimate interest assessment (LIA) should be completed, covering the purpose, necessity, and balancing test. Transparency is key, so the legitimate interest must be clearly defined and explained in the privacy notice. Organisations should note that if legitimate interests is used as a lawful basis, and their correspondence involves direct marketing, individuals have an absolute right to object.
If legitimate interests is not suitable, consent may be used, but it must meet strict requirements. It must be freely given, specific, informed, and unambiguous. Members should have a genuine choice without disadvantage for refusal, and consent should be granular, ie separate for each distinct purpose. A single consent cannot cover multiple unrelated uses of information. Organisations should ensure that each use is clearly communicated to individuals so they are able to provide informed consent. Each decision made by the individual should be clearly documented to demonstrate compliance.
No matter which lawful basis an organisation chooses to rely on, if they plan on sending marketing materials, such as messages promoting their organisation or events to members, they also need to comply with PECR.
Regulation 22 relates to marketing emails. These cannot be sent to individuals unless they have either given explicit consent, or are existing customers who previously used similar products or services from the organisation. In the latter case, individuals must be offered a clear opt-out option both at data collection and in every message.
There is a “soft opt-in” exemption that currently applies only to commercial marketing, meaning not-for-profit organisations must obtain consent for promotional emails. However, future changes may allow charities to rely on this exemption.
Next steps:
- Read our guidance on consent and legitimate interests.
- Review the regulation 22 PECR requirements.
Q. Is consent required to gather large amounts of CCTV footage to test an AI system?
Context: The organisation wants to test ‘smart CCTV’ systems to reduce harm from dangerous events (eg a fight breaking out, or an accident) in cafes and bars. Before the systems can be deployed, the organisation wants to test the algorithms, and plans to gather existing CCTV footage or recreate events using actors to do so.
Answer: When processing personal data under the UK GDPR, organisations must identify a lawful basis under Article 6. While consent is one option, it is often challenging to manage, especially if individuals withdraw consent, requiring deletion of data. For consent to be valid, it must be freely given, specific, informed, and unambiguous. This means providing clear information about who is using the footage, its purpose, and ensuring individuals can refuse or withdraw consent without detriment. Consent must involve a clear affirmative action (opt-in); simply posting signs and allowing opt-outs would not meet UK GDPR standards.
An alternative basis is legitimate interests, which is most appropriate when processing is expected, has minimal privacy impact, or serves a compelling purpose. To decide if this basis is appropriate, the organisation can apply a three-part test:
- Identify the legitimate interest (e.g., testing smart CCTV functionality);
- Show processing is necessary and proportionate; and
- Balance the organisation’s interests against individuals’ rights and freedoms.
The organisation must be able to demonstrate that the use of CCTV footage is a targeted and proportionate means of achieving their purpose. They must also consider whether individuals would reasonably expect the processing, and whether it would cause unjustified harm. They should avoid filming in areas with high privacy expectations and document their assessment. Using actors in staged scenarios could also fall under legitimate interests rather than consent.
The organisation must also consider whether footage may include criminal offence data (eg, evidence of fights), or special category data. Processing this data requires additional safeguards under the UK GDPR. Organisations should note that there is no equivalent to legitimate interests under Article 9 of the UK GDPR.
Next steps:
- Read our guidance on consent and legitimate interests.
- Understand the additional requirements for criminal offence data and special category data.
Q. Can mobile phone ‘signatures’ be collected to estimate the number of people in an area without consent?
Context: The organisation wants to use a device that scans for MAC addresses. The device does not collect the full addresses; it records only the last two bytes of each address to check whether that address has already been identified in a scan. The device deletes this information after each scan.
Answer: Personal data refers to any information relating to an identified or identifiable individual. Identifiability means being able to distinguish one person from another using identifiers such as names, ID numbers, or other details. Under the UK GDPR, “online identifiers” are explicitly included in the definition of personal data, which means MAC addresses can qualify as personal data.
In this case, only the last two bytes of each MAC address are recorded during scans, to identify if a MAC address has already been counted. Whether this constitutes processing personal information depends on the context and whether the partial MAC address, alone or combined with other information, could identify an individual. Even without knowing a person’s name, if a MAC address is used to track a device for the purpose of singling out or treating someone differently, it is still considered personal information.
Organisations should remain aware of the risk of direct or indirect identification and the associated data protection obligations. Conducting a Data Protection Impact Assessment (DPIA) can help assess risks and is mandatory in some cases. Note that many devices use MAC address randomisation to prevent prolonged identification, which may affect both the accuracy of repeated counts and whether partial MAC addresses qualify as personal data.
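For illustration only, the following Python sketch shows the kind of counting approach described in the context above: keeping only the last two bytes of each observed MAC address to de-duplicate within a single scan, then discarding even those fragments once the count is produced. The scanning component, address format and names used are assumptions for the example, and because two bytes can only take 65,536 values, different devices may collide, so the counts are estimates.

```python
# Illustrative sketch of the counting approach described above, not a statement
# of how any particular device works. It assumes MAC addresses arrive as
# "AA:BB:CC:DD:EE:FF" strings from some scanning component (not shown).
def estimate_occupancy(observed_macs: list[str]) -> int:
    """Estimate the number of devices seen in one scan.

    Only the last two bytes of each address are kept, and they are discarded
    as soon as the count is produced, mirroring the deletion after each scan.
    """
    partial_ids = set()
    for mac in observed_macs:
        last_two_bytes = "".join(mac.split(":")[-2:]).upper()  # eg "EEFF"
        partial_ids.add(last_two_bytes)
    count = len(partial_ids)
    partial_ids.clear()  # nothing is retained between scans
    return count

print(estimate_occupancy(["aa:bb:cc:dd:ee:ff", "AA:BB:CC:DD:EE:FF", "11:22:33:44:55:66"]))
# -> 2 (the first two entries share the same last two bytes)
```

Whether fragments like these are personal data still depends on context, as the answer above explains, including whether they can be combined with other information to single someone out.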
Where an organisation concludes that the partial MAC addresses constitute personal data, they must identify a lawful basis for processing them. Consent is one option, but it is not always required or appropriate. Legitimate interests may be suitable, provided the organisation completes a three-part assessment to confirm:
- A legitimate interest exists for the processing.
- The processing is necessary and proportionate to achieve that interest.
- Individuals’ rights and freedoms do not override the organisation’s interests, and they could reasonably expect the processing.
For initiatives like Smart Town projects, depending on the purpose, “public task” could also be considered as a lawful basis.
Next steps:
- Review further information about what personal information is.
- Read our guidance on lawful bases.
Q. How can we ensure that we have received appropriate consent from consumers to receive marketing when they comment on a social media post?
Context: The organisation wants to provide consumers with affiliate links in direct messages when individuals comment a specified word or phrase on a social media post. The information under the post would advise individuals to comment a specific word or phrase if they want to receive the links. These links would allow individuals to purchase the products they are interested in, whilst allowing creators to earn commission.
Answer: When sending direct marketing to social media users, organisations must comply with both PECR and UK GDPR. Direct marketing refers to advertising or marketing material directed at specific individuals.
A direct message on a social media platform providing affiliate links will count as “electronic mail”, and under PECR the organisation must usually have consent if the mail was unsolicited. Whether a message is solicited is based on who initiated the contact, and what information is sent. If a user clearly requests specific product information, and the organisation provides only that, it may be solicited marketing. If the organisation provides them with extra information that they did not request, or provides information about additional promotions, this would be unsolicited marketing.
Regardless, messages must be fair, transparent, and include an opt-out option. Organisations should note that the “soft opt-in” option applies only to existing customer relationships, not new contacts on social media.
If there is an automated system for identifying when social media users have commented the required wording, the organisation must make sure that it only picks up users who have intended to request a direct message. Comments that include the specified word or phrase but are ambiguous as to whether they want to receive the links would not meet the standard of consent. Consent must involve a clear positive act, and users must know who is sending the message, on whose behalf, and how to withdraw consent. Posting a specific form of words could be a clear positive act, as long as there is no doubt about what the user is agreeing to.
Organisations must provide privacy information at the point of data collection, explaining who they are, why they are using the information, and users’ rights. PECR also requires that marketing messages clearly identify the sender and any other company involved, and include a valid address for opt-out requests. This applies whether messages are solicited or unsolicited. People have the right to object to direct marketing, and organisations must respect this.
Next steps:
- Read our guidance on direct marketing.
- Review the requirements for consent.
- Identify the information needed to comply with the transparency principle.
Q: Would telling customers about our online fraud prevention tool be direct marketing, and therefore fall under the scope of the Privacy and Electronic Communications Regulations (PECR)?
Context:
The organisation has created a browser extension that verifies their clients’ websites as being legitimate, to protect customers against fraud. Their clients are not sure how they can let their customers know about the tool, as it may constitute direct marketing under the Privacy and Electronic Communications Regulations. The organisation also wants to know about any other compliance requirements under PECR.
Answer:
Direct marketing
Whether a communication falls under the definition of direct marketing will depend on the method by which the message is sent to a customer, and the tone, content and context of the message itself.
Direct marketing must be “directed to” particular individuals or categories of people. As such, adverts or messaging shown indiscriminately to all users of a website would not constitute direct marketing, and the marketing rules would not apply. Organisations can therefore advertise products in this way without needing to consider the marketing rules under PECR.
If the communication will be directed towards particular individuals or categories of people, eg via email, telephone or text, then the content of the message itself will determine whether PECR applies. If the message is neutral in tone, and presents a range of options that customers can take to protect themselves online, then it is unlikely to constitute direct marketing, provided there are no other elements of the communication advertising other products and services. If, however, the communication focuses largely on a particular product and encourages customers to buy or use it, it is likely to be direct marketing.
If the communication will be direct marketing, you will need to be aware of the marketing rules that apply to the type of marketing you wish to carry out, and ensure you have appropriate consent.
PECR Regulation 6 requirements
Aside from regulating direct marketing practices, the PECR also regulates the use of cookies and similar technologies that either store or gain access to information on a user’s device.
If your product will be using these technologies, you are required to tell your users what cookies or technologies are present, explain what they are doing and the purposes for these, before storing or accessing any information on their device.
You will also need to gain consent from your users to access or store information on their device if this is not strictly necessary to provide the function of your extension. Strictly necessary means that it must be essential, and limited to what is essential, to provide the service that the product offers; it does not cover any other uses that you may wish to make of the data. This consent must be gained, by means of a clear affirmative action such as an opt-in, before any information is stored on or accessed from their device.
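For illustration only, the sketch below (in Python, with hypothetical cookie names and a hypothetical consent flag) shows the consent-gating logic this implies: strictly necessary storage goes ahead, but non-essential storage only happens after the user has opted in. It is not a statement of how any particular extension or website should be built.

```python
# Illustrative sketch of consent-gating for non-essential storage, using a
# generic set_cookie callback rather than any particular web framework.
# Cookie names and the consent flag are assumptions for the example.
from typing import Callable

STRICTLY_NECESSARY = {"session_id"}          # needed to deliver the service itself
NON_ESSENTIAL = {"analytics_id", "ab_test"}  # require prior opt-in consent under PECR

def apply_cookies(user_has_opted_in: bool, set_cookie: Callable[[str, str], None]) -> None:
    """Set strictly necessary cookies always; set non-essential ones only with consent."""
    for name in STRICTLY_NECESSARY:
        set_cookie(name, "value")
    if user_has_opted_in:  # recorded via a clear affirmative action, eg an un-pre-ticked box
        for name in NON_ESSENTIAL:
            set_cookie(name, "value")

# Example usage: collect the cookies that would be set for a user who has not consented.
issued: dict[str, str] = {}

def record(name: str, value: str) -> None:
    issued[name] = value

apply_cookies(user_has_opted_in=False, set_cookie=record)
print(issued)  # only the strictly necessary session cookie is set
```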
The PECR and the UK GDPR work alongside each other, so it is important to be aware of any data protection obligations you may also have. If users of your online service can be singled out using information such as their IP addresses, cookie identifiers or MAC addresses, either on their own or in combination with other information, then your processing must also comply with UK GDPR. This is the case even if you cannot link the user to a named, real-world individual.
Next steps
Privacy and Electronic Communications Regulations
- Review the ICO’s Direct marketing guidance and make sure appropriate consent is sought for marketing activities.
- Review the ICO’s guidance on Cookies and similar technologies.
Personal data
- Review the ICO’s guidance on What are identifiers and related factors?
Automated decision-making
Q. Would the assignment of a ‘Quality Indicator Score’ constitute profiling?
Context: The organisation is conducting a research project using a range of publicly available personal information about practitioners in a regulated industry. They will use this information to create ‘Quality Indicator Scores’. These will feed into an anonymised report that explores variation across the sector.
Answer: Profiling under the UK GDPR refers to automated processing of personal data to evaluate or predict aspects of an individual’s behaviour, such as work performance, interests, or reliability. This project involves collecting data on individuals, analysing past work behaviours, and generating a “quality indicator score” to predict future performance. This activity meets the definition of profiling because it uses computational analysis to assess and predict future behaviour, ie how well a professional is likely to perform.
As the organisation will be profiling individuals, they must consider whether Article 22 of the UK GDPR applies. Article 22 restricts decisions based solely on automated processing, including profiling, if these decisions have legal or similarly significant effects on individuals. If profiling lacks meaningful human involvement and significantly impacts individuals, Article 22 applies, and processing can only occur under one of three exceptions: necessity for a contract; authorisation by law; or explicit consent from the individual.
Next steps:
- Read our guidance on automated decision-making and profiling.
Generative AI
Q. Would “legitimate interests” be a suitable lawful basis when using generative AI systems to help draft responses to clients and prospective clients?
Context: The organisation wants to use a generative AI tool to draft responses to emails, which will then be reviewed by a member of staff. In some cases these emails may contain special category data.
Answer: To rely on legitimate interests, organisations must demonstrate that the use of generative AI tools is necessary for the purposes of the legitimate interests they have identified, except where the legitimate interest is overridden by individuals’ interests, rights or freedoms. This requires the organisation to demonstrate that the use of generative AI tools is proportionate, and would not infringe on the rights of their clients or prospective clients.
Consent may also be an appropriate lawful basis where organisations have a direct relationship with the individuals whose information they want to process. When relying on consent, an organisation must ensure that it is freely given, specific and informed, and involves an unambiguous opt-in. They would also need to make it easy for individuals to withdraw consent at any time.
If processing contains special category data, the organisation must ensure it has a second condition for processing in place, as required by Article 9 of the UK General Data Protection Regulation (UK GDPR). Based on the use of the generative AI tool in this case, a suitable Article 9 condition could be explicit consent.
As well as the conditions required for consent outlined above, explicit consent also requires specific confirmation through a clear statement that is separate from any other consents. We would recommend that an organisation looking to rely on consent and explicit consent does not include the explicit consent request within their contracts. Consent should also not be a requirement of using the service.
Next steps:
- Review the ICO’s blog on generative AI, and the guidance on AI and data protection and explaining decisions made with AI.
- Read our guidance on legitimate interests, consent, and special category data.
Q. We want to develop a speech transcription service for use in our organisation, using an open-source artificial intelligence (AI) model. Can we do this even though we don’t have detailed information about how the model was trained?
Context: The organisation wants to develop a speech transcription tool for their internal use. They want to develop their tool using an open-source AI model developed by another provider. The provider is not acting as a processor or joint controller for the organisation.
Answer: Data protection law does not prevent an organisation from using an open-source AI model even if they are unclear on how exactly the model was trained. However, we recommend that appropriate due diligence checks are carried out and documented.
An organisation is responsible for complying with data protection legislation for any personal information it uses. It is not responsible for any processing carried out by a third party, unless that third party was acting as a processor under its instructions, or they were acting together as joint controllers.
Organisations using AI models need to satisfy themselves that any model they use or develop complies with the data protection principles, in particular accuracy and fairness. They need to be able to explain to individuals how their use of the AI model complies, and ensure that individuals can easily exercise their rights under data protection law.
Organisations must also satisfy themselves that any personal information they process is kept secure. They should carry out a security assessment, and ensure that they are aware of any identified vulnerabilities, for example by subscribing to security advisories, and checking that patching and updating processes are in place.
Where organisations intend to share personal data with providers of AI models, it is important that they follow rules around data sharing. In particular, they must follow rules around international transfers where they are sharing personal data with anyone based outside of the UK (including cloud providers).
Next steps:
- Review our guidance on processors and controllers.
- Review our guidance on accountability and governance in AI.
- Review our guidance on security, including cyber security.
- Review our guidance on data sharing.
Q. Do we need to identify a new lawful basis to use an AI tool for meeting recordings, taking notes and creating draft documents?
Context: The organisation wants to use an AI business tool for meeting recordings, summarising, taking notes, and creating draft documents. They are not using the tool for any new purposes or activities, or to help make decisions about people. They are not sharing any personal data with the provider of the tool that the provider will use for its own purposes, for example, to train the model further.
Answer: Organisations must identify a lawful basis for each of the different purposes for which they use people’s personal data. The organisation is not using the AI tool for any new purposes or for any new processing activities, and as such, they can continue to rely on the lawful bases they have already identified for each processing activity. Similarly, where they are already processing special category data, they do not need to identify a new condition for using the AI tool to help them.
If an organisation decides to use an AI tool for new processing activities, or for new purposes, they must break down and separate each distinct processing activity, and identify the purpose and lawful basis (and condition, where relevant) for each one.
It is important that people are given sufficient information about how and why their personal data is used. Where an AI tool is used, privacy notices should be updated to inform people about how it is being used, and for what purposes. If organisations want to use people’s information in new ways, they should inform them about this before they start the processing.
Personal data must be accurate, and where necessary, kept up to date. When using an AI tool, organisations must check that the outputs are accurate, and take steps to correct any inaccurate information. Where organisations intend to use an AI tool for decision making purposes, it is also important that they are aware of restrictions on automated decision making and profiling.
Where organisations share personal data with the provider of an AI tool, for example, to help to train the model further, they must ensure that the data sharing is compliant with data protection law. They must identify an appropriate legal basis, and special category condition where necessary, for the data sharing.
Next steps:
- Review our guidance on lawful basis
- Read about the accountability and governance implications of AI.
- Read our guidance on automated decision-making and profiling.
Q. What are the rules on international (restricted) transfers when using third-party suppliers based in the USA?
Context: The organisation provides a suite of revision services to students, and is looking to work with two companies (Company A and Company B) based in the USA to deliver these services. Company A provides access codes to the organisation, which passes these to the students. The students then use these codes to set up a profile with Company A to access the revision services. The organisation will share personal information about their clients directly with Company B.
Answer: Even though personal information is not provided directly to Company A by the UK organisation, this would still be a restricted transfer. This is because the UK organisation’s customers enter into a contract with them only, and they remain the controller for the customer’s information. It is the organisation’s choice to use Company A’s services, not the customers’.
As the information is transferred directly from the organisation to Company B, this would also be a restricted transfer.
As the USA is not currently covered by an adequacy agreement, appropriate safeguards are required when completing these transfers. Organisations must consider the safeguards and exceptions available under data protection legislation before agreeing to send personal information to countries not covered by an adequacy agreement. One way of ensuring these safeguards are in place is to use standard data protection clauses. These are clauses included in contracts between two organisations that impose obligations on both organisations to ensure personal information is protected. These can be imposed through the use of international data transfer agreements (IDTAs).
The first step in using standard data protection clauses is to complete a transfer risk assessment. This should help determine whether the personal information transferred will continue to be protected in line with UK data protection rules. It should be noted that transfers should only be used where necessary. If there is a way of achieving the same outcome without transferring personal information (eg by using anonymised information), this approach should be taken instead.
Next steps:
- Review the guidance on international transfers, and international data transfer agreements
- Complete a transfer risk assessment
Note: This advice was written before qualifying transfers to the USA were covered by the UK Extension to the EU-US Data Privacy Framework. More information about this can be found in our guidance on international transfers. The advice given here would still apply to information organisations wish to transfer to third countries and organisations not covered by adequacy regulations.
Q. Does processing personal data of overseas employees of third party UK organisations count as a restricted transfer?
Context: The organisation processes payment data from a number of organisations and is looking to update their platform. For most processing activities the organisation acts as a processor. However, they act as a controller where data is collected about employees of organisations using their system. Although the new system will be for UK organisations, some employees may be based outside the UK.
Answer: Where an organisation receives personal information from a third party, transfer rules under the UK General Data Protection Regulation (UK GDPR) (Article 44) do not apply. The rules contained in the UK GDPR regarding the transfer of personal data apply only when personal information is transferred by a controller or processor to a separate organisation located outside of the UK.
In this case, a UK organisation will share employee details with the organisation that submitted the question for the purposes of accessing the primary organisation’s services. If the UK organisation’s offices located in the third countries are part of the same legal entity, and the UK GDPR is in scope because the processing activity is an activity of a UK establishment, it is unlikely to be a restricted transfer.
Next steps:
- Review the ICO’s guidance on international transfers
Q. Could payment data be seen as special category data where it relates to payments to or from certain categories of organisation (eg political party, health organisation or trade union)?
Context: The organisation processes payment data from a number of financial organisations and is looking to update their platform.
Answer: Where payments are made to or from organisations, such as health organisations or trade unions, it may be possible to infer details about a person related to special category data. However, this would only count as special category data if the inference can be drawn with a reasonable degree of certainty, and where it is deliberately made. We would also consider this special category data if the inferences are used to treat an individual differently.
In the case of payment data as described to us here, there is not enough information collected to say with certainty what the payments are for, and the organisation is not deliberately seeking to make inferences about individuals, nor does it influence their activities in any way. Therefore, this is not special category data, and there is no need to identify an Article 9 condition to use payment data.
Next steps:
- Review our guidance on special category data
Q. Are there any measures you should consider when storing data in the cloud?
Context: The organisation processes payment data from a number of organisations and is looking to update their platform.
Answer: The ICO has published cloud computing guidance which outlines what an organisation should consider when using a cloud service. The ICO has also published updated guidance on security and encryption requirements under the UK General Data Protection Regulation. This will help organisations understand their responsibilities when storing information in the cloud, and provides guidance on security and effective encryption requirements.
There are also requirements for the use of cloud systems under the Network and Information Systems (NIS) Regulations. These regulations are designed to address the threats posed to network and information systems, and in doing so ensure that the digital economy can function efficiently.
Any organisation that offers something-as-a-service should review the NIS Regulations to see whether they need to comply with them.
Next steps:
- Read the cloud computing guidance
- Review the ICO’s security guidance
- Identify whether you need to comply with the NIS Regulations
Q. Can personal data be shared with a company for the purpose of improving the performance of an artificial intelligence model?
Context: The organisation that submitted this request is looking to use an artificial intelligence (AI) powered Microsoft Word Add-In that can help with a variety of tasks. They want to know whether they would be able to share information with the provider to improve the model, rather than the model simply learning as the software is used. The information that may be shared could include both special category data and criminal offence data, so is particularly sensitive.
Answer: By data sharing we mean the disclosure of data from one or more organisations to a third party organisation or organisations.
Data protection law facilitates data sharing when it is fair and proportionate.
- The accountability principle means you are responsible for compliance, and must be able to demonstrate compliance.
- Personal data must be shared fairly and transparently.
- You must identify at least one lawful basis for sharing information before you start any sharing.
Before sharing personal data, an organisation needs to ensure that the information they share will only be used for a specified purpose, that it will be stored securely, and that it will not be kept for any longer than necessary.
If an organisation is able to anonymise the information, or remove identifiable information from the documents shared, then they should do so. This would minimise the personal data shared and therefore reduce any risks.
If an organisation is unable to effectively anonymise the information, or needs to include personal data in the information they share, the ICO’s data sharing code of practice recommends the use of ‘data sharing agreements’. These set out the purpose of the data sharing, cover what happens to the data at each stage, set standards and help all the parties involved in sharing to be clear about their roles and responsibilities.
Next steps:
- Read our data sharing code of practice
Q. How much detail should be revealed when sharing information in data clean rooms?
Context:
The organisation is looking to use data clean room technology to help their clients share information about vulnerable customers. This would help these organisations exercise their duty of care obligations more effectively.
Answer:
When looking to share information, organisations should consider what is necessary to achieve their purposes. This should be in line with the data minimisation principle, which states that personal data should be: “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”.
When using data clean rooms to share information between organisations about vulnerable customers, the lists each organisation shares should be kept separate, and each organisation should only be able to access its own list of customers. Instead, the data clean room will confirm, when queried, whether the organisations hold information about the same customer.
However, the organisation operating the clean room should be mindful of the risks that customers could be identified, such as by linking the vulnerability data with other available information held by the organisation. Organisations should also be mindful of risks of inferring identity. For example, if an individual had a unique set of characteristics that would allow for them to be singled-out.
Risks of linking the data and inferring identity should be mitigated through the adoption of appropriate technical and organisational measures. Suitable techniques may include private set intersection or homomorphic encryption. These will help reduce the risk of identification and keep the data held secure, which will, in turn, help you comply with the data protection principles.
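To make the data minimisation idea behind these techniques concrete, below is a minimal, illustrative Python sketch of keyed-hash matching between two organisations’ customer lists. It is a simplified example rather than a full private set intersection or homomorphic encryption scheme (which provide stronger guarantees); the function names, shared key handling and identifiers are hypothetical assumptions.

```python
import hmac
import hashlib

def blind(identifiers, key):
    """Replace each raw identifier with a keyed hash (HMAC-SHA256).

    Only these blinded values would ever be loaded into the clean room,
    so the operator never handles raw customer identifiers.
    """
    return {
        hmac.new(key, ident.strip().lower().encode(), hashlib.sha256).hexdigest()
        for ident in identifiers
    }

def overlap(blinded_a, blinded_b):
    """The clean room reports only the overlap, never either full list."""
    return blinded_a & blinded_b

# Hypothetical usage: each organisation blinds its own list with a key
# agreed for this specific comparison, and only matches are revealed.
shared_key = b"per-project secret agreed between the parties"
org_a = blind({"alice@example.com", "bob@example.com"}, shared_key)
org_b = blind({"bob@example.com", "carol@example.com"}, shared_key)
print(len(overlap(org_a, org_b)))  # 1 customer in common
```

A genuine private set intersection protocol avoids even exchanging keyed hashes that could be brute-forced; the sketch only illustrates the principle of revealing matches rather than full lists.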
Next steps:
- Read our guidance on the data minimisation principle
- Review our guidance on encryption and anonymisation, pseudonymisation and privacy enhancing technologies.
Q. Would current data sharing agreements be sufficient if the intended recipients are not listed under the existing agreements?
Context:
The organisation would like to use information from a shared multi-organisation database to create risk rankings, and share these with a third party, to help protect vulnerable service users. These rankings will not be linked to individual names, but rather to “Unique Property Reference Numbers” (UPRNs). There are current sector-specific data sharing agreements in place for the information on this database. However, the proposed recipients are not listed in these agreements.
Answer:
The UK GDPR defines personal data as: “any information relating to an identified or identifiable natural person”. Therefore, even though individuals are not identified by name, the rankings shared would still constitute personal data, as individuals could be identifiable.
When considering if current data sharing agreements would cover the creation and sharing of risk rankings, organisations need to assess whether the new use is compatible with its previous use. This is in line with the “purpose limitation” principle. This principle states that information must be “collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes”.
Organisations should carry out a compatibility assessment to determine whether they can use the information on the database to calculate risk scores. If the information they use in these calculations is received from other organisations, it would be good practice to inform them of this change.
When sharing personal information with third parties, it is good practice to have a data sharing agreement in place. The agreement should identify all the organisations that will be involved in the data sharing, and contain procedures for including additional organisations in the data sharing.
If the agreement an organisation has in place does not adequately cover the new sharing, or the organisations they would like to share with, the agreement should be reviewed and updated accordingly.
Our data sharing code of practice outlines the steps you should take and the things you should consider when sharing personal data with third party organisations. This includes a template and checklist for data sharing agreements.
Next steps:
- Read our guidance on the definition of personal data
- Review our purpose limitation guidance
- Read our data sharing code of practice
Q. Is legitimate interests an appropriate lawful basis when sharing information to set up an alert system for vulnerable service users and their families?
Context: The organisation wants to set up an alert system for vulnerable service users of member firms. It would provide organisations with the opportunity to perform additional vulnerability checks. The alerts will contain the category of vulnerability, and will be based on the customer’s address, not their name. The alert would last for 50 days.
Answer: From the information provided, it appears legitimate interests could be a suitable basis. However, before processing begins organisations should undertake and document a three-part legitimate interests assessment:
- Firstly, organisations must have a legitimate interest. This interest does not need to be very compelling, and can be a commercial interest. In respect of this processing, the organisation has an interest in protecting vulnerable customers.
- Secondly, the processing must be necessary to achieve the legitimate interest. This means the processing must be a targeted and proportionate way to meet the interest, and not able to be achieved in a less intrusive way. Considering data minimisation and storage limitation is important here.
- Finally, organisations must perform a balancing test, where they take into account the interests, rights and freedoms of the customers, and check that these do not override the organisation’s interests. Organisations should consider the reasonable expectations of the customers, and the likely impact this will have on them. As the customers are vulnerable, the organisation should consider whether the severity of any impact on them may be greater.
The organisation needs to be conscious that the alert could stop or delay a genuine request, which could result in harm. They should consider any other impacts the processing may have, and assess the severity of the harm they may cause. These should then be balanced against the benefits of the processing, and any safeguards that could be put in place considered.
Organisations must be satisfied that there won’t be any unjustified impact on the customer in order to pass the balancing test.
This project involves processing information about vulnerable individuals, possibly including special category data, which could lead to some individuals not being able to access services. We therefore recommend the completion of an impact assessment, which will help analyse, identify and minimise risks.
Next steps:
- Review our guidance on legitimate interests
- Complete a legitimate interests assessment
- Review our guidance on Data Protection Impact Assessments.
Q. How can we ensure that free text information is effectively anonymised so that it can be shared with researchers?
Context: The organisation is planning to share anonymised free text information with researchers. They plan to use technical measures to ensure at least 95% of labelled identifiers are removed, with manual checks and other measures in place to ensure that the remaining information is adequately anonymised. However, they would like advice on any additional measures that they can put in place.
Answer: When looking to anonymise personal data, organisations need to consider the likelihood of reidentification based on factors such as what the information is being used for, the cost and time required to identify someone, the technologies available, and the state of technological development over time. Therefore, in order to effectively anonymise information, organisations need to remove anything that could be used to identify a person. This could be their name, address, date of birth, or an identification number, for example.
Organisations should also consider whether they need to remove information about people’s appearance, mental capacity, or social identity. This information could be used to reidentify individuals, particularly where outliers may exist. Depending on how gender and ethnicity are defined in records, this could be used to single out individuals.
One way of reducing the risk of reidentification could be to group some information, such as age, so that exact values are not provided within the records. We recommend that organisations review how information that could be used to identify individuals could be aggregated or altered to mask identification.
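As a purely illustrative example of this kind of aggregation, the short Python sketch below generalises an exact age into a band and trims other quasi-identifiers; the field names are hypothetical, and which fields need generalising or removing depends on the dataset and its reidentification risk.

```python
def band_age(age, width=10):
    """Generalise an exact age into a band such as '30-39'."""
    lower = (age // width) * width
    return f"{lower}-{lower + width - 1}"

def generalise_record(record):
    """Return a copy of a record with exact values replaced by broader ones.

    Field names here are illustrative only; a real dataset would need its
    own assessment of which values allow individuals to be singled out.
    """
    out = dict(record)
    out["age"] = band_age(out["age"])
    out.pop("date_of_birth", None)                   # drop direct identifiers
    out["postcode"] = out["postcode"].split(" ")[0]  # keep outward code only
    return out

print(generalise_record({
    "age": 37,
    "date_of_birth": "1988-02-14",
    "postcode": "SW1A 1AA",
    "free_text": "notes about the practitioner...",
}))
# {'age': '30-39', 'postcode': 'SW1A', 'free_text': 'notes about the practitioner...'}
```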
As well as anonymising information, we recommend the use of “motivated intruder” tests, with trained staff members attempting to identify individuals from the anonymised records. Organisations should ensure that when these tests are carried out, their testers have access to all the resources an attacker would be able to access. This could include internal databases.
We recommend that organisations conduct regular checks on the effectiveness of their anonymisation process. New techniques for reidentifying individuals within datasets may be developed at any time, and new information is regularly made available. These checks will help keep processes up to date and effective.
Next steps:
- Read our guidance on anonymisation.
Q. Would sharing of anonymised special category data for educational purposes be considered compatible with previous uses of the information?
Context: The organisation has a partnership with an educational organisation that uses some information for research purposes. The organisation already uses this information for similar reasons in-house, but would like clarification as to whether this information could also be shared.
Answer: Where information is fully anonymised, it is no longer subject to data protection legislation, as it is no longer classed as personal data. An organisation would therefore not need to comply with data protection legislation in order to use anonymised information for educational purposes.
Where:
- sufficient measures are taken to anonymise information;
- the information is regularly tested and checked;
- the results of these tests are documented appropriately; and
- ICO guidance is followed when conducting these tests,
we would consider information to fall outside the scope of data protection legislation.
In order to ensure the information remains anonymous in the future, we recommend organisations implement appropriate technical and/or organisational measures to ensure the information cannot be extracted in any way.
All measures taken to ensure the information remains anonymous should be reviewed and updated regularly. This will help protect against the emergence of new threats, or the risk of identification within new datasets. Organisations should also assess the usefulness of new anonymisation techniques (such as synthetic data with differential privacy to further reduce risk to individuals) when they become available.
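As a hypothetical illustration of one such technique, the sketch below adds Laplace noise to an aggregate count, which is the basic mechanism behind many differentially private releases. The query, epsilon value and rounding are illustrative assumptions only; real deployments need careful calibration and expert review.

```python
import numpy as np

def dp_count(true_count, epsilon=1.0):
    """Return a differentially private count using the Laplace mechanism.

    A count query has sensitivity 1, so noise is drawn from
    Laplace(0, 1/epsilon); a smaller epsilon means more noise and a
    stronger privacy guarantee.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical usage: report how many records fall into a category
# without revealing whether any one individual is included.
print(round(dp_count(true_count=128, epsilon=0.5)))
```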
Next steps:
- Review our guidance on anonymisation.
Q. When can an organisation rely on exemptions for research, serious harm, or manifestly unfounded or excessive requests when responding to subject access requests (SARs)?
Context:
The organisation is planning to set up a trusted research environment (TRE) to support their research programme. They will collect a large amount of information from participants, which could include information about their health. They expect to engage with a large number of participants, and are concerned that the number of subject access requests they could receive could impact their ability to respond within the timescales set out in the legislation.
Answer:
The UK GDPR and DPA18 outline several exemptions that could be applied when an individual submits an information request.
There are specific restrictions on disclosing an individual’s health data as part of a SAR response, if this information is not already known to them. Health data can only be released after a suitable health professional has indicated that the individual will not suffer serious harm if the information is provided. This opinion must have been provided within the six months prior to the request.
There is an exemption to the right of access specifically related to research. This applies where providing information in response to the request would prevent or seriously impair the research in question. Organisations must ensure that appropriate safeguards for individual rights and freedoms are in place, and that they are not using the results to make decisions about individuals, before they consider using this exemption.
A request can also be refused if it is manifestly unfounded or excessive. Like the exemptions, organisations need to review each request on a case by case basis before they can refuse for these reasons.
A manifestly unfounded request is one that is made by someone who has made it clear that they do not really want to exercise their rights, or who has made a request just to harass or disrupt an organisation.
When deciding whether a request is excessive, the nature and context of the request should be taken into account. For example, a request that overlaps with or largely repeats a previous request may be considered excessive. A request for a large amount of information should not automatically be considered excessive.
Next steps
- Read our guidance on data protection exemptions
- Review additional information about manifestly unfounded or excessive requests
Q. Can data clean rooms be used to compare lists of vulnerable customers?
Context:
The organisation is looking to use data clean room technology to help their clients share information about vulnerable customers. This would help these organisations exercise their duty of care obligations more effectively.
Answer:
The use of a data clean room could help organisations comply with the data minimisation principle, and demonstrate the use of data protection by design and default.
The data minimisation principle states that personal data should be: “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”. Any organisation operating a data clean room should ensure that only the information necessary is shared between participating organisations.
Data protection by design and default means that an organisation should put in place appropriate technical and organisational measures to ensure the data protection principles are effectively met. It also means that organisations should integrate safeguards into their processing so that they meet the UK GDPR’s requirements and protect individual rights. Any organisation looking to develop clean room technology for this purpose should ensure that this use is built with data protection by design and default in mind.
When facilitating the sharing of information between multiple organisations, we recommend the use of data sharing agreements. These agreements outline why the data is being shared, what happens to the data at each stage, and set standards for the use of the data. They help clarify the roles and responsibilities for organisations in the data sharing process. By using data sharing agreements, the developer of a data clean room should be satisfied that those using the technology only use it in line with the data protection principles.
Next steps:
- Read our guidance on the data minimisation principle
- Understand the obligations of data protection by design and default
- Review our data sharing code of practice
Q. Would images captured by an AI enabled camera constitute biometric data?
Context:
The organisation would like to use AI software to flag when there has been an accident and an individual needs help. The distance between the cameras and the area being reviewed is large and the staff are required to wear PPE, so individual identification is unlikely. The software is designed to flag when a human-shaped object has been involved in an accident and will not be used to identify individuals.
Answer:
The UK GDPR describes biometric data as “data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person”. Examples of this include, but are not limited to, facial recognition or fingerprint data.
Images of people do not automatically count as biometric data, even if individuals can be identified. Images become biometric data after ‘specific technical processing’ is carried out. For example, where a template or profile is created to allow for the use of facial recognition technology.
As the images are not of good enough quality to identify individuals, and the purpose of the software is to help with rescues, not identification, it is unlikely that this would be considered biometric data.
As with all decisions organisations make about processing personal data, organisations using this software should document why they do not believe this use would constitute biometric data. If the use of this software changes in the future, this documentation should be reviewed and modified as required.
Next steps:
- Review our guidance on biometric data
Q. What would constitute a “compelling reason” for updating default settings for children?
Context:
An organisation is updating their recommender systems and would like guidance on whether factors such as oversight from adult account holders, editorial controls and regulatory oversight would constitute a “compelling reason” for turning the system on by default for children. There would still be an option to opt out, but the organisation feels that the application of the system would be in the best interests of the child and therefore comply with the Children’s Code.
Answer:
The first standard of the Children’s Code states: “The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.”
This means that any changes to services should consider the interests of child users in the first instance. For example, if the changes were designed to prevent access to inappropriate content, this could be considered in the best interests of a child.
The default settings standard states: “Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).” This means that if an organisation wants to turn on a system by default for children, they need to have a compelling reason to do so, in the best interests of the child. It would not be in a child’s best interests if a system was designed to keep them engaged longer, or to profile them for advertising purposes.
A “compelling reason” means a reason for processing personal data in a specific way that goes above and beyond general business reasons. For example, if turning on the system by default would help protect children.
Where a system may be accessed by children of various ages, consideration should be given to their different development levels and needs when developing new products. For example, teenagers will require a different level of explanation and autonomy than younger children.
We would not consider the factors mentioned in the “context” section above to constitute a “compelling reason” for a system to be switched on by default, for any age group.
Next steps:
- Review our Children’s Code, in particular sections on:
- Consider the supplementary advice in the best interests framework
Q. Can an artificial intelligence-produced greeting in a call handler’s voice be used in a marketing call without falling within the scope of Regulation 19 of PECR?
Context: The organisation is developing software that uses AI to synthesise a three to five second recording of a call handler’s voice. This would be played at the start of a call whilst it is established whether the call has been answered by a person or by an answering machine. Once a person has been identified on the line, the call handler would then take over the call. This would help comply with rules on silent calls.
Answer: Regulation 19 of PECR covers the use of automated calling systems. These are defined as systems that are capable of:
- automatically initiating a sequence of calls to more than one destination in accordance with instructions stored in that system; and
- transmitting sounds which are not live speech for reception by persons at some or all of the destinations so called.
The system the organisation described to us would fall under this definition.
This regulation states that organisations should not “transmit, nor instigate the transmission of, communications comprising recorded matter for direct marketing purposes by means of an automated calling system” unless they have consent from the recipient to do so.
This means that no part of a direct marketing call can include recorded material without consent from the individual receiving the call. As the marketing calls the organisation is proposing to make would contain some recorded matter, the use of this new software would not comply with Regulation 19 unless they have obtained specific consent.
We understand that the development of this system is to help ensure compliance with rules on silent calls. However, organisations must ensure that they are complying with all relevant regulations when making direct marketing calls.
Next steps:
- Read our guidance on telephone marketing
Q. Can the use of cookies for providing a reward service qualify the cookies as ‘strictly necessary’, exempting them from the consent requirement under PECR?
Context: The organisation uses cookies to track customer purchases in order to pay them rewards. The customers have signed up to this service with the organisation, and make purchases with participating merchants.
Answer: In order for a cookie to be "strictly necessary", the purpose for which it is used must be essential to provide the service the subscriber or user requests.
For example, when someone signs up to an online service offering cashbacks or rewards, it is likely that some cookies set by that service would be essential as the user has requested that service.
However, cookies that may be strictly necessary to provide one service are not automatically strictly necessary to provide another, different service.
Whether cookies used in the context of an online service that offers rewards or cashbacks meet the ‘strictly necessary’ exemption therefore depends on:
- how that service operates, including what cookies are set, by whom and in what circumstances;
- the different ways in which users can engage with that service;
- the arrangements the service may have with other parties (eg merchant or retail partners); and
- the processing operations of any cookies or similar technologies they use.
Where services partner with each other, it is important that they work together to determine their respective compliance obligations.
In addition, use of the cookie must be limited to the purpose which is necessary to provide the service. If it is used for other, secondary purposes then the exemption may not apply.
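To show how this limitation could translate into practice, here is a minimal, hypothetical Python sketch of a server-side check that only allows a non-essential cookie to be set where valid consent has been recorded. The cookie names and consent store are assumptions for illustration, not a statement of which cookies would actually qualify as strictly necessary.

```python
# Cookies assumed (for illustration only) to be essential to deliver
# the service the user has actually requested.
STRICTLY_NECESSARY = {"session_id", "reward_tracking"}

def may_set_cookie(name, user_consents):
    """Allow a cookie only if it is strictly necessary for the requested
    service, or the user has given valid, specific consent for that purpose.
    """
    if name in STRICTLY_NECESSARY:
        return True
    return user_consents.get(name, False)

# Example: a cookie used for a secondary purpose still needs consent.
consents = {"analytics": False}
print(may_set_cookie("reward_tracking", consents))  # True (assumed essential here)
print(may_set_cookie("analytics", consents))        # False until consent is given
```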
Next steps:
- Review the ICO’s guidance on Cookies and similar technologies
Facial recognition technology
Q. Is consent the most appropriate lawful basis for schools using AI driven media management?
Context: The organisation has developed a media management system for schools that uses facial recognition technology to ensure media is used appropriately. They will act as a processor for schools that use their system. They want to know when consent is appropriate in order to advise schools accordingly.
Answer: When schools publish photos and videos, they may do so for a variety of reasons and therefore a range of lawful bases may be suitable.
Where schools choose to rely on consent as their lawful basis, they should break down the different ways in which the school intends to use photos and videos and give choices about these. For instance, a school may offer separate choices around the use of photos in school newsletters, in publicity materials such as a prospectus, and in the media.
Where automatic facial recognition technology is used by the school, it is likely that the system will process special category biometric information. Schools must therefore identify a condition under Article 9 of the UK GDPR, as well as a lawful basis. Schools must usually obtain explicit consent when processing special category biometric data.
As the media management system contains an automatic facial recognition feature, schools need to be aware of the rules within the UK GDPR on the use of automated decision making.
These tools should not be used to make decisions that have a legal or similarly significant effect about an individual without review by a human, unless specific conditions apply. These conditions include where the individual has consented to the use.
There is no set definition of “legal or similarly significant effect” within the UK GDPR, but schools should give consideration to the safety of pupils, for example where there are specific measures in place to protect a child’s identity from disclosure.
Schools also need to be able to explain to students and parents how the tools are used. They should understand what information is used, how the system ensures identification is accurate, and what their rights are, among other things.
Next steps:
- Read our guidance on consent
- Understand the requirements when special category data, such as biometric information is used
- Ensure that appropriate safeguards are in place when using children’s information
- Read our guidance for schools on taking photographs
- Review our guidance on artificial intelligence and automated decision-making