
This section is aimed at senior management and those in compliance-focused roles. This includes data protection officers (DPOs), who are accountable for the governance and data protection risk management of an AI system. You may require a technical specialist to explain some of the details covered in this section. 

Control measure: There is a documented and embedded privacy management framework endorsed by senior management that supports the AI system’s development, use and oversight.

Risk: Without a privacy management framework in place, and without management focus on data protection when making decisions about the use of AI, there is a risk of a personal data breach caused by staff being unaware of their responsibilities. This may breach UK GDPR article 5(2).

Ways to meet our expectations:

  • Ensure the overall governance and privacy management framework supports the compliant use of AI systems.
  • Include appropriate technical and organisational measures in the framework that are designed to implement the data protection principles in an effective manner.
  • Evidence that senior management have seen and signed off the risks associated with using AI. 
  • Appoint a DPO, or a nominated data protection lead, with designated responsibility for overseeing AI systems.
  • Assign technical and operational roles and responsibilities to ensure the effective management and security of AI systems and personal information they hold.
  • Document in policy the privacy measures in place for information processing and for ongoing training, testing or evaluation of an AI system or service. 
  • Support policies with operational procedures, guidance or manuals to direct operational staff on using AI systems and applying data protection law.

Options to consider:

  • Assign responsibility for general oversight of AI systems to a steering group, committee, meeting or equivalent. 
  • Build a bespoke framework specifically for developing or using AI, alongside the main privacy management framework.
  • Test the framework regularly to ensure it remains fit for purpose, particularly as AI design and functionality become more complex or widely used.
  • Define roles and responsibilities in job descriptions, team structures and organisation charts.

 

Control measure: The purpose of the AI system and the most important criteria in the system specification and testing are considered and documented within a data protection impact assessment (DPIA).

Risk: Without an evolving and responsive risk management strategy, risk treatment can become static and out of date. If processing within AI systems takes place before a DPIA is completed, or before mitigating controls are put in place, there is a greatly increased risk of a personal data breach, as information is being processed without risk assessment or control. This may breach UK GDPR article 35.

Ways to meet our expectations:

  • Document the DPIA policy or process, with supporting templates and guidance to help staff complete an effective DPIA that meets the UK GDPR requirements (article 35).
  • Complete a DPIA before starting the processing.
  • Consult internal stakeholders, technical specialists within AI product teams and the public as part of the DPIA, as appropriate.
  • Share completed DPIAs with senior management and get sign-off on the outcome of the assessment.
  • Act on the outputs of the DPIA to effectively mitigate or manage any risks it identified.
  • Review DPIAs, particularly when there is a change to processing, to ensure they remain accurate and up-to-date.
  • Implement an effective risk management strategy to help formally document the risks of using AI systems. Ensure they are tracked and managed at a corporate level through an appropriate risk register.
  • Mitigate privacy risks effectively and in a timely manner through ongoing AI system development and enhancements.
  • Ensure there is proactive engagement between all parties as part of the procurement process to appropriately assess the risk of the AI system.

Options to consider:

  • Create risk assessment templates for staff to use so there is consistency and you meet all the legal requirements.
  • Build screening checklists so that developers can consistently assess the need for a DPIA; a minimal screening sketch follows this list.
  • Build the requirement for a DPIA into system design and development processes.
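
The following is a minimal, illustrative sketch of such a screening checklist in Python. The questions and the simple any-yes rule are assumptions for illustration only; base a real checklist on the ICO's DPIA guidance and your own processing.

```python
# Illustrative DPIA screening sketch: the questions below are assumed
# examples of common DPIA triggers, not a prescribed list.
SCREENING_QUESTIONS = [
    "Will the system make solely automated decisions with legal or similarly significant effects?",
    "Will it process special category or criminal offence information on a large scale?",
    "Will it systematically monitor people?",
    "Will it process children's information?",
    "Will personal information be used to train, test or evaluate the model?",
]

def dpia_needed(answers: list[bool]) -> bool:
    """A 'yes' to any screening question means a DPIA should be completed."""
    return any(answers)

# Illustrative usage: answers are given in the same order as the questions.
print(dpia_needed([False, True, False, False, True]))  # True: complete a DPIA
```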

 

Control measure: There is a programme of risk-based audits in place to periodically assess AI systems’ compliance with data protection law and internal privacy policies.

Risk: Without an audit programme, there may be a lack of assurance that risk management is sufficient or effective. If audit findings are not properly reported to oversight and governance bodies, they will not have the correct information to make the necessary decisions, potentially leading to data breaches. This may breach UK GDPR articles 5(1)(f) and 5(2).

Ways to meet our expectations:

  • Employ an external auditor to provide independent assurance (or certification) on your compliance with data protection law and information security.
  • Maintain a central audit plan.
  • Introduce a programme of internal audits to periodically assess AI systems’ compliance with data protection law.
  • Produce audit reports to document the findings from audits and share them with senior management.

Options to consider:

  • Use externally provided self-assessment tools to provide assurance on your compliance with data protection law.
  • Adhere to an appropriate code of conduct.
  • Complete routine compliance checks to test staff compliance with data protection policies and procedures.

 

Control measure: Change management processes are documented to ensure that new versions or change releases to AI systems are managed effectively by all parties.

Risk: If there is no effective change management in place, the release of a new system update could cause significant risk of a personal data breach or damage to personal information if a problem occurs during the release. This may breach UK GDPR articles 5(1)(f) and 5(2).

Ways to meet our expectations:

  • Implement measures to control the release of any changes or new versions of your system, software reconfiguration, or security patch applications.
  • Design and document an agreed communication plan so that all parties understand the impacts of the change(s) and are able to reassess any potential privacy implications.
  • Log all changes made, patches applied or new versions released (when, and to whom) and ensure historical information on these changes is easy to locate, if required. A minimal change-log sketch follows this list.
  • Investigate when overly frequent updates or releases are happening, as this could suggest a lack of internal checks or sign-off before each one.
  • Review contracts and contract service level agreements (SLAs), following any significant changes.
  • Plan version releases or changes in advance (including software reconfiguration, or security patch applications) to allow time to educate and train the deployer or client on what they mean in practice.
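
The following is a minimal sketch of the kind of change-log entry described above, written in Python. The field names are assumptions for illustration, not a prescribed format; the point is to capture what changed, when, to whom it was released and who signed it off.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ChangeLogEntry:
    """One release, patch or reconfiguration of the AI system."""
    change_id: str                 # internal reference for the change
    description: str               # what changed (new version, patch, reconfiguration)
    released_at: datetime          # when the change went live
    released_to: list[str]         # deployers or clients who received it
    approved_by: str               # who signed the change off before release
    privacy_impact_reviewed: bool  # whether privacy implications were reassessed

# Illustrative usage: append each release to a persistent, searchable log.
change_log: list[ChangeLogEntry] = []
change_log.append(ChangeLogEntry(
    change_id="CHG-0042",
    description="Security patch to the model-serving component",
    released_at=datetime.now(timezone.utc),
    released_to=["deployer-a", "deployer-b"],
    approved_by="Head of Engineering",
    privacy_impact_reviewed=True,
))
```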

Options to consider:

  • Delay the release of changes, new versions, reconfigurations or patches until appropriate consultation with deployers or clients has happened.
  • Assist deployers or clients with any updates to existing DPIAs.

 

Control measure: Information flows across the entire supply chain have been comprehensively mapped.

Risk: Without fully understanding how information is processed, neither provider nor deployer can assure themselves that they have an effective information governance regime in place. This may breach UK GDPR articles 5(1)(f), 5(2), and 30.

Ways to meet our expectations:

  • Implement a process to ensure all processing activities are documented accurately and effectively.
  • Conduct information audits (or information mapping exercises) to find out how information moves across the supply chain and where it originates from.
  • Maintain an internal record of all processing activities (ROPA); a sketch of a ROPA entry follows this list.
  • Highlight information flow processes within the privacy management framework and keep this documentation up to date.
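
The following is a minimal sketch of a ROPA entry held in electronic form, as a Python dictionary. The field names are assumptions for illustration; UK GDPR article 30 sets out the information a controller's record must actually contain.

```python
# Illustrative ROPA kept in electronic form so entries can be added,
# amended and searched easily. Field names are assumed for the sketch.
ropa = [
    {
        "activity": "Model training on customer support transcripts",
        "purpose": "Improve response suggestions in the AI assistant",
        "categories_of_people": ["customers"],
        "categories_of_information": ["contact details", "support history"],
        "source": "CRM export provided by the deployer",
        "recipients": ["internal AI product team"],
        "retention": "12 months after model release",
        "security_measures": "Pseudonymised before training; access via RBAC",
    },
]

def activities_using(category: str) -> list[str]:
    """List the processing activities that use a given category of information."""
    return [
        entry["activity"]
        for entry in ropa
        if category in entry["categories_of_information"]
    ]

print(activities_using("support history"))
```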

Options to consider:

  • Devise a questionnaire to distribute across the supply chain to identify where information originates from and under what lawful basis it is collected.
  • Meet directly with key business partners to gain a better understanding of how certain parts of the supply chain source and use information.
  • Document processing activities in electronic form so information can be added, removed and amended easily.

 

Control measure: The most appropriate article 6 lawful basis (and article 9 or 10 condition where required) is identified and justified for each information processing activity within the AI system.

Risk: Failing to choose an appropriate lawful basis risks the unlawful collection and use of personal information. As a consequence, people may lose trust in how their information is used and suffer from unfair processing. This may breach UK GDPR articles 5, 6, 9, and 10.

Ways to meet our expectations:

  • Review your processing activities to determine the most appropriate lawful basis (or bases) for each activity.
  • Document each lawful basis and the justification as part of a DPIA, before processing. Assess whether the processing is a targeted and proportionate way of achieving a specific purpose and whether you could achieve the purpose by some other less intrusive means, or by processing less information.
  • Determine an appropriate lawful basis, if you use personal information to train or develop any aspect of the AI before deployment. Follow the principles of data minimisation for these activities and do not repurpose this information for another use. 
  • Identify and document an additional article 9 or 10 condition, if processing special category or criminal offence information.
  • Assess whether any special category information is accidentally created or collected. If so, delete it.
  • Cease processing and delete any information if you are unable to identify an article 6 lawful basis and article 9 condition.
  • Document the lawful basis you are relying on for processing children’s information. Apply additional safeguards and security measures before processing.
  • If you rely on the outputs of an AI system to support marketing activities, ensure that you:
    • appropriately obtain the personal information (in line with your privacy notice); and 
    • use an appropriate lawful basis to process the personal information.
  • Review documented lawful bases to check that the relationship, the processing and the purposes have not changed.

Options to consider:

  • Train staff who make decisions about the most appropriate lawful basis or condition.

 

Control measure: A legitimate interests assessment (LIA) is undertaken where there is a reliance on legitimate interests as a lawful basis.

Risk: Reliance on an inappropriate lawful basis for processing results in a potential failure to fulfil the necessary requirements. This may breach UK GDPR article 6.

Ways to meet our expectations:

  • Ensure the LIA considers the following:
    • Whether you are using people’s information in ways they would find intrusive or that could cause them harm and, if so, whether there is a very good reason for doing so.
    • What extra steps to take when processing children’s information to make sure their interests are protected.
    • What safeguards to introduce to reduce the impact where possible.
    • Whether you can offer an opt out.
    • Whether a DPIA is required.
  • Consult with key technical staff such as system developers when completing the LIA.
  • Complete the LIA before starting the processing and document the decision and the assessment.
  • Periodically assess how the model is used to ensure the purpose remains the same and that the necessity and legitimate interests assessments remain valid.

Options to consider:

  • Consult with people to ensure they understand how legitimate interest applies to their personal information.

 

Control measure: Where consent is used as a lawful basis, consent mechanisms comply with article 7 requirements.

Risk: If the consent obtained is not valid, personal information may be processed unlawfully. This may breach UK GDPR articles 6 and 9.

Ways to meet our expectations:

  • Inform people about processing in the AI system before asking them to give consent (or consent may not be valid).
  • Ensure consent mechanisms are specific, granular, and separate from other terms and conditions or acceptances.
  • Provide clear, prominent, positive opt-ins that are freely given, are not a pre-condition of signing up to a service, and do not rely on pre-ticked boxes.
  • Ensure that, when you seek consent, you actively tell people how they can withdraw it.
  • Keep records or a log of when and how you obtained consent.
  • Review consent regularly to check that the relationship, the processing, and the purposes have not changed.
  • Implement processes to refresh consent at appropriate intervals.
  • Implement dynamic consent mechanisms that allow people to control and revise their consent preferences over time. 

Options to consider:

  • Use ‘just-in-time’ notices on screen at the point the person inputs the relevant information, with a brief message about what you will use the information for (while avoiding making it unnecessarily disruptive to using the service).
  • Use an appropriate cryptographic hash function to support information integrity for online consent; a minimal sketch follows this list.
  • Send people occasional reminders of their right to withdraw consent and how to do so, if you’re not in regular contact with them.
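
The following is a minimal sketch of the hashing option above, assuming a simple dictionary-based consent record. The field names are illustrative, and SHA-256 is just one suitable hash function.

```python
import hashlib
import json
from datetime import datetime, timezone

def consent_fingerprint(record: dict) -> str:
    """Return a SHA-256 hash of a canonical serialisation of a consent record.

    Storing the fingerprint alongside the record makes later alteration of
    what the person agreed to, when and how, detectable.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative usage: record the consent event and its fingerprint together.
record = {
    "person_id": "subject-123",
    "purpose": "Use of chat transcripts to improve the AI assistant",
    "notice_version": "consent-notice-v3",
    "obtained_at": datetime.now(timezone.utc).isoformat(),
    "method": "opt-in checkbox on the account settings page",
}
fingerprint = consent_fingerprint(record)

# Later: recompute and compare to check the stored record has not changed.
assert consent_fingerprint(record) == fingerprint
```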

 

Control measure: There is a comprehensive and effective approach in place to identify whether: 

  • personal information has been repurposed beyond its original purpose; or 
  • there has been a change in lawful basis within the information supply chain in order to build or train the underlying technology.

Risk: If there is an inappropriate change in the lawful basis for processing within the supply chain (eg information originally collected under consent is now being processed under legitimate interests), then there is a risk that this will be unlawful. Without due diligence or a review to identify the issue, personal information may be processed for purposes other than those for which it was collected. This may breach UK GDPR article 5(1)(b).

Ways to meet our expectations:

  • Check the purposes and lawful basis when information was collected and ensure they have not changed when developing the AI system.
  • Undertake due diligence when sourcing information indirectly to train your AI system to check:
    • under which lawful basis the information was originally collected;
    • what privacy information was provided to support its repurpose; and 
    • that there is no change in lawful basis when using the information to train the system.
  • Ensure due diligence checks include the entire information supply chain.
  • Check that if the information was originally collected under consent, the consent statement was clear and granular enough to permit you to use it to train the AI system (and that people are aware that their information will be used in this way).
  • Conduct a fairness and lawfulness assessment as part of the DPIA.
  • Establish comprehensive audit trails to log and monitor access to datasets. This includes tracking who accessed the information, when, and for what purpose.
  • Use role-based access control (RBAC) mechanisms to assign permissions based on staff roles and responsibilities; a minimal sketch pairing RBAC with an audit trail follows this list.
  • Ensure only authorised staff or systems have access to the specific datasets they need. This reduces the opportunity for repurposing the information. 
  • Employ encryption techniques to protect information both in transit and at rest. This ensures that even if unauthorised access occurs, the information remains unintelligible without the appropriate decryption keys. 
  • Apply anonymisation and pseudonymisation techniques to remove or obscure personal information in the datasets.
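
The following is a minimal sketch pairing RBAC with an audit trail, as described in the bullet points above. The roles, dataset names and dictionary-based permission map are assumptions for illustration; in practice you would normally rely on the access-control and logging features of your data platform.

```python
from datetime import datetime, timezone

# Illustrative role-to-dataset permissions (assumed names, not a standard).
ROLE_PERMISSIONS = {
    "ml-engineer": {"training-data"},
    "support-analyst": {"support-tickets"},
    "dpo": {"training-data", "support-tickets"},
}

audit_trail = []  # who accessed which dataset, when, and for what purpose

def request_access(user: str, role: str, dataset: str, purpose: str) -> bool:
    """Grant access only if the role permits it, and log every attempt."""
    allowed = dataset in ROLE_PERMISSIONS.get(role, set())
    audit_trail.append({
        "user": user,
        "role": role,
        "dataset": dataset,
        "purpose": purpose,
        "allowed": allowed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

# Illustrative usage: the second request is denied and both are logged.
request_access("alice", "ml-engineer", "training-data", "model retraining")
request_access("bob", "support-analyst", "training-data", "ad-hoc analysis")
```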

Options to consider:

  • Anonymise information at the earliest opportunity, before you use it to train the AI.
  • Implement data masking to replace sensitive information with fictitious or pseudonymous information during non-production processes. This ensures that the original information is not exposed in testing environments.
  • Integrate watermarking techniques to embed imperceptible markers in the datasets. This can help trace the origin of the information and identify any unauthorised use.
  • Use tokenisation to replace sensitive information with unique tokens. This ensures that the original information is not exposed during processing or analysis and so cannot be repurposed; a minimal tokenisation sketch follows this list.
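
The following is a minimal tokenisation sketch, assuming an in-memory vault. In a real deployment the token-to-value mapping would be held in a separately secured store, away from the tokenised dataset used for testing or analysis.

```python
import secrets

# Illustrative token vault: maps each random token back to the original value.
token_vault: dict[str, str] = {}

def tokenise(value: str) -> str:
    """Replace a sensitive value with a random token and record the mapping."""
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = value
    return token

# Illustrative usage: only the vault can map the token back to the original.
record = {"email": "jane@example.com", "query": "billing question"}
masked = {"email": tokenise(record["email"]), "query": record["query"]}
print(masked)  # the tokenised record can be used in non-production processes
```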

 

Control measure: Analysis has been completed to determine whether the results of solely automated decision making within AI systems could have legal or similarly significant effects on a person, and appropriate safeguards have been put in place. In addition, there is explicit consent in place where special category information is used to carry out solely automated decision making within AI systems.

Risk: Not carrying out adequate risk assessments to protect people could cause significant distress, impact their rights and freedoms, and breach UK GDPR article 22. Without a person’s explicit consent, or the ability to demonstrate that processing special category information is in the substantial public interest, the processing will be unlawful. This may breach UK GDPR articles 6, 9, and 22.

Ways to meet our expectations:

  • Assess whether the processing is solely automated.
  • Analyse the impact of decision making on people and detail all potential legal or similar effects.
  • Assess the likelihood that you will accidentally create special category information and use it to make a decision.
  • Ensure that any AI models you use do not unintentionally infer special category or criminal conviction information and use those inferences to make decisions about people. 
  • Delete any special category or criminal conviction information created as a result of automated decision making, if there is no appropriate article 6 lawful basis or article 9 condition. 
  • Ensure people have provided their explicit consent or assess whether the processing is necessary for substantial public interest reasons.
  • Apply additional safeguards to enhance the security of the information if you are using special category information (such as biometric data). 

Options to consider:

  • Seek the views of impacted groups or their representatives when analysing the impacts of automated decision making.
  • Use aggregated or anonymised information when there is no lawful basis to carry out automated decision making.

 

Control measure: There is a policy or process for dealing with people’s rights requests in the information processing pipeline.

Risk: Without a documented process which considers personal information within the processing pipeline and how information rights requests are handled during this time, there is a risk that the UK GDPR would be breached and the rights of people ignored. This may breach UK GDPR articles 12 to 22.

Ways to meet our expectations:

  • Publish guidance for people so they know how to make a request. 
  • Establish a well-organised model management system and deployment pipeline to make it easier and cheaper to accommodate requests.
  • Consider data indexing, tracing and making systems searchable as part of the system design, so you can respond to requests effectively within statutory timeframes; a minimal indexing sketch follows this list.
  • Monitor the time taken to respond to requests in order to identify systems which are potentially more complex.
  • Ensure AI systems have the technical capability to action any requests by people asking you to stop processing their personal information.
  • Ensure AI systems have the technical capability to action any requests by people asking you to erase their personal information from the system permanently.
  • Regularly and proactively evaluate the possibility of personal information being inferred from models in light of the state of the art. This minimises the risk of accidental disclosure.
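
The following is a minimal sketch of the indexing idea above: a searchable map from a person to everywhere their information is held, so access or erasure requests can be located quickly. The identifiers and dataset names are assumptions for illustration.

```python
from collections import defaultdict

# Illustrative index from a person identifier to the datasets and model
# artefacts where their information appears (names assumed for the sketch).
person_index: defaultdict[str, set[str]] = defaultdict(set)

def register_use(person_id: str, location: str) -> None:
    """Record that a person's information appears in a dataset or model artefact."""
    person_index[person_id].add(location)

def locate(person_id: str) -> set[str]:
    """List everywhere a person's information is held, to support access or erasure requests."""
    return person_index.get(person_id, set())

# Illustrative usage
register_use("subject-123", "training-set-2024-q4")
register_use("subject-123", "feature-store/contact-history")
print(locate("subject-123"))
```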

Options to consider:

  • Give people various ways to submit a request.
  • Automate responses within the system.
  • Give people access to the personal information in their profiles, so they can review and edit it to fix any accuracy issues themselves.