This section is aimed at senior management and those in compliance-focused roles, including DPOs, who are accountable for the governance and data protection risk management of an AI system. You may require a technical specialist to explain some of the details covered in this section. 

Control measure: Appropriate and timely privacy information is provided to people to ensure they are sufficiently informed about how their personal information is processed by AI systems. 

Risk: If the processing is not clear to people, they cannot make informed decisions about their information or understand how the AI system affects them. Without defining what the AI system is used for and how personal information is used within it, there is a risk that incompatible processing takes place. If AI is in use and this is not communicated in privacy information, this may breach UK GDPR articles 5(1)(a), 12 to 15 and 22.

Ways to meet our expectations:

  • Provide information about AI systems that is clear and meaningful, explains the processing in everyday language, and avoids overly complex explanations of the AI, confusing technical terminology, jargon and legalistic language. 
  • Avoid hiding information within lengthy terms and conditions.
  • Be proactive when making people aware of the information and provide an easy way to access it. 
  • Use a combination of appropriate techniques, such as a layered approach, dashboards, just-in-time notices, icons and mobile and smart device functionalities.
  • Assess whether privacy information is easy to understand (eg by testing how well people understand the processing, or by conducting user testing to gain feedback on the content). 
  • Provide evidence that the underlying processing is within the reasonable expectations of people and does not have an unjustified adverse effect on them.
  • If decisions are made by AI without human intervention, provide extra details on: 
    • the logic of decision making involved; 
    • the significance of the processing; and 
    • the envisaged consequences of it. 
  • Inform people within one month when AI creates new information about them (which you intend to keep and use).
  • Use explainable AI techniques to understand and interpret the decision-making processes of machine learning models. This enhances transparency and helps identify which features contribute significantly to the model's predictions (see the sketch after this list).
  • Include details that enable people to challenge the outcome if they think it is flawed (eg if some of the input information was incorrect or irrelevant, or if information the person thinks is relevant wasn't taken into account).
  • Take steps to explain any trade-offs to people or any human tasked with reviewing AI output.
  • Provide privacy information if you collect special category information to monitor and prevent discrimination or unfair bias. Include the processing purpose, the lawful basis and the relevant special category condition in the privacy information.
  • Regularly review existing AI privacy information and, where necessary, update it appropriately.
  • Update privacy information and communicate the changes to people before starting any new processing, particularly if there are plans to use personal information for a new purpose within AI processing.
  • Train staff so they understand what privacy information to provide to people and how best to do so.
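
Where explainable AI techniques are used (see the bullet above), feature importance methods are one common way to identify which inputs drive a model's predictions. The sketch below uses permutation feature importance from scikit-learn; the dataset, model and feature names are hypothetical placeholders, not a method the ICO prescribes, and a technical specialist would adapt the approach to the actual AI system in use.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# The dataset, model and feature names below are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical feature names standing in for real input fields.
feature_names = ["age", "income", "tenure_months", "num_products"]

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score
# drops; a large drop indicates the feature contributes significantly
# to the model's predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

The ranked output gives a starting point for the plain-language explanations described above, for example "the length of time you have held your account had the largest influence on this decision".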

Options to consider:

  • Provide different explanations tailored to the intended audience that take into account the level of knowledge they have about the subject and are clear for them to understand.
  • Research sector-specific expectations and examples.

Useful links

ICO guidance on AI and data protection: How do we ensure transparency in AI?

ICO guidance: Explaining decisions made with AI


Control measure: Where personal information is obtained from or processed by other sources, all necessary parties can demonstrate compliance with the transparency requirements set out under UK GDPR articles 13 and 14 (unless a relevant exemption applies). 

Due diligence checks are completed by all parties to provide assurance that, at each stage of the supply chain, people have been informed about how their information will be used and that it will be passed along the chain.

Risk: If appropriate and timely privacy information has not been provided to people before commencing processing their information in AI systems, there may be a breach of UK GDPR articles 13 and 14.

Ways to meet our expectations:

  • Inform people if the purpose for using their personal information is different to what it was originally obtained for.
  • Include meaningful information about the logic involved, as well as the significance and the envisaged consequences of the processing.
  • Provide privacy information within a reasonable period of obtaining the information, and no later than one month.
  • Check and confirm what privacy information is provided across the supply chain and that it informs people about their information being shared with others. 
  • Be very clear with people about any unexpected or intrusive uses of their personal information, such as combining information about them from a number of different sources.
  • Carry out a DPIA to determine whether providing privacy information would involve a disproportionate effort when balanced against people’s rights and freedoms.

Options to consider:

  • Confirm that privacy information has been provided by (or on behalf of) any sub-processors or subcontracted business process outsourcing organisations (BPOs) (eg BPOs conducting human review checks on system accuracy outputs).
  • If the purposes for processing are unclear at the outset, update your privacy information and actively communicate this to people as the purposes become clearer.