This section is aimed at senior management and those in compliance-focused roles, including DPOs, who are accountable for the governance and data protection risk management of an AI system. You may require a technical specialist to explain some of the details covered in this section.
Control measure: Appropriate and timely privacy information is provided to people to ensure they are sufficiently informed about how their personal information is processed by AI systems.
Risk: If the processing is not clear to people, they cannot make informed decisions about their information or understand how the AI system affects them. Without defining what the AI system is used for and how personal information is used within it, there is a risk that incompatible processing takes place. If AI is in use and this is not communicated in privacy information, this may breach UK GDPR articles 5(1)(a), 12-15 and 22.
Ways to meet our expectations:
- Provide information about AI systems that is clear and meaningful, explains the processing in everyday language, and avoids overly complex explanations of the AI, confusing technical terminology, jargon or legalistic language.
- Avoid hiding information within lengthy terms and conditions.
- Be proactive when making people aware of the information and provide an easy way to access it.
- Use a combination of appropriate techniques, such as a layered approach, dashboards, just-in-time notices, icons, and mobile and smart device functionalities.
- Assess whether privacy information is easy to understand (eg by assessing how well people understand processing, or conducting user testing to gain feedback on the content).
- Provide evidence that the underlying processing is within the reasonable expectations of people and does not have an unjustified adverse effect on them.
- If decisions are made by AI without human intervention, provide extra details on:
  - the logic of the decision-making involved;
  - the significance of the processing; and
  - the envisaged consequences of it.
- Inform people within one month when AI creates new information about them (which you intend to keep and use).
- Use explainable AI techniques to understand and interpret the decision-making processes of machine learning models. This enhances transparency and helps identify which features contribute significantly to the model's predictions.
- Include details to enable people to challenge the outcome if they think it is flawed (eg if some of the input information was incorrect or irrelevant, or additional information wasn’t taken into account that the person thinks is relevant).
- Take steps to explain any trade-offs to people or any human tasked with reviewing AI output.
- Provide privacy information if you collect special category information in order to monitor and prevent discrimination or unfair bias. Include the processing purpose and lawful basis and condition in the privacy information.
- Regularly review existing AI privacy information and, where necessary, update it appropriately.
- Update privacy information and communicate the changes to people before starting any new processing, particularly if there are plans to use personal information for a new purpose within AI processing.
- Train staff so they understand what privacy information to provide to people and how best to do so.
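One explainable AI technique mentioned above is permutation feature importance: shuffle one feature's values across the dataset and measure how much the model's error increases, which indicates how heavily the model relies on that feature. The sketch below is a minimal illustration using a hypothetical weighted-sum model and randomly generated data; the feature names, weights and dataset are assumptions for illustration, not a real system.

```python
# Minimal sketch of permutation feature importance (an explainable AI
# technique). The model, feature names and data are all hypothetical.
import random

random.seed(0)

# Hypothetical scoring model: a weighted sum over three input features.
WEIGHTS = {"income": 0.7, "age": 0.1, "postcode_risk": 0.2}

def model(row):
    return sum(WEIGHTS[name] * value for name, value in row.items())

# Made-up dataset and the target values the model is meant to predict.
data = [{"income": random.random(), "age": random.random(),
         "postcode_risk": random.random()} for _ in range(200)]
targets = [model(row) for row in data]

def mean_squared_error(rows, ys):
    return sum((model(r) - y) ** 2 for r, y in zip(rows, ys)) / len(rows)

def permutation_importance(rows, ys, feature):
    """How much the model's error grows when one feature is shuffled."""
    baseline = mean_squared_error(rows, ys)
    shuffled = [r[feature] for r in rows]
    random.shuffle(shuffled)
    permuted = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled)]
    return mean_squared_error(permuted, ys) - baseline

for name in WEIGHTS:
    print(f"{name}: {permutation_importance(data, targets, name):.4f}")
```

In this sketch the "income" feature carries the largest weight, so shuffling it degrades the model most; a plain-language version of that finding ("income was the main factor in this decision") is the kind of meaningful information about the logic involved that can be surfaced to people or to human reviewers.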
Options to consider:
- Provide different explanations tailored to the intended audience, taking into account the level of knowledge they have about the subject, and ensure the explanations are clear for them to understand.
- Research sector-specific expectations and examples.