Applications of artificial intelligence (AI) increasingly permeate many aspects of our lives. We understand the distinct benefits that AI can bring, but also the risks it can pose to the rights and freedoms of individuals.

This is why we have developed a framework for auditing AI, focusing on best practices for data protection compliance – whether you design your own AI system, or implement one from a third party. It provides a clear methodology to audit AI applications and ensure they process personal data fairly. It comprises:

  • auditing tools and procedures that we will use in audits and investigations;
  • this detailed guidance on AI and data protection; and
  • a toolkit designed to provide further practical support to organisations auditing the compliance of their own AI systems (forthcoming).

This guidance is aimed at two audiences:

  • those with a compliance focus, such as data protection officers (DPOs), general counsel, risk managers, senior management, and the ICO’s own auditors; and
  • technology specialists, including machine learning experts, data scientists, software developers and engineers, and cybersecurity and IT risk managers.

The guidance clarifies how you can assess, from a data protection perspective, the risks to rights and freedoms that AI can pose, and the appropriate measures you can implement to mitigate them.

While data protection and ‘AI ethics’ overlap, this guidance does not provide generic ethical or design principles for your use of AI. Instead, it is structured around the data protection principles, as follows:

  • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
  • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
  • part three addresses data minimisation and security; and
  • part four covers compliance with individual rights, including rights related to automated decision-making.

The accountability principle makes you responsible for complying with data protection and for demonstrating that compliance in any AI system. In an AI context, accountability requires you to:

  • be responsible for the compliance of your system;
  • assess and mitigate its risks; and
  • document and demonstrate how your system is compliant and justify the choices you have made.

You should consider these issues as part of your DPIA for any system you intend to use. You should note that, in the majority of cases, you are legally required to complete a DPIA if you use AI systems that process personal data. DPIAs offer you an opportunity to consider how and why you are using AI systems to process personal data and what the potential risks could be.

You also need to take care to identify and understand controller/processor relationships. This is due to the complexity and mutual dependency of the various kinds of processing typically involved in AI supply chains.

As part of striking the required balance between the right to data protection and other fundamental rights in the context of your AI systems, you will inevitably have to consider a range of competing considerations and interests. During the design stage, you need to identify and assess what these may be. You should then determine how you can manage them in the context of the purposes of your processing and the risks it poses to the rights and freedoms of individuals. Note, however, that if your AI system processes personal data, you must always comply with the fundamental data protection principles; you cannot ‘trade’ this requirement away.

When you use AI to process personal data, you must ensure that it is lawful, fair and transparent. Compliance with these principles may be challenging in an AI context.

AI systems can exacerbate known security risks and make them more difficult to manage. They also present challenges for compliance with the data minimisation principle.

Two security risks that AI can increase are the potential for:

  • loss or misuse of the large amounts of personal data often required to train AI systems; and
  • software vulnerabilities being introduced through new AI-related code and infrastructure (a minimal mitigation sketch follows this list).
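
For illustration, here is a minimal sketch of one common mitigation for the second risk: verifying the integrity of an externally sourced model artifact or dependency before loading it. The file name and digest below are hypothetical placeholders, not values from this guidance.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use an AI artifact (model weights, dataset, package)
    whose checksum does not match the digest published by its source."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}: got {digest}")

# Hypothetical usage: the expected digest would come from your own
# release records or the supplier's published checksums.
# verify_artifact(Path("model_weights.bin"), "9f86d081884c7d659a2f...")
```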

By default, the standard practices for developing and deploying AI involve processing large amounts of data, and there is a risk that this fails to comply with the data minimisation principle. However, a number of techniques exist that support both data minimisation and effective AI development and deployment; one is sketched below.
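
As a minimal sketch of one such technique, perturbation, the example below adds random noise to numeric training features so that individuals' exact values are not retained while aggregate patterns remain usable. The data and noise scale are illustrative only; a real deployment would calibrate the noise formally, for example using differential privacy.

```python
import random

def perturb(values, scale):
    """Add zero-mean Gaussian noise to numeric feature values so the
    training set no longer holds data subjects' exact values."""
    return [v + random.gauss(0.0, scale) for v in values]

# Illustrative only: ages of five data subjects, perturbed before training.
ages = [34, 51, 29, 62, 45]
print(perturb(ages, scale=2.0))
```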

The way AI systems are developed and deployed means that personal data is often managed and processed in unusual ways. This may make it harder to understand when and how individual rights apply to this data, and more challenging to implement effective mechanisms for individuals to exercise those rights.
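
As one illustration of such a mechanism, the sketch below indexes every location where a data subject's records appear across an AI pipeline, so that access and erasure requests can be located and actioned. The class and location names are invented for this example and do not come from the guidance itself.

```python
from collections import defaultdict

class SubjectIndex:
    """Track where each data subject's records live (raw store,
    training set, feature cache, ...) to support rights requests."""

    def __init__(self):
        self._locations = defaultdict(set)

    def record(self, subject_id: str, location: str) -> None:
        self._locations[subject_id].add(location)

    def locate(self, subject_id: str) -> set:
        """Access requests: list everywhere this subject's data appears."""
        return set(self._locations[subject_id])

    def erase(self, subject_id: str) -> set:
        """Erasure requests: return the locations to purge, then forget."""
        return self._locations.pop(subject_id, set())

# Illustrative usage:
idx = SubjectIndex()
idx.record("subject-123", "raw_uploads/2020-05.csv")
idx.record("subject-123", "training_set_v3")
print(idx.locate("subject-123"))
```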