The ICO has launched a consultation on its draft guidance on the AI auditing framework.
The guidance contains advice on how to understand data protection law in relation to artificial intelligence (AI) and recommendations for organisational and technical measures to mitigate the risks AI poses to individuals. It also provides a solid methodology to audit AI applications and ensure they process personal data fairly.
Aimed at both technology specialists developing AI systems and risk specialists whose organisations use them, this guidance will help you assess the risks to individuals' rights and freedoms that AI can pose, and identify the appropriate measures you can implement to mitigate them.
The ICO supports innovation and understands the benefits AI can bring as well as the risks. We want to engage, educate and influence those innovating to ensure data protection can be built into AI systems in practice.
This is the first piece of guidance published by the ICO with a broad focus on the management of several different risks arising from AI systems, as well as the associated governance and accountability measures. It is essential for the guidance to be both conceptually sound and applicable to real-life situations, as it will shape how the ICO regulates in this space. This is why feedback from those developing and implementing these systems is essential.
We are seeking feedback from those with a compliance focus, such as:
- data protection officers (DPOs);
- general counsel; and
- risk managers.
We are also seeking feedback from technology specialists, including:
- machine learning experts;
- data scientists;
- software developers and engineers; and
- cybersecurity and IT risk managers.