In February 2020, the ICO published draft guidance on the AI auditing framework, with an initial deadline of 1 April 2020 for comments. Due to the coronavirus pandemic, this deadline was extended until 1 May 2020.
Our survey asked for:
- feedback on how well pitched each section of the guidance was;
- views on the list of controls organisations could use to mitigate some of the risks AI poses to individual rights;
- practical examples that could further inform our thinking; and
- any further general comments respondents wished to make.
The ICO would like to thank all those organisations and individuals who took the time to read the draft guidance and give us their views, and those who offered to work with us further. We are especially grateful that even in times of a global pandemic, you made time to engage with us on this guidance. We have carefully noted all your comments, and these have been invaluable in shaping our thinking on this topic as we produced the final version of the guidance.
The ICO has launched a consultation on its draft guidance on the AI auditing framework.
The guidance contains advice on how to understand data protection law in relation to artificial intelligence (AI) and recommendations for organisational and technical measures to mitigate the risks AI poses to individuals. It also provides a solid methodology to audit AI applications and ensure they process personal data fairly.
Aimed at both technology specialists developing AI systems and risk specialists whose organisations use AI systems, this guidance will help you assess the risks to rights and freedoms that AI can pose, and identify the appropriate measures you can implement to mitigate them.
The ICO supports innovation and understands the benefits AI can bring as well as the risks. We want to engage, educate and influence those innovating to ensure data protection can be built in to AI systems in practice.
This is the first piece of guidance published by the ICO that has a broad focus on the management of several different risks arising from AI systems as well as governance and accountability measures. It is essential for the guidance to be both conceptually sound and applicable to real life situations as it will shape how the ICO will regulate in this space. This is why feedback from those developing and implementing these systems is essential.
We are seeking feedback both from those with a compliance focus, such as:
- data protection officers (DPOs);
- general counsel; and
- risk managers.
and from technology specialists, including:
- machine learning experts;
- data scientists;
- software developers and engineers; and
- cybersecurity and IT risk managers.
You can respond to this consultation via our online survey or you can download the document and email it to: [email protected].
In March 2019, we launched a call for views about our initial thinking in relation to auditing AI. Since then, our thinking has developed and we have established a more practical approach to the guidance, so if you have already engaged with us, please feel free to provide feedback once again.
Due to recent circumstances, the deadline to submit your feedback has been extended. The consultation will now close at 5pm on Friday 1 May 2020.
You can also watch a webinar on Wednesday 26 February at 2pm. During the webinar, Dr Reuben Binns, our Postdoctoral Research Fellow in AI, will discuss both the content of the guidance in more detail and the next steps in the AI auditing framework project.