The ICO is seeking views to help shape and improve our AI and data protection risk mitigation and management toolkit.
The toolkit is designed to help risk practitioners identify and mitigate the data protection risks that AI systems create or exacerbate. It will also help developers think about the risks of non-compliance with data protection law.
The toolkit reflects the ICO’s internal AI auditing framework and our AI and data protection guidance, and provides further practical support to organisations auditing the compliance of their own AI systems.
We are looking for views from a wide range of organisations of all sizes and sectors to help make this toolkit as widely applicable as possible.
We want to hear from people in compliance-focused roles, as well as people in more technical roles who are responsible for the design, development and maintenance of AI systems that process personal data.
We are releasing this as an alpha version. A beta version of the toolkit will be published in the summer, following initial feedback and further technical development. Beyond that, we will continue to iterate and update the toolkit so it remains relevant and practical.
A survey accompanies this toolkit. Please send us your comments by 19/04/2021 by completing the survey and sending it to [email protected].