Simon McDougall, Deputy Commissioner - Regulatory Innovation and Technology, discusses the relationship between AI and data protection as the ICO publishes new AI guidance.
30 July 2020
Over the past few years, I have witnessed amazing uses of Artificial Intelligence (AI) in areas such as online retail, banking, and healthcare. The recent pandemic has driven innovation in the use of technology and data, but some of the challenges for organisations using AI are constant. For example: is AI the right technology for the problem? What ethical issues does it create? How can we be sure the use of AI is lawful?
AI offers opportunities that could bring marked improvements for society. But shifting the processing of personal data to these complex and sometimes opaque systems comes with inherent risks.
Understanding how to assess compliance with data protection principles can be challenging in the context of AI. From the exacerbated, and sometimes novel, security risks that come from the use of AI systems, to the potential for discrimination and bias in the data, it is hard for technology specialists and compliance experts to navigate their way to compliant and workable AI systems.
It is with those challenges in mind that we have today released our guidance on artificial intelligence as part of our commitment to enable good data protection practice in AI.
The guidance contains recommendations on best practice and technical measures that organisations can use to mitigate the risks caused or exacerbated by the use of this technology. It reflects current AI practice and is practically applicable.
The guidance is the culmination of two years of research and consultation by Professor Reuben Binns and the ICO AI team. I am deeply grateful to them for their work, and also to the wide range of stakeholders who provided feedback to us throughout.
Technology using AI is characterised by fast-moving innovation and evolution, and we will continue to evolve our guidance to keep pace with it. We will keep seeking feedback on the guidance to help us achieve this goal, as well as continuing to engage with experts to explore the frontiers of this technology whilst also growing our own expertise.
It is my hope that this guidance will answer some of the questions I know organisations have about the relationship between AI and data protection, and will act as a roadmap to compliance for those designing, building and implementing AI systems.
Simon McDougall is Deputy Commissioner - Regulatory Innovation and Technology at the ICO where he is developing an approach to addressing new technological and online harms. He is particularly focused on artificial intelligence and data ethics.
He is also responsible for the development of a framework for auditing the use of personal data in machine learning algorithms.