The ICO exists to empower you through information.

Information Commissioner John Edwards’ keynote speech at TechUK’s Digital Ethics Summit 2023, delivered on 6 December 2023.

Check against delivery

Kia ora, good morning and welcome to TechUK’s annual Digital Ethics summit. I’m happy to be here talking to you all, a roomful of people who care as much as I do about protecting people’s fundamental privacy rights.

Merriam-Webster's word of the year for 2023 was revealed a couple of weeks ago. It’s “authentic”. In the age of AI, deepfakes and ChatGPT, it’s an interesting choice.

If we also think about the theme of this year’s summit, which is “seizing the moment”, I can see a neat link between the two. It’s important that organisations remain “authentic” if they are to use AI or other emerging technologies to “seize the moment”. If an organisation isn’t authentic, or if the public think that they can’t trust the organisation with their information, then they’ll go elsewhere. Obviously, seizing the moment means many different things in many different circumstances. It also presents an interesting question – how can we, as industry leaders, regulators and organisations, seize the moment? Is it through looking to the future, seeing what’s coming down the track, and preparing for that? Or is it, as I’m about to argue, about realising and acknowledging that the future we were considering is actually our present, and that there’s work to do to ensure the future remains bright? More on that later.

The main technological advancement I want to discuss today is artificial intelligence. It’s ubiquitous. How many of you in the room used AI this morning without even thinking about it? It’s in your phones, on your smart watches, it’s in how you travel, even in your music choices on your commute. It’s an essential part of your morning routine. People have been talking and thinking about the ramifications of things like smart devices for a while now, but they have never been so entrenched in modern society.

For example, I’m sure you’ve been thinking about how much personal information these devices are actually collecting, even from something as simple as finding your way to this conference today.

If you used a navigation app to get here, its AI model knows something about you – it knows your home address, where you are now, that your home is most likely empty. If a hacker were to gain access to that information, it would make it easy for them to know when people were out at work. It’s the same with wearable devices – if someone gained access to people’s smart watches, they would have access to location data.

I said before that people already knew about the risks of smart devices. However, I think we’ve moved from a tacit understanding of those risks to cyber-attacks and data scraping becoming a very real possibility. AI brings in new challenges. For example, the CMA recently published an initial review of foundation models. Their review found that a lack of access to closed-source AI models, like those used to build ChatGPT, could weaken competition and result in higher prices and unfair terms of use. The CMA also highlighted that intellectual property and copyright were important issues in the use of AI models. Some have raised concerns after their material was used, uncredited, in the building of AI models. The report calls for developers to have access to data and computing power to ensure that early AI developers do not gain an unfair and entrenched advantage over others. We’re working with the CMA on this issue, as there are a number of areas of crossover between competition, consumer and data protection objectives when it comes to AI.

But of course, there is a danger here. The Guardian recently ran a story about OpenAI, saying that their staff were alarmed by how powerful their AI model was, with some raising safety concerns with the board. One example raised by an OpenAI employee about open-source AI was an AI system learning how to set up its own biological laboratory. The results of that could be catastrophic. Another example is that an open-source AI model could be used to start a disinformation campaign to disrupt upcoming elections.

I want to make it clear from the very start that we are not against organisations using AI. We just want to ensure that they are using AI in sensible, privacy-respectful ways, ensuring that people’s personal information and privacy rights remain protected throughout.

People are excited about the benefits that AI brings. Research by the Office for National Statistics and KPMG has shown an increased knowledge and awareness of AI.

But the research isn’t all positive. Pew Research reported in August that people are becoming less trusting of AI. Their research found that 52% of those surveyed were more concerned than excited about AI. This is an increase from 37% in 2021 and 38% last year. Although their survey relates to American citizens, the results are still surprising to say the least.

If people don’t trust AI, then they’re less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole. This needs addressing – 2024 cannot be the year that consumers lose trust in AI.

This isn’t a surprise to us. It fits in with what we’ve been seeing all year, and links in with our investigations work in this area. However, we want to ensure that people don’t stop using apps or technology because they’re worried about the risks to their data. That’s what we’re here for – to ensure that people’s information is protected, to anticipate their concerns and to take action before their information is compromised.

For example, we issued a preliminary enforcement notice against Snap Inc because we were concerned about its potential failure to properly assess the privacy risks posed by its generative AI chatbot, My AI. This was particularly important as it involved the personal information of children. Our investigation is ongoing, and we haven’t reached any firm conclusions yet, but what our investigation does is send a signal to the rest of the industry. It ensures that others sit up, take notice and think about their compliance. This shows that we’re taking action in this space. I have been clear that organisations must consider the risks associated with AI, alongside the benefits. The tech may be new, but the same data rules apply here as they always have done.

Last year, we issued a £7.5m fine to facial recognition database company Clearview AI for collecting and storing images of UK residents. We’re seeking permission to appeal the judgement of the Tribunal, which ruled that Clearview fell outside the reach of UK data protection law, but our action in this space demonstrates how AI models, like the database that Clearview manages, can be used in ways that the public may not expect or agree to. That’s wrong, and that’s where we can intervene and provide clarification and reassurance for the UK public.

However, AI offers many benefits, such as automating processes that would otherwise be time- or labour-intensive. For example, at the ICO we are planning to introduce an AI solution to help us assess and highlight examples of websites using cookie banners that breach data protection laws. We are aiming to run a hackathon event in early 2024 to explore this problem and proposed solutions in more detail.

We’ve also been clear about where we’ve investigated and haven’t found any concerns. At the start of this year, we highlighted how concerns were raised with us about the use of algorithms in decision-making around benefit entitlement and the welfare system by local authorities and the Department for Work and Pensions. We looked under the hood, explored and investigated how AI was being used in these decisions and whether there was any sign of discrimination or negative impact.

Our investigation did not find any evidence to suggest that the use of AI in the welfare and social care sector was subjecting people to harm or financial detriment. The local authorities we spoke to confirmed that there was significant human involvement before any final decisions were made on benefit entitlement.

We shared the results of our investigation because I believe it’s important to show to both the public and other organisations that AI can be used in positive ways. It provides a sense of certainty and helps build trust that AI, when used with people’s privacy rights in mind, can improve processes. These kinds of findings don’t make headlines, but they’re vitally important – they show the balance of our work and allow us to highlight examples of good practice as well as bad.

The benefits that AI can bring are plain to see. As I mentioned at the start of this speech, it can help improve our lives by automating some of our most mundane processes – picking the quickest route to work, providing the perfect soundtrack for your day or taking the guesswork out of what to watch on TV. AI also means good things for industry – whether that’s through new innovations to improve customer service, better safety features for online services or quicker resolutions for common technical issues.

But these benefits can’t be allowed to be outweighed or overshadowed by public concern.

By virtue of you attending this summit, engaging with TechUK and listening to me talk, I believe I’m safe to assume that all of you in this room today understand and appreciate both the benefits and dangers of AI. I believe I can also assume that you understand and appreciate that our existing regulatory framework allows for firm and robust regulatory intervention as well as innovation. Please feel free to correct me if my assumptions are wrong!

However, the hard work that you are all putting in – adhering to the law, taking the time to understand how to protect people’s information, learning how to embed privacy by design and not as an afterthought – risks being undermined by those who seek to use AI for nefarious purposes, misusing advances in technology to harvest data or treat their customers unfairly. We know there are bad actors out there who aren’t respecting people’s information and who are using AI to gain an unfair advantage over their competitors. Our message to those organisations is clear – non-compliance with data protection will not be profitable. Persistent misuse of customers’ information, or misuse of AI in these situations, in order to gain a commercial advantage will be punished. Where appropriate, we will seek to impose fines commensurate with the ill-gotten gains achieved through non-compliance. But fines are not the only tool in our toolbox. We can order companies to stop processing information and delete everything they have gathered, like we did with Clearview AI.

We’re also interested in getting into the weeds of the AI supply chain. We want to understand how the AI models work – what information are they trained on? Does this introduce any unwanted bias or discrimination against minority groups or those who aren’t represented as widely in society? And how can we, as the data protection regulator, ensure that these biases aren’t carried over and incorporated into later AI models?

It’s important that our voice is heard in these debates, particularly as large language models such as ChatGPT grow in popularity. My colleague Stephen Almond, who you’ll be hearing from later today, spoke recently to the House of Lords Communications and Digital Committee about LLMs and more broadly about the AI value chain. Stephen highlighted our position in the AI space and how we’ve been regulating here for a long time – as I’ve laid out here this morning, this isn’t a new issue for us. There is no regulatory lacuna here; the same rules apply as they always have done. Stephen also mentioned that, as AI model developers and deployers both work with personal data, we are able to act across the entirety of the AI value chain. That creates choices for us about where we can have the greatest impact. He also talked about our ability to tackle risks upstream, such as bias, before they flow downstream into the deployment of AI models.

Moving on to how the ICO can help you, I want to highlight just some of the ways that we’re helping organisations who want to do the right thing to understand their responsibilities when it comes to implementing or using AI.

Back in 2020, we worked closely with members of the public and the Alan Turing Institute to discover how people actually wanted to receive information about decisions made using AI. This research was important, as it gave us a real insight into what people thought about AI in general. The citizen juries and focus groups that we attended helped inform our co-badged guidance with the Turing Institute, giving organisations a blueprint for explaining decisions made by AI to their customers or clients. We also published guidance looking at the intersection and cross-regulatory positions of AI and data protection – where they crossed paths, where our jurisdiction ended, and which other regulators were involved. This set the tone for regulatory cooperation on AI, which has since blossomed into a full workplan with our partners at the DRCF – I’ll cover that in more detail later.

We haven’t just written guidance, though. We know that some of you will have specific, detailed and technical questions around your use of AI and how it interacts with data protection. That’s where our Sandbox and Innovation Advice services come in.

Through the Sandbox, we’ve created a “safe space” for you to come in and work together with ICO experts on your innovative product or service. Our staff are on hand to help you work your way through any tricky data protection-related issues you may come up against. The Sandbox also offers you a way to stress-test your product or service before releasing it to the wider market. This way, we can help you to iron out any potential problems before they occur, resulting in a smoother process for you and your clients or customers.

Applications for the Sandbox are open until the end of December – I urge anyone in the room right now, who’s working with AI and wants to ensure that they’re baking in data protection from the very start, to get in touch with the Sandbox team and to submit an application. We’d be happy to help.

We work closely with our partners in the DRCF – the Digital Regulation Cooperation Forum – to ensure a joined-up approach to innovation. We founded this cross-regulatory group to ensure that there was a cohesive and collaborative approach to issues that affect society. This collaborative approach is fundamental to our effectiveness as a whole-economy regulator. Our DRCF partners – Ofcom, the Financial Conduct Authority and the Competition and Markets Authority – have worked with us to produce guidelines, share knowledge and consider cross-regulatory issues affecting people across the UK, which includes AI. Through the DRCF, we are piloting a multi-agency advice service, helping innovators to develop their ideas with regulatory compliance in mind. This “AI and Digital Hub” will provide tailored advice to help businesses navigate the process and remain compliant. We are aiming to launch the pilot service in 2024 so keep an eye out for that if you’re interested.

I also want to talk about our award-winning Innovation Advice Service, which we launched earlier this year. Innovation Advice is a fast, direct service, offering advice and guidance to organisations struggling with data protection issues which may be holding up the progress of a new product, service or business model. We’ll answer your question within 10-15 working days, giving you clarity and confidence on the data protection aspects of your project. Since its beta launch in April, we’ve answered questions on how a law firm can deploy a generative AI system to draft responses to clients, which lawful basis applies to collecting and storing employee diversity data, and whether telling customers about an online fraud prevention tool would be classed as direct marketing. An eclectic bunch!

This is more than just simple promotion of ICO services. The point of highlighting the help that is available from the regulator is so that everyone in this room can go and spread the word – the ICO is here to help. We are an important and influential leader in the AI space and we want people to come to us for advice, help and guidance if they need it.

However, we also want to make our expectations clear. Privacy and AI go hand in hand – there is no either/or here. You cannot expect to utilise AI in your products or services without considering data protection and how you will safeguard people’s rights. There are no excuses for not ensuring that people’s personal information is protected if you are using AI systems, products or services.

We can help you here, as I’ve laid out this morning. The data protection principles provide a robust, risk-based, context-specific framework for the governance of AI and emerging tech – it means that organisations deploying these systems can consider the potential risks and harms to people. However, there are also some things that you can do outside of engaging with us. For example, we’d encourage you to reach out to your prospective users – where do they see risks? What do they want to see more transparency on? What would help them to make an informed decision on your product or service? You can then take this feedback and build it into your DPIA or broader risk assessment process.

Of course, it’s likely that the ICO is not the only regulator that can help you with this. We’ve worked closely with our DRCF partners on issues like AI that are cross-cutting and cross-regulatory. For example, we’ve strengthened our relationship with Ofcom to ensure that, when the Online Safety Act comes into force, we achieve maximum alignment and consistency between our individual regimes. Where the development of AI systems and their use of personal information overlaps with competition law, we’ve liaised with our counterparts at the CMA to ensure a joined-up, no surprises approach to regulation.

We’ve also worked with the DRCF to anticipate and consider our approach to emerging technologies, like AI, so that we are fully prepared for what is coming down the track. In particular, what will 2024 look like in terms of AI and emerging tech? This links in nicely with some of the later sessions today, including the meet the regulators panel, which my colleague Stephen Almond is taking part in.

I’ll draw to a close there as I know we’ve got some questions to come – but I want to end with a clear message to industry. We are here to help. If you have concerns or questions about using AI – whether that’s in the development of a new product or service, or if you’re looking to improve your current offering – then come to us, visit our website or call our helpline. You can find all of our AI-related content on our website at ico.org.uk/AI – there’s guidance, toolkits and much more, so please don’t hesitate to use what’s available to you. We’re also working on some blogs over the next few months, setting out some of our thinking on aspects of AI – we're keen to hear your thoughts on these, so please get in touch once they’re published.

My closing message is about trust – make sure that 2024 isn’t the year that people lose trust in AI. As I mentioned at the start, people are thinking about authenticity – it's the word of 2023. It’s important, as we move into a world where AI plays a significant part in our lives, that we stay authentic. And you can do that by ensuring that people’s rights are protected from the very start. Thanks for listening, and I’m happy to take questions.