The ICO exists to empower you through information.

Information Commissioner delivers opening keynote speech at IAPP’s Data Protection Intensive on 28 February 2024.

Hello – thanks for having me here this morning. It's always a pleasure, and a key part of my diary, to speak at and attend various IAPP events around the world. The Data Protection Intensive kicks off my IAPP-related events for the year.

Looking at the agenda for the next two days, I was struck by how AI- and technology-focused it is. I suppose that’s to be expected in 2024 – a year in which we have already seen widespread use of biometrics, the continuing rise of AI chatbots and applications, and an ever-increasing reliance on Internet of Things devices to help us navigate our daily lives.

Back in December, I warned that 2024 cannot be the year that consumers lose trust in AI. I stand by that statement today.

--

When I started my role as Information Commissioner two years ago, I discovered that my office in Wilmslow had copies of the ICO’s annual reports dating back to the ICO’s creation as the office of the Data Protection Registrar in 1984. These reports shared common themes – protecting the public, helping organisations understand their obligations under the law, and upholding information rights in the digital age. Those themes haven’t changed in the 40 years that the ICO has been around – they’re still at the heart of everything we do.

The past informs the future. The biggest questions we were thinking about 40 years ago have led to the biggest questions that we’re tackling today.

To be specific, some of the areas we’re looking at are children’s privacy, our advertising technology work and AI. These are areas where we can see potential harms and public concerns. But they’re also the areas with the most significant potential for public benefit, innovation and growth if used responsibly. Given our limited resources, we want to get the most bang for our buck and focus our efforts where we can make the biggest difference in 2024.

--

That theme of protecting the public feels entirely contemporary when we think about children’s privacy. There’s been a lot of chat in the press recently about whether kids should be on social media before 16 – or whether they should be allowed phones at all. There are important questions here. Should the social media platforms allow users under the age of 16 to have accounts? Should they have stronger checks and balances, or does that responsibility lie with parents?

These are broad, societal questions, and ones that it is not our place to answer. But where they intersect with data protection, it is the responsibility of those of us in this room to make decisions. And we need to bring the public and wider society along with us – it must be a joint and coordinated push towards ensuring kids are safe online.

Anyone familiar with the ICO’s work won’t be surprised to learn that children’s privacy will form a large part of our work this year. Our Children’s code has been in place for over two years, and we’ve seen a number of really important changes – some of the largest online platforms have improved their default settings, reduced targeted advertising and introduced parental controls. We’ve been working with the industry through voluntary audits, one-to-one engagement on DPIAs and enforcement to help them get it right.

But there’s lots more for us to do. In January, we published an updated Commissioner’s Opinion on age assurance, building on our joint work with Ofcom. We’re also working closely with colleagues there to ensure our priorities remain aligned as the Online Safety Act comes into force. And we’re in the process of developing key areas of interest to persuade influential stakeholders to change their practices and set expectations across the marketplace.

This is not just an ICO project. Regulatory collaboration and cooperation are key. We recently worked with Ofcom on content moderation guidance, providing clarity about where our individual remits sit. We want organisations in scope of the Online Safety Act to know how to comply with data protection law as they carry out content moderation.

In terms of enforcement – our cases against Snap and TikTok are ongoing, and there are several other investigations underway that I can’t yet give details about. We will continue to apply the full range of our regulatory powers to keep children’s personal information safe.

--

In those early ICO reports of the 1980s, no one could have predicted the rise of social networks and video sharing platforms among audiences young and old. We reap the benefits of the internet – but what happens when the internet reaps the benefits of our data? We’ve struggled a bit to get the AdTech horse back in the stable, but this year we will be prioritising the fair use of cookies. We’ve all clicked “accept all” when a banner pops up, just to get to what we want from a site more quickly.

For many, those cookie banners are the most visible manifestation of data protection law. For some, they are a daily reminder of the lack of real choice, and of the power imbalance we face when we go online. We found that some websites weren’t giving users a fair choice over whether they were tracked and received personalised advertising. We looked at the top 100 websites in the UK and identified 53 of them as potentially having non-compliant cookie banners. We put them on notice: they had 30 days to make changes or face enforcement action.

This worked.

Of those 53, 38 organisations have changed their cookie banners to be compliant, and a further four have committed to reach compliance within the next month. That’s an almost 80% success rate – 42 of the 53.

Our message is clear – it must be just as easy to reject all non-essential cookies as it is to accept them. This will reset the balance of power between online advertisers and aggregators on one side, and us, the users, on the other.

Promoting user control and choice is critical to empowering people to share their information confidently with the existing and future products and services that drive our digital economy and society. Empowering people to make effective and informed choices helps build consumer trust.

But in a world of millions of websites, it’s not going to work for us to change one, or even one hundred, at a time. We’re going to need to automate this process.

To that end, we are today hosting our first ever hackathon, with internal colleagues and external technical experts, focusing on how we can monitor and regulate cookie compliance at scale. The aim is to create a prototype of an automated tool that would assess cookie banners on websites and highlight breaches of data protection law, giving people greater confidence in how their information is captured and used.

Our bots are coming for your bots.
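
To give a flavour of what such a prototype might check, here is a minimal sketch in Python, assuming the Playwright browser automation library and some hypothetical button wordings of my own – an illustration of the idea only, not the ICO’s actual tool:

```python
# A minimal, illustrative sketch of the kind of automated check the hackathon
# is exploring: load a site, look for its cookie banner, and test whether
# rejecting non-essential cookies is offered as readily as accepting them.
# All wordings, heuristics and the compliance signal below are hypothetical
# assumptions, not the ICO's prototype.
import re
from playwright.sync_api import sync_playwright

# Hypothetical button wordings; a real tool would need a far richer lexicon.
ACCEPT_PATTERN = re.compile(r"accept all|allow all|agree", re.I)
REJECT_PATTERN = re.compile(r"reject all|decline|refuse all", re.I)

def assess_cookie_banner(url: str) -> dict:
    """Return a rough first-layer compliance signal for one site."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="domcontentloaded")

        # Heuristic: count buttons whose accessible name matches each
        # action on the first layer of the page.
        accepts = page.get_by_role("button", name=ACCEPT_PATTERN).count()
        rejects = page.get_by_role("button", name=REJECT_PATTERN).count()
        browser.close()

    return {
        "url": url,
        "accept_offered": accepts > 0,
        "reject_offered_first_layer": rejects > 0,
        # Flag sites where "accept all" is one click away but "reject all"
        # is absent from the first layer - the "just as easy" test.
        "potentially_non_compliant": accepts > 0 and rejects == 0,
    }

if __name__ == "__main__":
    print(assess_cookie_banner("https://example.com"))
```

A production tool would of course need to handle consent banners rendered inside iframes, non-English wordings and multi-layer settings screens, but the core comparison – is rejecting as easy as accepting? – is essentially the one above.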

--

And so, to the biggest question on my desk – and the most mentioned item on this conference’s agenda – AI. Artificial intelligence has been on the ICO’s radar for a long time. That’s no surprise. AI has transformed, and will continue to transform, our lives.

But it also raises questions. How much control are we willing to give the organisations who develop and deploy this technology? What are we giving up in return – information about our health, our routines, our desires? To what extent does the current law apply? Are there gaps, and where are they?

These questions are multifaceted. Generative AI – an emerging area of interest for developers and organisations – also throws up a range of questions.

When is it lawful to scrape data from the internet to train generative AI models? Are people’s rights being meaningfully protected when AI models are built using their information? What do concepts such as purpose limitation and accuracy really mean in this context? What safeguards need to be considered by developers and organisations when exploring models?

While there are lots of questions to consider, what is clear is that any generative AI model must be developed and used in a way which complies with the UK GDPR’s principles.

We’ve been working at pace to provide further clarity on how the law applies to these emergent AI models. We want to ensure people’s rights are protected, while still allowing for innovation. With models being developed and adopted rapidly, our work in this area is becoming urgent.

Given AI is the focus of the conference, I’d like to stay a little longer on this topic.

We've opened a consultation series on generative AI, outlining our initial thoughts and approaches, seeking to provide clarity in a changing technological landscape.

Each chapter of the consultation focuses on a different part of the law. The first chapter looks at the lawful bases for web scraping to train generative AI models. Our initial thoughts are that the legitimate interests lawful basis may be valid, but only if the model’s developer can pass the three-part test: identifying a legitimate purpose, showing that the processing is necessary, and balancing the developer’s interests against people’s rights.

Earlier this week we published our second chapter in the series, looking at how purpose limitation should be applied at different stages of the AI lifecycle.

In future chapters, we’ll set out our expectations around complying with the accuracy principle, and around accountability and controllership across the supply chain. If you have thoughts or insights that could help us develop our policy positions, please respond to our consultation – you can find it on our website.

As I’ve previously said, I have made a commitment that we will not miss the boat on AI in the way policymakers and regulators did with social media. I think it’s important that I share our early thinking with you – through these consultations – on how we are proactively addressing the opportunities and challenges AI presents.

This is not a mission for the ICO alone. We are working closely with the CMA on a joint statement setting out our positions concerning the foundation models that underpin generative AI, which we hope to publish later this year. This statement will explore the intersection between data protection, competition and consumer protection regulation when applied to AI.

We’ll also continue to work with our DRCF partners – the FCA, Ofcom and the CMA – to support the responsible development of AI. We’re piloting an “AI and Digital Hub” service, which will provide tailored regulatory support to innovators. This service will help businesses using cutting-edge technology to bring new products and services to market.

--

Thinking back to those old annual reports, I’m reminded that a regulator’s job is never easy. There’s a balance to be struck between being a champion of innovation, of supporting new technology, and being a protector of people’s rights. While I believe the two can coexist, and aren’t in direct opposition, it sometimes requires careful consideration to balance the scales. Take the use of biometrics as an example. If I asked you what you would consider biometric surveillance, then I’m guessing you would say facial recognition technology. But would you also consider the systematic scraping of photographs from the internet? Or using AI to monitor public transport hubs for ‘suspicious activity’?

This is a wider societal issue. The question of where you draw the line – comfortable having a Ring doorbell to look out for visitors, but not comfortable having your image captured by cameras as you walk down the street – is one with no single answer. In some contexts, it can be difficult for businesses to decide what is lawful.

We’re keen to provide guidance and clarity where possible in these grey areas.

Look at our recent enforcement action against Serco. They were using biometrics to record workers’ attendance, which was directly linked to their pay. They didn’t clearly offer their workers an alternative way to log their entry and exit, which increased the imbalance of power between the employer and the employees. This wasn’t a proportionate or necessary use of biometric data, and it meant that employees didn’t have a genuine choice when providing their information – so their consent wasn’t valid.

This is just one example, and it’s likely there are many more. Biometrics is an area of developing interest, and organisations need to know how to use this technology in a way that doesn’t interfere with people’s rights.

This is why we have published detailed guidance on biometrics and data protection.

This first phase of the guidance explains how data protection law applies to biometric recognition. It explores how people’s rights apply to biometric data, and provides more context on what appropriate protection of biometric data can involve.

--

I’ll bring it to a close there. As I’ve said today, the key message I want to leave you with is that the ICO is working to protect people’s information, while still championing innovation and privacy-respectful practices. We want to give you certainty – particularly on those big issues that I’ve spoken about today of children’s privacy, advertising technology, AI and biometrics.

Supporting you is the best way we can empower people and organisations.

That hasn’t changed in the past 40 years – and it won’t change for the next 40, or the 40 after that.