08 March 2023

For International Women’s Day, Sophia Ignatidou, Group Manager for AI and Data Science, discusses how bias can arise in AI, the importance of addressing AI-driven discrimination and how we can all work towards equity in these systems. Her blog also appears on the International Women’s Day website.

As a woman and an immigrant, I have always held the concepts of equity and inclusion close to my heart. I began my career as a journalist, working for newspapers in both Greece and the UK. Wanting to have a more meaningful impact on the world, and hoping that a career change would enable this, I decided to study international relations and diplomacy.

After immersing myself in the world of Artificial Intelligence (AI) as part of a leadership fellowship at Chatham House, I am now working as a Group Manager for AI and Data Science at the Information Commissioner’s Office (ICO). In an industry where women are often underrepresented, I am both excited and proud to be leading a team consisting primarily of women working in tech and innovation. It is so important for organisations, especially those involved in shaping policy, to champion diversity and inclusion in the workplace.

From chatbots in classrooms to artwork by algorithm, AI has quickly established itself in our everyday lives. As the use of these technologies grows, it has never been more important to turn our attention to tackling unfairness in these systems and the impact they have on the world. While the academic community has been flagging the discriminatory effects of AI for some time, these issues are now a regular feature in news headlines and on the political agenda.

There are numerous examples of algorithms reinforcing gender stereotypes – whether that’s objectifying images of female bodies, or disproportionately rejecting female applicants for a particular job. However, it is important to understand both the scale and the nuances of this issue.

AI is trained and tested on existing data, so what it learns will often reflect bias that is already present. We cannot expect these systems to be free of human error – after all, humans are the ones building them.

Discrimination, whether based on gender, ethnicity or other personal characteristics, can often be traced back to the original data used to train a model. For example, a lack of accurate data reflecting the needs, interests and experiences of women will likely show up in how the AI performs.

If the data used to train the AI does not fairly represent the demographics of our society, the AI system will not be able to produce appropriate results, increasing the risk of unfair outcomes. The training data, or the way it has been labelled, may also reflect an existing stereotype, leading the AI system to reproduce the same patterns of discrimination – particularly in fields and occupations where this has historically been a problem.
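One simple way to make this concrete is to compare how groups are represented in a training dataset against the population the system will serve. The sketch below is a minimal illustration, assuming a toy pandas DataFrame; the column name, group labels and proportions are invented for illustration, not real figures or an ICO-endorsed method.

```python
import pandas as pd

# Hypothetical training data – the "gender" column and the counts
# below are assumptions made purely for illustration.
training_data = pd.DataFrame({
    "gender": ["male"] * 700 + ["female"] * 300,
})

# Assumed reference proportions for the population the model will serve.
population = {"male": 0.49, "female": 0.51}

# Compare each group's share of the training data with the population.
observed = training_data["gender"].value_counts(normalize=True)
for group, expected in population.items():
    share = observed.get(group, 0.0)
    print(f"{group}: {share:.0%} of training data vs {expected:.0%} of population")
```

A gap like the one above (70% vs 49%) is an early warning that the model may perform worse for the underrepresented group, before any training has even begun.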

Interestingly, an AI system can also be used to identify unfair discrimination – it can be a powerful tool for uncovering hidden patterns in data, both positive and negative, and forcing us to confront these inequalities.
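For instance, a routine statistical check over a system’s decisions can surface a disparity that no one had noticed. The following is a minimal sketch of one common fairness measure, the gap in selection rates between groups (sometimes called the demographic parity difference); the data and column names are hypothetical and chosen only to illustrate the idea.

```python
import pandas as pd

# Hypothetical hiring decisions – the data and column names
# ("gender", "hired") are invented for illustration only.
outcomes = pd.DataFrame({
    "gender": ["female"] * 5 + ["male"] * 5,
    "hired":  [0, 0, 1, 0, 0, 1, 1, 0, 1, 1],
})

# Selection rate per group: the proportion of positive decisions.
rates = outcomes.groupby("gender")["hired"].mean()
print(rates)

# A parity gap well above zero suggests the system may be treating
# the groups differently and warrants closer review.
print("parity gap:", round(rates.max() - rates.min(), 2))
```

Checks like this do not prove discrimination on their own, but they force the hidden pattern into view so that humans can investigate it.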

If AI-driven discrimination is left unaddressed, we could end up shutting out the very people who are best placed to challenge it. To create technology that recognises and addresses discrimination, there needs to be a focused effort across the AI industry to tackle this problem – from researchers to developers, engineers to data scientists.

Those working with AI should take care to determine and document their approach to addressing unfair discrimination from the start – that way, safeguards can be put in place to try and prevent unwanted bias from rearing its head further down the line.

By now, the problem of biased training data has been widely documented, but it is important to note that unfair outcomes are not limited to the training stage of AI. We need to champion equity in every aspect of the AI lifecycle to ensure that future models are as fair as possible and comply with regulations.

At the ICO, we’ve been working on AI and its surrounding issues for some time, striving to empower and inform individuals about their rights, while also helping organisations use personal data responsibly and confidently.

In my team, we are all aware that gender-based discrimination is just one of the many potential risks that AI poses. Using data protection legislation and its principles, we are working hard to raise awareness amongst AI developers of the tools at their disposal to mitigate unfair discrimination.

Tackling AI-driven discrimination is one of the key priorities that we set out in our three-year strategic plan ICO25. As well as investigating concerns about the potential risks posed by the technology, we have issued guidance and practical toolkits to educate AI developers on ensuring their algorithms treat people and their information fairly.

Naturally, the ICO’s work in this space is expanding. With the huge amounts of personal data involved in training AI, it is important that we continue to encourage transparency and best practice. Last year, we hosted workshops with the Alan Turing Institute, and the ICO will soon be publishing an update on fairness in AI as part of ongoing improvements to our existing guidance on AI and data protection. Everything from that guidance to our research programme can be found on the ICO website.