31 March 2023
Stephen Bonner is the ICO's Deputy Commissioner for Regulatory Supervision. He leads programmes of work to develop strategic ICO positions on technology issues such as data, supervision of the large technology platforms and online harms.
From unlocking our mobile phones to online banking verification, facial recognition technology has become an accepted part of our everyday lives.
But how do we feel about live facial recognition technology, enabling CCTV cameras in public places to identify us while we’re out and about? Is it really necessary to have our faces scanned when we are simply buying some milk and a bag of frozen peas?
Some would say yes: live facial recognition can help the police catch suspects or speed our way through border control checks at ports. But as with many things, where there are benefits, there are problems too. Others argue that people's privacy and freedoms are violated, that the technology is imperfect and that innocent people can be adversely affected.
It is against this backdrop that we have considered the live facial recognition technology provided to the retail sector by security company Facewatch.
Facewatch’s product aims to help businesses protect their customers, staff and stock. The system scans people’s faces in real time as they enter a store and sends an alert if a “subject of interest” has entered.
Innovative solutions that help businesses prevent crime are in the public interest and a benefit to society. Data protection law recognises this, allowing personal information – in this case, facial images – to be used if there is a legitimate interest, such as the detection and prevention of crime. However, these benefits must always be balanced against the privacy rights of the individual.
Throughout our dealings with Facewatch, we considered whether its product complied with data protection legislation. Whilst we agreed the company had a legitimate interest in using people’s personal data, we identified various areas of concern.
We highlighted these areas of concern and gave Facewatch time to address them. In response, Facewatch made, and continues to make, improvements to its product. These include reducing the personal data it collects by focusing on repeat offenders or individuals committing significant offences; improving its procedures by appointing a Data Protection Officer; and protecting those classified as vulnerable by ensuring they do not become a “subject of interest”.
Based on the information Facewatch has provided about the improvements it has made and continues to make, we are satisfied the company has a legitimate purpose for using people’s information for the detection and prevention of crime. We’ve therefore concluded that no further regulatory action is required.
Our decision covers the specific aspects of data protection law discussed in this blog, as it applied to Facewatch at a point in time. It is not a blanket approval of Facewatch, nor of LFR use.
Advice to private sector organisations considering LFR use
The closure of our Facewatch investigation does not bring our involvement in this space to an end. Nor does this decision give a green light to the blanket use of this technology. Each new application must be considered on its own merits, balancing the privacy rights of people with the benefits of preventing crime. We will continue to monitor the evolution of live facial recognition technology to ensure its use remains lawful, transparent and proportionate.
This will build on our existing work in this space. In 2019 we published an Opinion on law enforcement use of LFR. This was followed in 2021 by an Opinion on the use of LFR in public places, setting out key requirements for those considering using this technology. These recommendations remain relevant, and we expect all organisations to consider them carefully before deploying LFR technology. Should non-compliance come to light, we may take enforcement action if appropriate.
Note: this blog was updated on 22 May, with the paragraph beginning “Our decision covers…”