Live Facial Recognition in Scotland: A Threat to Race Equality and Human Rights
As part of the National Conversation on Live Facial Recognition, Lucien Staddon Foster, Research and Policy Officer at CRER, examines why introducing this technology to Scottish policing would have highly detrimental effects on race equality, community-police relations and human rights.
Last week, the Coalition for Racial Equality and Rights (CRER) provided a detailed submission to the Scottish Police Authority regarding the potential use of Live Facial Recognition technology in Scottish policing.
This follows the launch of the ‘National Conversation on Live Facial Recognition’ by Police Scotland, the Scottish Police Authority, and the Scottish Biometrics Commissioner.
Our message was clear: not only is live facial recognition technology deeply flawed and discriminatory, but the very process through which public opinion is being gathered is inadequate, biased and poorly designed.
What is Live Facial Recognition – and why should we worry?
Live facial recognition (LFR) is a form of biometric-enabled technology that scans people’s faces in real-time and attempts to identify them based on metrics like the distance between their eyes and nose, or other physical features that make someone’s face unique.
For years, law enforcement agencies have explored how they can pair this technology with surveillance cameras to scan the faces of the public and compare them to police watchlists. In theory, this would allow them to quickly and precisely identify people of interest in a crowd, helping them locate suspects at large, find missing people, and apprehend those who might pose a threat to public safety.
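For readers who want a concrete sense of what that matching step involves, the sketch below is a deliberately simplified, hypothetical illustration written in Python. The idea is that each scanned face is reduced to a numerical ‘template’, which is compared against the templates held on a watchlist; anything closer than a chosen threshold is flagged as a ‘match’. Every name, number and threshold here is invented purely for illustration; real LFR systems rely on far more complex, proprietary models, and it is in those models that the accuracy and bias problems discussed below arise.

```python
# Hypothetical sketch of the matching step in a live facial recognition system.
# All names, numbers and the threshold are invented for illustration only.
import math

def face_distance(template_a, template_b):
    """Euclidean distance between two face 'templates' (lists of measurements)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(template_a, template_b)))

# A toy watchlist: each entry pairs a label with a numerical template
# supposedly derived from a reference photograph.
watchlist = {
    "person_of_interest_1": [0.32, 0.58, 0.11, 0.74],
    "person_of_interest_2": [0.61, 0.12, 0.83, 0.40],
}

MATCH_THRESHOLD = 0.25  # illustrative only; real systems tune this value

def check_against_watchlist(scanned_template):
    """Return the watchlist entries whose templates fall within the threshold."""
    return [
        name
        for name, reference in watchlist.items()
        if face_distance(scanned_template, reference) < MATCH_THRESHOLD
    ]

# A face 'scanned' from a live camera feed (again, purely illustrative numbers).
print(check_against_watchlist([0.30, 0.55, 0.15, 0.70]))  # -> ['person_of_interest_1']
```

Crucially, how the templates are produced, who is placed on the watchlist and where the threshold is set all sit outside this simple comparison, and it is those choices that determine how the technology behaves in practice.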
The technology is often framed as a step toward modernising policing, but its use has raised profound concerns, particularly around human rights, racial inequality and civil liberties.
For instance, campaign groups have suggested that deploying this technology in public spaces means that everyone within range of an LFR-enabled camera would be treated as a potential criminal who must be vetted by the police. This would fundamentally alter the relationship between citizens and the police and directly contradict Police Scotland’s key principle of ‘policing by consent’.
Experts have also raised concerns over who would be added to these police watchlists. While supporters of LFR suggest that watchlists would only be populated by the most sinister of criminals or particularly vulnerable people, the UK’s track record of using LFR paints a very different picture. For instance, police LFR deployments in England and Wales have shown that watchlists have been populated with “protestors not wanted for any offences whatsoever, and people with mental health issues not suspected of any offences,” as well as victims of crime and others thought to pose a ‘risk of harm’ to themselves.
Live facial recognition technology has also been shown to be dangerously inaccurate and highly discriminatory.
A report by the privacy campaigning group Big Brother Watch, analysing operational data from the Metropolitan Police Service and South Wales Police, has shown that nearly 90% of ‘matches’ made by their LFR systems have been false positives. This means that in almost 9 out of 10 cases, the technology had misidentified an entirely innocent person, many of whom will have been unfairly stopped and questioned by the police.
Academic studies and real-world use cases have also shown that LFR systems are prone to significant biases regarding ethnicity and gender. In particular, they are more likely to misidentify women and people from Black and minority ethnic backgrounds, leading to disproportionate numbers of wrongful stops, questioning and arrests.
Some studies have found that LFR software can be ‘100 times more likely to misidentify Asian and Black faces compared to white faces’. Even the latest software, which the police claim has fixed its racial bias, continues to misidentify Black and Asian people.
In practice, using this software would have major consequences for Black and minority ethnic communities, many of which already experience over-policing in their day-to-day lives.
Big Brother Watch, which regularly attends and observes LFR deployments by UK police forces, has reported that Black men and young Black boys make up the majority of those ‘identified’ by the system and stopped by the police.
In fact, the Metropolitan Police Service and the facial recognition company FaceWatch are currently facing a legal challenge from an anti-knife crime activist named Shaun Thompson, who was falsely identified by their live facial recognition systems, wrongly stopped by police officers, and threatened with arrest if he did not provide his fingerprints to prove that he was not the suspect from the police watchlist.
These outcomes aren’t minor glitches. They stem from fundamental flaws: AI-based systems absorb the biases and prejudices of our society, and this particular software is built on technology that was primarily designed to work with white people’s faces.
What would the use of this technology look like in Scotland, where African, Caribbean and Black groups are already stopped and searched by police officers at twice the rate that would be expected from their share of the population?
Due to concerns like these, the Scottish Parliament’s Justice Sub-Committee on Policing firmly rejected the use of live facial recognition by the police in 2020, deeming it unjustifiable to invest in technology known to be discriminatory and highly inaccurate.
It is therefore concerning that, just four years later, Police Scotland and the Scottish Police Authority have rekindled their efforts to introduce this invasive and harmful technology, and have made no mention of the Scottish Parliament’s judgment in this National Conversation on Live Facial Recognition.
A misguided approach to risk and evidence
Among our broader criticisms of the police using LFR technology, CRER strongly believes that the manner in which this ‘National Conversation’ has been conducted has undermined a genuine opportunity for public scrutiny and debate.
At the heart of this lies the framing of the National Conversation, including whose views have been given a platform, how they have been presented, and whether the public has been given a proper opportunity to meaningfully consider the true risks, consequences and alternatives to the use of LFR.
For example, before participating in the National Conversation, organisations were sent and encouraged to read a pack of briefing materials from the Scottish Police Authority. In their words, these materials aim to “provide a summary of the available information on the use of LFR in a policing context”, and the SPA “would be very grateful if you could familiarise yourself with these papers prior to the focus group.”
However, CRER believes that these materials are deeply flawed and, at times, blatantly incorrect and misleading. This is because they often present LFR technology in an overly positive light by minimising or omitting evidence of harm and international criticism.
For example:
Despite directly quoting some critical remarks from Scottish Parliament’s Justice Sub-Committee on Policing, the briefing materials conveniently pass over the Sub-Committee’s conclusion “that there would be no justifiable basis for Police Scotland to invest in technology which is known to have in-built racial and gender bias, and unacceptably high levels of inaccuracy”
Despite highlighting several ‘success stories’ where the use of LFR technology had led to the arrest of criminals who would otherwise have evaded detection and justice on that particular day, the paper does not discuss any of the widely reported scenarios where the use of this technology has had negative consequences. This includes repeated controversies involving the misidentification of women, Black men and school children from Black backgrounds
The paper neglects to highlight the fact that many major technology firms have suspended their own facial recognition programmes due to concerns about racial bias and other ethical issues stemming from its use by law enforcement. This has included technology giants like IBM, Microsoft and Amazon
While the paper acknowledges the algorithmic bias of LFR technology (but suggests this is no longer a problem), it does not explore any views from those affected by this bias or what it looks like in practice. If this information were included, it would likely paint a less supportive view of LFR, as polling from London shows that BME respondents are more likely to express discomfort with biometric surveillance, with a third saying they would avoid places where LFR is in use
Perhaps the most glaring example, however, is how critical information regarding the poor real-world effectiveness of LFR technology was misrepresented and brushed aside.
In its discussion paper (Scottish Police Authority (2025), Discussion Paper on the Potential Adoption of Live Facial Recognition by Police Scotland; Internet Archive snapshot from 2nd May 2025), the Scottish Police Authority presented the technology as misidentifying only around one in ten people.
While a technology that misidentifies around one in ten people would still be a major cause for concern, a quick look at the original statements the paper cites shows something quite different, and a whole lot more concerning:
“Metropolitan Police’s facial recognition matches are 98% inaccurate, misidentifying 95 people at last year’s Notting Hill Carnival as criminals.
South Wales Police’s matches are 91% inaccurate.”
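To see why those two framings are not interchangeable, the purely illustrative worked example below (with invented numbers, not taken from any real deployment) shows the difference between the proportion of a system’s matches that are false positives and the proportion of everyone scanned who is misidentified.

```python
# Illustrative arithmetic only. The numbers are invented to show why
# "98% of matches were false positives" is not the same claim as
# "the technology misidentifies around one in ten people".
faces_scanned = 10_000   # everyone who walked past the camera
alerts_raised = 100      # faces the system flagged as watchlist 'matches'
correct_alerts = 2       # alerts that really were the person on the watchlist

false_alerts = alerts_raised - correct_alerts

# Accuracy of the matches themselves: the kind of figure the cited sources report.
print(f"False positives among matches: {false_alerts / alerts_raised:.0%}")             # 98%

# Share of everyone scanned who was misidentified: a much smaller-sounding number.
print(f"Share of all scanned faces misidentified: {false_alerts / faces_scanned:.0%}")  # 1%
```

Both figures describe the same hypothetical deployment, yet one sounds alarming and the other reassuring. Presenting the reassuring framing while citing sources that report the alarming one gives readers a misleading impression of how reliable the technology actually is.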
This erroneous reporting was flagged to the Scottish Police Authority during CRER’s input to a ‘national conversation’ roundtable, but two weeks later, at the deadline for survey responses, the error remained on the SPA website.
After CRER’s comments, this was corrected in an optional PDF version of the briefing materials, which was quietly re-uploaded on 25th April 2025 with the vague note that “This paper was updated in April 2025 with changes made to references on pages 10 and 11 of the paper.”
The website version of the paper was also eventually corrected on 6th May 2025, though these edits came after the deadline for survey responses had passed.
This means that many of those responding to the Scottish Police Authority’s survey would not have known that the materials shared with them contained highly inaccurate information, nor that LFR technology would be similarly inaccurate in its real-world use. If the public is only told one side of a story, or an entirely inaccurate one, how can they be expected to make an informed judgement?
Given that these “references on pages 10 and 11” distorted key evidence that could easily shape someone’s opinion on LFR technology, it's concerning that no communications were issued about this. There was no official withdrawal of the original paper, and no communications were sent to participants to raise awareness of the error and explain why the discussion paper was republished online.
Does this demonstrate proper accountability and a genuine effort to make amends for what was an unfortunately significant, but probably entirely innocent, mistake? Well, that’s up for public debate, but it’s a matter that should be taken incredibly seriously.
So, what happens next?
At the time of writing, we are only halfway through the National Conversation on Live Facial Recognition. While the Scottish Police Authority’s view-gathering survey has come to a close, a conference is still scheduled to take place in Edinburgh, where people and organisations can provide further input on the subject.
It’s been repeatedly stated that this National Conversation is simply a scoping exercise, from which Police Scotland will decide whether it should explore the introduction of LFR any further. But given the amount of time and resources that have clearly been invested in exploring the introduction of LFR since 2024, it is becoming increasingly concerning that we may see this deployed in Scotland sometime soon.
But among all this uncertainty, there are some things that we know to be true:
Live facial recognition technology is STILL discriminatory and racially biased, despite supporters’ efforts to downplay these concerns. This is a fundamental flaw with AI-based systems that cannot be meaningfully addressed through surface-level software tweaks
The implementation of LFR technology by Police Scotland would disproportionately interfere with people’s human rights and privacy, collecting biometric information at an unprecedented scale
The technology is simply not good enough to be reliable in real-world use. There are significant gaps between what the police think this technology can do and what it actually looks like in practice
There is no specific legislation or statutory duty in Scots law to safeguard citizens against the harmful use of LFR technology, leading to a significant accountability gap in how it would be regulated
As public bodies, Police Scotland and the Scottish Police Authority must pay due regard to eliminating discrimination, fostering good relations and promoting equality of opportunity. It is unlikely that the use of racist LFR technology would be compliant with these legal duties, or with their commitments to becoming ‘an anti-racist service’
Facial recognition technology may offer the illusion of security, but its real-world application tells a different story – one of racial injustice, creeping surveillance, and democratic erosion.
Let’s ensure that Scotland remains a place where rights are protected, Black and minority ethnic groups are respected, and justice is not sacrificed in the name of convenience.
Say no to live facial recognition in Scottish policing.
To learn more about why the Coalition for Racial Equality and Rights disagrees with the police use of live facial recognition technology in Scotland, please see our written submission to the Scottish Police Authority’s CitizenSpace survey.