The Metropolitan Police's assertion that its use of live facial recognition (LFR) is free of bias is not supported by the research it cites, according to a prominent technology expert.
The Met plans its most high-profile deployment of LFR this bank holiday weekend, at the Notting Hill carnival in west London.
According to The Guardian, the technology will be used at two locations on the approaches to the carnival, and the force has insisted on deploying it despite the Equality and Human Rights Commission declaring its use unlawful.
The new claim comes from Professor Pete Fussey, who led the only independent academic review of the police's use of facial recognition. He reviewed the Met's LFR trials in 2018-19 and now advises law enforcement agencies in the UK and internationally on its use.
The Met contends that it has reformed its use of LFR, citing 2023 research it commissioned from the National Physical Laboratory (NPL) and claiming the system is now virtually free of bias. Fussey responded:
“An LFR system's sensitivity can be adjusted. The more sensitive the setting, the more individuals it detects, but also the greater the potential for bias by race, gender and age. A setting of zero is the most sensitive; one is the least.”
The NPL report identified bias at a sensitivity setting of 0.56, noting seven cases in which test participants were wrongly flagged as suspects, all of them from ethnic minority backgrounds.
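To make the threshold mechanics concrete, here is a minimal sketch (the scores, the five-entry watchlist, and the `flag_matches` helper are all hypothetical illustrations, not the Met's actual system): a match is flagged only when a face's similarity score clears the configured setting, so a lower setting flags more people.

```python
# Hypothetical sketch of a threshold-based face-match decision.
# A real LFR system computes a similarity score between a live face
# and each watchlist entry; a candidate match is flagged only if the
# score meets the configured setting (0.0 flags almost everything,
# 1.0 flags almost nothing).

def flag_matches(scores, threshold):
    """Return watchlist indices whose similarity score clears the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Illustrative scores for one passer-by against a five-entry watchlist.
scores = [0.31, 0.58, 0.66, 0.12, 0.72]

print(flag_matches(scores, 0.56))  # -> [1, 2, 4]
print(flag_matches(scores, 0.64))  # -> [2, 4]
```

At the lower setting of 0.56 the same passer-by triggers three candidate matches; at the Met's reported operational setting of 0.64, only two clear the bar.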
These findings came from a set of 178,000 images entered into the system, with 400 volunteers passing the cameras roughly 10 times each, giving about 4,000 opportunities for a correct recognition. The volunteers were mixed into an estimated crowd of more than 130,000 across four locations in London and one in Cardiff. The tests were carried out in clear weather over 34.5 hours, which Fussey noted was shorter than trials in some other countries where LFR is in use.
From this dataset, the report concluded that there was no statistically significant bias at settings above 0.6, a finding the Met has repeatedly cited to justify its continued use and expansion of LFR.
Fussey criticized this as insufficient to substantiate the Met's claims: “Leaders at the Metropolitan Police Service repeatedly claim their systems are independently tested for bias. An examination of this study shows the data was inadequate to support those claims.”
“The definitive conclusions publicly proclaimed by the Met rest on an analysis of just seven false matches, from a system scrutinizing the faces of millions of Londoners. Drawing broad conclusions from such a small sample is statistically weak.”
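Fussey's point about sample size can be illustrated with a quick calculation (an editorial sketch using a standard exact binomial bound, not an analysis from the NPL report): even if all seven observed false matches fall in one demographic group, the data remain consistent with a wide range of underlying rates.

```python
# Illustrative only: an exact (Clopper-Pearson) lower confidence bound
# for a proportion observed as n out of n. When all n trials fall one
# way, the lower end of a two-sided 95% interval is (0.025) ** (1/n).

def exact_lower_bound(n, tail=0.025):
    """Lower confidence bound for a proportion observed as n/n."""
    return tail ** (1.0 / n)

# Seven out of seven false matches from one group still leaves the
# true proportion anywhere from roughly 0.59 up to 1.0.
print(round(exact_lower_bound(7), 2))  # -> 0.59
```

In other words, seven events are too few to pin down an underlying rate with any precision, which is the core of the statistical objection.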
The Met currently operates LFR at a sensitivity setting of 0.64, a level at which, it says, the NPL study produced no erroneous matches.
Fussey said: “Their own research shows that false matches were not evaluated at the settings above 0.64 at which they claim there is no bias.”
“Few in the scientific community would suggest there is sufficient evidence to support these claims, drawn from such a small sample.”
Fussey added: “We clearly show that bias exists within the algorithm, but we argue it can be mitigated through appropriate adjustments to the system settings. The problem is that the system has not been thoroughly tested across those settings.”
Lindsay Chiswick, the Met's director of intelligence, dismissed Fussey's allegations: “This is a factual report from a globally renowned institution. The Met's position is grounded in the findings of an independent study,” she said.
“If you use LFR at a setting of 0.64, as we currently do, there is no statistically significant bias.”
“We sought research to pinpoint where potential bias lies within the algorithm and employed the results to mitigate that risk.”
“The findings show the extent to which the settings can be used to minimize bias, and we consistently operate well above that threshold.”
Warning signs will notify attendees at the Notting Hill carnival this weekend that LFR is in use. They will be placed near the van housing the cameras, which are linked to a database of suspects.
Authorities believe that using the technology at two sites on the approach to the carnival will act as a deterrent. At the carnival itself, police are prepared to use retrospective facial recognition to identify the perpetrators of any violence or assaults.
Fussey remarked: “Few question the police’s right to deploy technology for public safety, but oversight is crucial, and it must align with human rights standards.”
The Met says that since 2024 LFR has recorded a false-positive rate of one in every 33,000 cases. The exact number of faces scanned has not been disclosed, but it is believed to be in the hundreds of thousands.
There were 26 incorrect matches in 2024, with eight reported so far in 2025. The Met said these individuals were not arrested, because the decision to arrest rests with police officers who review the matches produced by the computer system.
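The reported figures can be sanity-checked with simple arithmetic (an editorial sketch; only the 1-in-33,000 rate and the 26 matches come from the article, while the 500,000-scan figure is purely illustrative):

```python
# Back-of-envelope check of the Met's stated false-positive rate.
SCANS_PER_FALSE_POSITIVE = 33_000  # "one in every 33,000 cases"

def expected_false_positives(scans):
    """Expected incorrect matches for a given number of face scans."""
    return scans / SCANS_PER_FALSE_POSITIVE

def implied_scans(false_positives):
    """Number of scans implied by an observed count of incorrect matches."""
    return false_positives * SCANS_PER_FALSE_POSITIVE

# 26 incorrect matches in 2024 would imply roughly 858,000 scans.
print(implied_scans(26))  # -> 858000

# A hypothetical 500,000 scans would yield about 15 false positives.
print(round(expected_false_positives(500_000), 1))  # -> 15.2
```

The implied scan count is at least broadly consistent with the article's statement that the number of scanned faces is believed to be in the hundreds of thousands.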
Ahead of the carnival, the Met arrested 100 individuals, recalled 21 to prison and banned 266 from attending. It also reported seizing 11 firearms and more than 40 knives.
Source: www.theguardian.com
