Home Office Acknowledges Issues with Facial Recognition Technology for Black and Asian Individuals

Ministers are under pressure to implement more robust safeguards for facial recognition technology, as the Home Office has acknowledged that it may mistakenly identify Black and Asian individuals more frequently than white people in certain contexts.

Recent tests conducted by the National Physical Laboratory (NPL) on how this technology functions within police national databases revealed that “some demographic groups are likely to be incorrectly included in search results,” according to the Home Office.

The Association of Police and Crime Commissioners (APCC) said the release of the NPL’s results “reveals concerning underlying bias” and urged caution over plans for a nationwide rollout.

These findings were made public on Thursday, shortly after Police Minister Sarah Jones characterized the technology as “the most significant advancement since DNA matching.”

Facial recognition technology analyzes individuals’ faces and cross-references the images against a watchlist of known or wanted criminals. It can be used to scan live footage of people passing in front of cameras, to match still images against wanted persons, or to help police identify individuals under surveillance.

Images of suspects can be compared against police, passport, or immigration databases to identify them and review their backgrounds.
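As a rough illustration of the matching step described above, the sketch below compares a probe face embedding against a watchlist using cosine similarity and a match threshold. Everything here is hypothetical: real police systems use proprietary vendor algorithms, and the embeddings, names, and threshold values are invented for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_watchlist(probe, watchlist, threshold):
    """Return watchlist entries whose similarity to the probe meets the
    threshold, best match first. A lower threshold returns more candidate
    matches and, inevitably, more false positives."""
    hits = [(name, cosine_similarity(probe, emb))
            for name, emb in watchlist.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)

# Hypothetical 128-dimensional embeddings, for illustration only.
rng = np.random.default_rng(0)
watchlist = {f"subject_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)

for t in (0.30, 0.15):
    n = len(search_watchlist(probe, watchlist, t))
    print(f"threshold={t}: {n} candidate matches")
```

The threshold is the “setting” at issue in the NPL findings below: lowering it widens the net, and the resulting false matches do not fall evenly across demographic groups.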

Analysts who evaluated the Police National Database’s retrospective facial recognition tool at lower match-threshold settings found that “white subjects exhibited a lower false positive identification rate (FPIR) (0.04%) compared to Asian subjects (4.0%) and Black subjects (5.5%).”

Further testing revealed that Black women experienced notably high false positives. “The FPIR for Black male subjects (0.4%) is lower than that for Black female subjects (9.9%),” the report detailed.
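To make those percentages concrete: the false positive identification rate is the share of searches for people who are not on the watchlist that nonetheless return a match. A minimal sketch, using illustrative counts chosen only to reproduce the rates quoted above (the NPL report gives percentages, not raw numbers):

```python
def fpir(false_positives: int, non_mated_searches: int) -> float:
    """False positive identification rate: the fraction of searches for
    subjects NOT on the watchlist that still return a match."""
    return false_positives / non_mated_searches

# Illustrative counts only, chosen to reproduce the quoted rates.
groups = {
    "white subjects":        fpir(4,   10_000),  # 0.04%
    "Asian subjects":        fpir(400, 10_000),  # 4.0%
    "Black subjects":        fpir(550, 10_000),  # 5.5%
    "Black female subjects": fpir(990, 10_000),  # 9.9%
}
for group, rate in groups.items():
    print(f"{group}: FPIR {rate:.2%}")
```

At equal search volumes, these rates imply roughly a hundredfold difference in wrongful matches between white and Asian subjects.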

The APCC said the findings reflected an underlying bias. “This indicates that, in certain scenarios, Black and Asian individuals are more prone to incorrect matches than their white counterparts. Although the terminology is technical, it is evident that this technology is being integrated into police operations without adequate safeguards,” the statement noted.

The statement, signed by APCC leaders Darryl Preston, Alison Rowe, John Tizard, and Chris Nelson, questioned why these findings were not disclosed sooner and shared with Black and Asian communities.

The statement concluded: “While there is no evidence of adverse effects in individual cases, this is due to chance rather than a systematic approach. System failures have been known about for a while, but the information was not conveyed to the communities impacted and key stakeholders.”

The government has initiated a 10-week public consultation aimed at enabling more frequent use of the technology. The public will be asked whether police should be permitted to go beyond police records and access additional databases, such as passport and driving license images, to track down criminals.

Civil servants are collaborating with police to create a new national facial recognition system that will house millions of images.


Charlie Welton, head of policy and campaigns at Liberty, stated: “The racial bias indicated by these statistics demonstrates that allowing police to utilize facial recognition without sufficient safeguards leads to actual negative consequences. There are pressing questions regarding how many individuals of color were wrongly identified in the thousands of monthly searches utilizing this biased algorithm and the ramifications it might have.”

“This report further underscores that this powerful and opaque technology cannot be deployed without substantial safeguards to protect all individuals, which includes genuine transparency and significant oversight. Governments must halt the accelerated rollout of facial recognition technology until protections are established that prioritize our rights, aligning with public expectations.”

Former cabinet minister David Davis expressed concern after police officials indicated that cameras could be installed at shopping centers, stadiums, and transport hubs to locate wanted criminals. He told the Daily Mail: “Big Brother, welcome to the UK. It is evident that the government is implementing this dystopian technology nationwide. There is no way such a significant measure should proceed without a comprehensive and detailed debate in the House of Commons.”

Officials argue that the technology is essential for apprehending serious criminals, asserting that there are manual safeguards embedded within police training, operational guidelines, and practices that require trained personnel to visually evaluate all potential matches derived from the Police National Database.

A Home Office representative said: “The Home Office takes these findings seriously and has already acted. The new algorithm has undergone independent testing and has shown no statistically significant bias. It will be subjected to further testing and evaluation early next year.”

“In light of the significance of this issue, we have asked the policing inspectorate and the Forensic Science Regulator to review the police use of facial recognition. They will evaluate the effectiveness of the mitigation measures, and the National Police Chiefs’ Council backs this initiative.”

Source: www.theguardian.com

UK Minister Acknowledges TikTok’s Appeal Yet Expresses ‘Genuine Concerns’

TikTok may offer “uplifting” content, but the UK government has concerns about how the app uses the data of millions of Britons and about its implications for UK-China relations, according to the technology secretary, who said those concerns are shaping the government’s approach to the video app.

After a US court upheld legislation that could result in TikTok being banned or sold in the US, Peter Kyle voiced worries about the platform’s handling of data. “I am genuinely concerned about their use of data in relation to ownership models,” he told the Guardian.

However, following President Donald Trump’s executive order pausing the US ban for 75 days, Kyle described TikTok as a “desirable product” that lets young people engage freely with different cultures and ideas. He emphasized the importance of exploring new things and of striking the right balance between the joy TikTok offers and concerns about potential Chinese propaganda.

A recent study from Rutgers University indicated that heavy users of TikTok in the US were around 50% more likely to hold pro-China attitudes. There are fears that the Chinese government could access the data collected by the app, and the study alleged that TikTok’s moderation algorithms removed content about alleged abuses by the Chinese Communist Party and suppressed anti-China material.

The study concluded that TikTok’s content aligns with the Chinese Communist Party’s goal of shaping favorable perceptions among young viewers, potentially influencing users through psychological manipulation. It described TikTok as a “flawed experiment.”

In response to these findings, Kyle urged caution when using TikTok, highlighting the presence of bias in editorial decisions made by various platforms and broadcasters. He emphasized the government’s commitment to monitoring social media trends and taking action if necessary to safeguard national security.

When asked about concerns regarding TikTok as a propaganda tool, Kyle stated that any actions taken by the government would be made public. He also mentioned being mindful of China’s relationships with other countries, clarifying that his comments were not specifically directed at China.

On the US ban, Kyle noted the potential risks associated with the Chinese version of the app, which could involve data collection and the dissemination of propaganda, and expressed concern about the implications.


A representative from TikTok emphasized that the UK app is operated by a UK-registered and regulated company, investing £10bn to ensure user data protection in the UK and Europe through independent monitoring and verification of data security.

TikTok has also said that the Chinese government holds no shares or ownership in ByteDance, its parent company, which is majority-owned by global institutional investors; the founder, Zhang Yiming, owns 20% of the company.

In 2018, Mr. Zhang posted a “self-confession” announcing the shutdown of an app due to content conflicting with core socialist values and failing to guide public opinion properly. Following criticism on state television, he acknowledged corporate weaknesses and the need for a better understanding and implementation of political theories promoted by Chinese Communist Party leader Xi Jinping.

Source: www.theguardian.com

Google CEO Acknowledges ‘Biased’ AI Tool’s Photo Diversity Caused Offense to Users

The CEO of Google expressed concern over some responses from the company’s Gemini artificial intelligence model, calling them “biased” and pointing to issues such as depicting German World War II soldiers as people of color. He described the bias as “totally unacceptable.”

In a memo to employees, Sundar Pichai acknowledged that images and text generated by modern AI tools were causing discomfort.

Social media users highlighted instances where Gemini’s image generator depicted historical figures, including the Pope, the Founding Fathers, and Vikings, with historically inaccurate ethnicities and genders. In response, Google suspended Gemini’s ability to generate images of people.

One example involved Gemini’s chatbot being asked whether Elon Musk or Hitler had the more negative impact on society and declining to give a definitive answer. Pichai addressed the issue, calling the responses offensive to users and indicative of bias.

AI-generated image of a Viking. Photograph: Google Gemini

Pichai stated that Google’s teams were working to address these issues and had already made significant progress. AI systems often generate biased responses because of problems in their training data, reflecting wider societal biases.

Gemini’s competitors are also working to address bias in AI models. New versions of image generators such as Dall-E prioritize diverse representation and aim to mitigate similar problems.

Google is committed to making structural changes and enhancing product guidelines to address biases. Pichai emphasized the importance of providing accurate and unbiased information to users.

Elon Musk criticized Google’s AI programs, pointing out the bias in generated images. Technology commentator Ben Thompson called for a shift in decision-making at Google to prioritize good product development.

The emergence of generative AI platforms like OpenAI’s ChatGPT presents a competitive landscape in AI development. Google’s Gemini AI chatbot, formerly known as Bard, offers paid subscriptions for enhanced AI capabilities.

Google DeepMind continues to innovate in AI, with breakthroughs like the AlphaFold program for predicting protein structures. The CEO of DeepMind acknowledged the need to improve diversity in AI-generated images.

Source: www.theguardian.com