Google Granted Special Status by Watchdog to Enforce Changes in UK Search Practices

Google faces mandatory changes to its search operations in the UK, after the competition regulator granted the company a special status and imposed stricter regulation.

The Competition and Markets Authority (CMA) has confirmed that Google holds “strategic market status” (SMS) in both search and search advertising. This classification indicates that the company wields sufficient market power to necessitate a unique regulatory framework.

The regulator now has the authority to mandate changes to how Google conducts business in these sectors under new digital markets legislation. Friday’s announcement marks the first time a tech company has received an SMS designation.

The CMA has already indicated several potential changes, such as providing internet users with the option to select a different search service through a “choice screen.” This could include AI-driven competitors like Perplexity and ChatGPT among the available options.


The CMA is also looking to ensure equitable ranking of search results and to give publishers greater control over how their content is used, including in AI-generated responses. Google’s AI-powered features such as AI Overviews and AI Mode are also covered by the SMS designation.

The CMA clarified that its ruling does not imply any wrongdoing and that no immediate actions will be enforced. However, this year it intends to initiate discussions regarding potential alterations to Google’s operations.

Will Hayter, executive director of digital markets at the CMA, asserted that enhancing competition in realms like search and search advertising—which involves advertisers paying to appear in users’ search results—could foster new business opportunities and stimulate investment throughout the UK economy.

He stated: “Over 90% of searches in the UK are executed on Google’s platform, underscoring Google’s continued strategic role in search and search advertising.” He added, “In response to the feedback we received following our proposed decision, we have today designated Google’s search service with strategic market status.”

Oliver Bethel, Google’s senior director of competition, expressed concerns that this decision might jeopardize UK users’ access to emerging products and services.

He commented: “Several of the intervention ideas proposed in this process could hinder innovation and growth in the UK, potentially delaying product introductions at a time when AI-based advancements are rapidly progressing.”


Tom Smith, a competition lawyer at Geradin Partners and former CMA legal director, noted that there is a substantial case against Google.

He remarked: “There exists a clear basis for eliminating some of the market distortions caused by Google’s monopolistic stance. This has already been addressed in the US and EU. Today’s ruling empowers the CMA to take similar action.”

In a separate inquiry, the CMA is evaluating whether Google’s and Apple’s mobile platforms should also receive SMS designations under the new digital regulatory framework established by the Digital Markets, Competition and Consumers Act 2024.

Source: www.theguardian.com

Meta Approved Crowdfunding Ads for IDF Drones, Consumer Watchdog Reveals

Meta serves ads on Facebook, Instagram, and Threads from pro-Israel organizations soliciting funds for military assets, including drones and tactical gear for Israel Defense Forces (IDF) battalions.

“We are Sheikh’s sniper team stationed in Gaza. We require a tripod to fulfill our mission at Jabaria,” states one Facebook ad that was first posted on June 11 and remains active as of July 17.

These sponsored advertisements were first uncovered and reported to Meta by Ekō, a global consumer watchdog, which has identified at least 117 ads since March 2025 that specifically requested donations for IDF military equipment. This is the second time the organization has flagged ads from these publishers to Meta: in a prior assessment from December 2024, Ekō flagged 98 ads and urged the tech giant to act against many of them. Since then, however, the company has largely permitted the publishers to launch new campaigns with similar ads. The IDF itself has not made any public appeals for funding.

“This proves that Meta essentially accepts funding from anyone,” remarked Maen Hamad, a campaigner with Ekō. “There appears to be minimal balance in the oversight that platforms are supposed to provide. If that’s the case, those measures are only implemented after the fact.”

In response, Ryan Daniels, a spokesperson for the social media company, stated that Meta reviewed and removed ads violating company policy after receiving inquiries. Any advertisement related to social issues, elections, or politics must undergo an approval process and carry a disclaimer disclosing who paid for it, according to the company. These particular ads, however, lacked that disclaimer.

These ads garnered at least 76,000 impressions (the number of times an ad is shown to users) within the EU and the UK alone. The group was unable to ascertain the number of impressions in the US.


At least 97 recent advertisements solicit donations for specific models of civilian drones, and many remain active. A recent investigation by +972 Magazine found that Israeli combat units use such drones to drop explosives on Palestinians. Although these quadcopters can be found on Amazon, IDF units often modify civilian drones, primarily produced by the Chinese company Autel and sourced via Facebook groups, at a fraction of the cost of military-grade drones, several IDF soldiers told +972 anonymously.

“Most of our drones are damaged and in disrepair. We have no replacements,” another ad states. “Donate now. Every second counts and every drone can save lives.”

It remains unclear whether these combat units use the funds raised through these specific ads to purchase drones, but soldiers told +972 that they have received inexpensive Autel drones through donations and fundraisers organized via Facebook groups.

Funding advertisements from Vaad Hatzedaka, one of the publishers flagged by Ekō, link to a donation webpage detailing the equipment being funded, which includes two Autel drones. Vaad Hatzedaka, a nonprofit organization, has set a fundraising target of $300,000 and has already secured over $250,000 for these drones and other assistance for various IDF units, according to the donation page. The second publisher, Mayer Malik, is an Israel-based singer-songwriter who has run ads directing to a landing page offering sponsorship avenues for various tactical gear, raising more than $2.2 million in total donations for the IDF.

Meta’s advertising policy strictly prohibits the promotion of donation requests for “firearms, firearm parts, ammunition, explosives, or lethal enhancements,” with limited exceptions. Meta has removed some recent ads and associated funding requests for military resources that were flagged earlier, primarily due to the absence of necessary disclaimers accompanying the ads. Ads about social issues, elections, or politics are subject to disclosure requirements, as stated in Meta’s Ad Library.

According to Ekō, these advertisements may also breach certain provisions of the EU’s Digital Services Act (DSA). Under the DSA, platforms like Meta are required to remove content that contravenes national or EU legislation. In France and the UK, laws restrict how charities can raise funds for and otherwise support foreign military entities. For instance, in January 2025, the Charity Commission in the UK issued an official warning to a London charity that raised funds for IDF soldiers, stating that this was “neither legal nor acceptable.”

Source: www.theguardian.com

Surge in AI-Generated Child Exploitation Videos Online, Reports Watchdog

The volume of online videos depicting child sexual abuse created with artificial intelligence has surged, as pedophiles exploit rapid advances in the technology.

According to the Internet Watch Foundation (IWF), AI-generated abuse videos have crossed a critical threshold, nearing the point of being indistinguishable from “real imagery,” with a notable increase observed this year.

In the first half of 2025, the UK-based internet safety watchdog verified 1,286 AI-generated videos containing illegal child sexual abuse material (CSAM), a sharp increase from just two during the same period last year.

The IWF reported that over 1,000 of these videos fall under Category A abuse, the most severe classification of such material.

The organization indicated that billions have been invested in AI, leading to a widely accessible video generation model that pedophiles are exploiting.

“It’s a highly competitive industry with substantial financial incentives, unfortunately giving perpetrators numerous options,” stated an IWF analyst.

This video surge is part of a 400% rise in URLs associated with AI-generated child sexual abuse content in the first half of 2025, with IWF receiving reports of 210 such URLs compared to 42 last year.

The IWF discovered one post on a dark web forum in which a user noted how rapidly AI was improving and how quickly pedophiles had adapted AI tools to keep pace with those developments.

IWF analysts observed that the images seem to be created by utilizing free, basic AI models and “fine-tuning” these models with CSAM to produce realistic videos. In some instances, this fine-tuning involved a limited number of CSAM videos, according to IWF.

The most lifelike AI-generated abuse videos encountered this year were based on real victims, the watchdog reported.

Interim CEO of IWF, Derek Ray-Hill, remarked that the rapid advancement of AI models, their broad accessibility, and their adaptability for criminal purposes could lead to a massive proliferation of AI-generated CSAM online.

“The risk of AI-generated CSAM is astonishing, leading to a potential flood that could overwhelm the clear web,” he stated, cautioning that the rise of such content might encourage criminal activities like child trafficking and modern slavery.

The replication of existing victims of sexual abuse in AI-generated images allows pedophiles to significantly increase the volume of CSAM online without having to exploit new victims, he added.

The UK government is intensifying efforts to combat AI-generated CSAM by criminalizing the possession, creation, or distribution of AI tools designed to produce abusive content. Those found guilty under the new law may face up to five years in prison.

Additionally, it is now illegal to possess manuals that instruct potential offenders on how to use AI tools for creating abusive images or for child abuse. Offenders could face up to three years in prison.

In a February announcement, Home Secretary Yvette Cooper stated, “It is crucial to address child sexual abuse online, not just offline.”

AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalizes the taking, distribution, and possession of indecent photographs or pseudo-photographs of children.

Source: www.theguardian.com

As Watchdog Acts, Google May Be Required to Alter UK Search Practices

Google may be compelled to implement a range of modifications in its search operations, including allowing internet users to select alternative services, following suggestions from the UK’s competition regulator to strengthen regulations on the company.

The Competition and Markets Authority (CMA) is set to classify the leading search engine as having “strategic market status,” a designation that empowers regulators to impose stricter controls on major tech firms deemed to hold substantial market influence.

The CMA said it intends to introduce tailored regulatory measures for the US company, which may include offering users a “choice screen,” ensuring fair ranking of search results, and giving publishers more control over how their content is used, including in AI-generated responses.


Should the CMA finalize its decision in October, Google will be the first company subjected to new regulatory powers established this year.

CMA chief executive Sarah Cardell highlighted that the announcement marks a “major milestone” for the new regulatory framework established by recent digital markets, competition, and consumer legislation.

Cardell remarked, “These proportionate measures will create greater opportunities for UK businesses and consumers, providing them with more choices and control over their engagement with Google’s search services, as well as fostering innovation within the UK’s tech industry and the economy at large.”

Google has stated that this move could significantly impact businesses and consumers in the UK.


“We are worried that the scope of the CMA’s considerations is broad and unfocused, and that various interventions are being contemplated before sufficient evidence has been collected,” stated Oliver Bethell, senior director at Google.

Source: www.theguardian.com

Former Amazon UK Boss’s Appointment to Lead Competition Watchdog a “Slap in the Face,” Says Trade Union

Trade unions and consumer activists have criticized the appointment of Amazon’s former chief executive as the head of Britain’s competition watchdog, calling it a “slap in the face to workers” and “Trumpian.” The government hired Doug Gurr, former Amazon UK and China boss, to chair the Competition and Markets Authority (CMA), leading to accusations of favoritism towards big tech.

Business minister Justin Madders defended the decision, stating that it was aimed at boosting economic growth. Gurr replaces Marcus Bokkerink and will serve as interim chair for up to 18 months. The CMA will focus on investigating technology companies under the new digital markets competition regime to increase competition.

Critics like GMB national secretary Andy Prendergast and campaigner Rob Harrison have raised concerns about Gurr’s ties to Amazon and the potential bias in regulating technology monopolies. However, government officials maintain that the CMA will uphold its operational independence and protect consumer interests.

Amazon, known for its dominance in online sales, has faced criticism for its treatment of workers and market practices. The company has pledged to ensure worker rights and dignity. The appointment of Gurr has sparked debates over conflict of interest and regulatory oversight of tech giants like Amazon, Google, and Facebook.

Antitrust watchdogs and consumer groups have expressed concerns about the impact of Gurr’s appointment on economic growth and innovation. The Open Markets Institute criticized the move as a strategic failure that could harm the UK’s competitiveness in the tech sector.

Despite the backlash, government officials defend the decision, stating that it is necessary to balance consumer protection and growth. Gurr’s background as an Amazon executive has raised questions about his ability to regulate the tech industry effectively.

Gurr’s appointment comes after disagreements over the CMA’s approach to growth, leading to the replacement of Bokkerink. Regulators such as Nikhil Rathi of the Financial Conduct Authority have emphasized that they are acting on government directives to ensure compliance and customer protection.

The CMA and Gurr have been approached for comment on the matter. Additional reporting by Kalyeena Makortoff and Sarah Butler.

Source: www.theguardian.com

UK watchdog examining Google’s search dominance

Google is currently under investigation by Britain’s competition watchdog regarding the effects of its search and advertising practices on consumers, news publishers, businesses, and other search engines.

The Competition and Markets Authority reports that Google accounts for more than 90% of general searches in the UK.

The CMA estimates that search advertising costs UK households nearly £500 annually, but competition can help lower this cost.


The CMA has announced an investigation to determine if Google is hindering competition in the market and engaging in potentially exploitative practices, such as collecting large amounts of consumer data without informed consent.

Additionally, the investigation will assess if Google is unfairly promoting its shopping and travel services using its dominant search engine position.

The investigation is expected to last up to nine months, and could result in Google being required to share data with other companies and to give publishers more control over their content.

This investigation marks the first under the new digital market competition regulations in the UK, enabling authorities to impose conduct requirements on technology companies.

Tensions over the regulation of US tech companies have been rising in the run-up to President Trump’s inauguration. Meta founder Mark Zuckerberg criticized European laws and expressed intentions to work with the new US administration to protect American companies.

British Prime Minister Keir Starmer has plans to integrate AI into the UK economy and establish partnerships with AI companies with a pro-growth approach to regulation.

The EU is reportedly reevaluating its investigations into US tech giants, including Google, Meta, and Apple, under digital market regulations, potentially altering the scope of the probes.


The CMA’s investigation will examine the impact of Google’s search, advertising platform, and AI assistant.

CMA chief executive Sarah Cardell emphasized the importance of fair competition and consumer rights in search services and data privacy.

Google has responded by stating that search is crucial for economic growth and they will collaborate with the CMA to ensure compliance with new regulations.

Source: www.theguardian.com

Watchdog accuses Google of employing anti-competitive tactics in UK ad market

Britain’s competition watchdog has accused Google of anti-competitive behavior in the market for buying and selling advertising on websites, following similar investigations in the US and EU.

The Competition and Markets Authority (CMA) said it had found that Google had “abused its dominant position” in online advertising, to the detriment of thousands of UK publishers and advertisers.

The CMA said that while the majority of publishers and advertisers use Google’s advertising technology services to bid for and sell advertising space, Google is preventing its rivals from offering a competitive alternative.

Regulators are focusing on Google’s role in three areas: owning two tools for buying ad space, running an advertising platform that allows publishers to manage their ad space online, and managing AdX, an ad exchange that brings together advertisers and publishers, matching buyers and sellers much as a stock exchange does.

“The CMA is concerned that Google is actively using its dominance in this sector to favor its own services,” the watchdog said. “Google is putting competitors at a disadvantage and preventing them from competing on a level playing field to offer publishers and advertisers better, more competitive services that will help them grow their businesses.”

In its interim findings published on Friday, the CMA found that Google abused its dominant market position by using its own buying tools and inventory tools for publishers to bolster its own ad trading position and protect it from competition since 2015. The CMA also alleged that Google blocked rival ad inventory tools (called publisher ad servers) from effectively competing with its own product, DoubleClick for Publishers.

The CMA will consider Google’s response before making a final decision.

Regulators can impose fines of up to 10% of a company’s global turnover depending on the severity of the violations, and can also issue legally binding directions to end the violations.

In a statement, Google said the CMA’s arguments were “flawed”.

“Our ad tech tools help websites and apps fund their content and help businesses of all sizes effectively reach new customers,” said Dan Taylor, Google’s vice president of global advertising. “At the heart of this case is a misinterpretation of the ad tech sector. We disagree with the CMA’s position and will respond accordingly.”

The U.S. Department of Justice and the European Commission are also investigating Google’s ad tech activities: In June 2023, EU regulators said Google may have to sell parts of its ad tech business to address concerns, while the U.S. Department of Justice is set to accuse Google in court on Monday of monopolizing the ad tech market.

Last month, a federal court ruled that Google was illegally monopolizing the internet search market, a decision that could lead to a partial breakup of the company’s business.

Source: www.theguardian.com

Apple accused by UK watchdog of failing to report child sexual abuse images

Child safety experts have claimed that Apple lacks effective monitoring and scanning protocols for child sexual abuse materials on its platforms, posing concerns about addressing the increasing amount of such content associated with artificial intelligence.

The National Society for the Prevention of Cruelty to Children (NSPCC) in the UK has criticized Apple for underestimating the prevalence of child sexual abuse material (CSAM) on its products. Data obtained by the NSPCC from the police shows that perpetrators in England and Wales used Apple’s iCloud, iMessage, and FaceTime to store and share CSAM in more cases than Apple reported across all other countries combined.

Based on information collected through a Freedom of Information request and shared exclusively with The Guardian, child protection organizations discovered that Apple was linked to 337 recorded offenses of child abuse imagery in England and Wales between April 2022 and March 2023. In 2023, Apple reported only 267 suspected instances of child abuse imagery worldwide to the National Center for Missing & Exploited Children (NCMEC), contrasting with far higher numbers from other leading tech companies: Google submitted over 1.47 million reports and Meta more than 30.6 million, according to NCMEC’s annual report.

All US-based technology companies are mandated to report any detected cases of CSAM on their platforms to the NCMEC. Apple’s iMessage service is encrypted, preventing Apple from viewing user messages, similar to Meta’s WhatsApp, which reported about 1.4 million suspected CSAM cases to the NCMEC in 2023.

Richard Collard, head of child safety online policy at NSPCC, expressed concern over Apple’s discrepancy in handling child abuse images and urged the company to prioritize safety and comply with online safety legislation in the UK.

Apple declined to comment but referenced a statement from August where it decided against implementing a program to scan iCloud photos for CSAM, citing user privacy and security as top priorities.

In late 2022, Apple abandoned plans for an iCloud photo scanning tool called Neural Match, which would have compared uploaded images to a database of known child abuse images. This decision faced opposition from digital rights groups and child safety advocates.

Experts are worried about Apple’s AI system, Apple Intelligence, introduced in June, especially as AI-generated child abuse content poses risks to children and law enforcement’s ability to protect them.

Child safety advocates are concerned about the increase in AI-generated CSAM reports and the potential harm caused by such images to survivors and victims of child abuse.

Sarah Gardner, CEO of Heat Initiative, criticized Apple’s insufficient efforts in detecting CSAM and urged the company to enhance its safety measures.

Child safety experts worry about the implications of Apple’s AI technology on the safety of children and the prevalence of CSAM online.

Source: www.theguardian.com

UK watchdog may thwart big tech companies’ ambitions for AI dominance.

“Monopoly is the condition of every successful business,” said Peter Thiel, Silicon Valley’s answer to Darth Vader. That aspiration is widely shared by the Valley giants Gamman (a handy acronym for Google, Apple, Microsoft, Meta, Amazon, and Nvidia). And with the advent of AI, each one’s desire to reach that blessed state before the others get there is even greater.

One sign of their anxiety is that they are spending insane amounts of money on the 70-odd generative AI startups that have proliferated since it became clear that AI was going to be the next big thing. Microsoft, for example, has reportedly invested $13bn (about £10.4bn) in OpenAI, and led a $1.3bn funding round for Inflection, the startup founded by DeepMind co-founder Mustafa Suleyman. Amazon invested $4bn in Anthropic, a startup founded by refugees from OpenAI; Google invested $500m in the same business, pledged a further $1.5bn, and has invested undisclosed amounts in AI21 Labs and Hugging Face. (Yes, I know the name doesn’t mean anything.) Microsoft has also invested in the French AI startup Mistral. And so on. Of the $27bn invested in AI startups in 2023, only $9bn came from venture capital firms – until recently by far the largest funders of emerging technology companies in Silicon Valley.

What’s happening? After all, the big tech companies have their own “foundation” AI models and don’t need what the smaller companies have built or are building. And then the penny drops: we’ve seen this strategy before. An incumbent spots and captures potential competitors at an early stage. Google acquired YouTube in 2006, for example. Facebook acquired Instagram for $1bn in 2012, when it had only 13 employees, and WhatsApp in 2014 (for $19bn, which seemed an exorbitant amount at the time).

With the 20/20 vision of hindsight, we now see that these were all anti-competitive acquisitions that should have been resisted at the time and were not. That’s why it’s so refreshing to know that at least one regulator, the UK’s Competition and Markets Authority (CMA), seems determined to learn from its history.

In a speech to a gathering of American antitrust lawyers in Washington just over a week ago, CMA chief executive Sarah Cardell announced that the regulator had decided to ensure that the market for foundation AI models is underpinned by fair, open, and effective competition and strong consumer protection. Her concern is that the growing presence of a few large incumbents across the AI value chain (the series of steps required to turn inputs into usable outputs) could undermine competition, limit companies’ options, and shape these markets in ways that degrade quality for businesses and consumers.

She cited three major risks to competition. First, companies that control critical inputs for developing foundation models may restrict access to protect themselves from competition. Second, powerful incumbents may exploit their positions in consumer and business markets to limit competition in model deployment and thereby distort choice. Third, partnerships between key players could reinforce or extend existing market power across the value chain.

She also warned that the CMA would act to assess and mitigate competition risks from new technologies through its formidable investigatory powers, including merger control reviews, market investigations, and possible designations under the new digital competition laws.

It was truly striking to hear a major regulator speak like this about the technology industry. Cardell suggested that the CMA believes in being proactive rather than waiting for problems to arise before acting: in an industry famous for moving fast and breaking things, it would try to stay ahead of the big players rather than lag behind them. She said the CMA is already preparing for this task, drawing on what it has learned so far from adapting to technology platforms. Rather than focusing only on individual parts of the chain, from model development to deployment, she aims to look at the entire AI value chain holistically. The regulator also plans to use its merger review powers more aggressively to assess the impact of alliances and AI investments on competition.

Isn’t that exciting? In some ways it is no surprise, as the CMA is one of the few British institutions that seems able to use post-Brexit freedoms as an opportunity for creativity and innovation. And bigwigs tempted to dismiss Cardell’s speech as mere fiery rhetoric should reflect on the CMA’s recent track record: its thorough investigation of Microsoft’s acquisition of Activision Blizzard, for example, or how it forced Meta to sell Giphy, an online database and search engine that allows users to find and share animated GIF files. Cardell may be lower profile than her US counterpart at the FTC, Lina Khan, but it’s clear she means business. Would-be monopolists should beware.

Source: www.theguardian.com

Terrorism watchdog slams WhatsApp for allowing UK users as young as 13

Mark Zuckerberg’s Meta has drawn criticism from Britain’s terrorism watchdog for reducing the minimum age for WhatsApp users from 16 to 13, a move seen as “unprecedented” that is expected to expose more teenagers to extremist content.

Jonathan Hall KC expressed concerns about the increased access to unregulated content, such as terrorism and sexual exploitation material, that Meta may not be able to monitor.


Jonathan Hall described the decision as “unusual”.

According to Mr. Hall, WhatsApp’s use of end-to-end encryption has made it difficult for Meta to remove harmful content, contributing to younger users’ exposure to unregulated material.

He highlighted the vulnerability of children to terrorist content, especially following a spike in arrests among minors. This exposure may lead vulnerable children to adopt extremist ideologies.

WhatsApp implemented the age adjustment in the UK and EU in February, aligning with global standards and implementing additional safeguards.

Despite the platform’s intentions, child safety advocates criticized the move, citing a growing need for tech companies to prioritize child protection.

The debate over end-to-end encryption and illegal content on messaging platforms has sparked discussions on online safety regulations, with authorities like Ofcom exploring ways to address these challenges.


The government has clarified that any intervention by Ofcom regarding content scanning must meet privacy and accuracy standards and be technically feasible.

In a related development, Meta announced plans to introduce end-to-end encryption to Messenger and is expected to extend this feature to Instagram.

Source: www.theguardian.com

Leisure centers abandon biometric monitoring of staff as UK data watchdog cracks down

Numerous companies, including a national leisure center chain, are reassessing or discontinuing the use of facial recognition technology and fingerprint scanning for monitoring employee attendance in response to actions taken by Britain’s data authority.

The Information Commissioner’s Office (ICO) instructed a Serco subsidiary to halt the use of biometrics for tracking employee attendance at its leisure centers and prohibited the use of facial recognition and fingerprint scans. The ICO also issued stricter guidelines.

Following an investigation, the ICO found that more than 2,000 employees’ biometric data was unlawfully processed at 38 Serco-managed centers using facial recognition and, in two instances, fingerprint scanning to monitor attendance.

In response, Serco has been given a three-month deadline by the ICO to ensure compliance with regulations and has committed to achieving full compliance within that timeframe.

Other leisure center operators and businesses are also reevaluating or discontinuing the use of similar biometric technology for employee attendance monitoring in light of the ICO’s actions.

Virgin Active, a leisure club operator, announced the removal of biometric scanners from 32 properties and is exploring alternatives for staff monitoring.

Ian Hogg, CEO of Shopworks, a provider of biometric technology to Serco and other companies, highlighted the ICO’s role in assisting businesses in various industries to meet new standards for biometric authentication.

The new ICO standards emphasize exploring alternative options to biometrics for achieving statutory objectives, prompting companies to reconsider their use of such technology.

1Life, owned by Parkwood Leisure, is in the process of removing the Shopworks system from all sites, clarifying that it was not used for biometric purposes.

Continuing discussions with stakeholders, the ICO aims to guide appropriate use of facial recognition and biometric technology in compliance with regulations and best practices.

The widespread concerns raised by the ICO’s actions underscore the need for stronger regulations to protect employees from invasive surveillance technologies in the workplace.

The case of an Uber Eats driver facing issues with facial recognition checks highlights ongoing debates about the use of artificial intelligence in employment relationships and the need for transparent consultation processes.


Emphasizing the importance of respecting workers’ rights, the use of artificial intelligence in employment must be carefully regulated to prevent discriminatory practices and ensure fair treatment of employees.

Source: www.theguardian.com