Leisure centers abandon biometric monitoring of staff as UK data watchdog cracks down

Numerous companies, including a national leisure center chain, are reassessing or discontinuing the use of facial recognition technology and fingerprint scanning for monitoring employee attendance in response to actions taken by Britain’s data authority.

The Information Commissioner’s Office (ICO) ordered a Serco subsidiary to stop using facial recognition and fingerprint scanning to track employee attendance at its leisure centers, and issued stricter guidance on biometric monitoring.

Following an investigation, the ICO found that more than 2,000 employees’ biometric data was unlawfully processed at 38 Serco-managed centers using facial recognition and, in two instances, fingerprint scanning to monitor attendance.

In response, Serco has been given a three-month deadline by the ICO to ensure compliance with regulations and has committed to achieving full compliance within that timeframe.

Other leisure center operators and businesses are also reevaluating or discontinuing the use of similar biometric technology for employee attendance monitoring in light of the ICO’s actions.

Virgin Active, a leisure club operator, announced the removal of biometric scanners from 32 properties and is exploring alternatives for staff monitoring.

Ian Hogg, CEO of Shopworks, a provider of biometric technology to Serco and other companies, highlighted the ICO’s role in assisting businesses in various industries to meet new standards for biometric authentication.

The new ICO standards emphasize exploring alternative options to biometrics for achieving statutory objectives, prompting companies to reconsider their use of such technology.

1Life, owned by Parkwood Leisure, is in the process of removing the Shopworks system from all sites, clarifying that it was not used for biometric purposes.

Continuing discussions with stakeholders, the ICO aims to guide appropriate use of facial recognition and biometric technology in compliance with regulations and best practices.

The widespread concerns raised by the ICO’s actions underscore the need for stronger regulations to protect employees from invasive surveillance technologies in the workplace.

The case of an Uber Eats driver facing issues with facial recognition checks highlights ongoing debates about the use of artificial intelligence in employment relationships and the need for transparent consultation processes.


Emphasizing the importance of respecting workers’ rights, critics argue that the use of artificial intelligence in employment must be carefully regulated to prevent discriminatory practices and ensure fair treatment of employees.

Source: www.theguardian.com

Meta cracks down on deceptive content by pushing for labeling of all AI images on Instagram and Facebook

Meta is working to identify and label AI-generated images on Facebook, Instagram, and Threads, and says it will seek to expose “people and organizations that actively seek to deceive the public.”

Images created using Meta’s AI image tools are already labeled as AI, but Nick Clegg, the company’s global president, stated in a blog post on Tuesday that Meta will also begin labeling images generated by rival companies’ AI services.

Meta’s AI images already carry metadata and an invisible watermark indicating that the image was created by AI. According to Clegg, the company has partnered with Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock to develop common standards for identifying images produced by their AI generators.
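The metadata-based labeling Clegg describes can be illustrated with a small sketch. This is not Meta’s actual detection pipeline; it assumes only that a generator embeds the IPTC DigitalSourceType value “trainedAlgorithmicMedia” (the standard marker for AI-generated media) in a PNG text chunk, which is one common place such provenance marks are stored.

```python
import struct

# Hedged sketch, not Meta's real detector: walk a PNG file's chunks and look
# for the IPTC DigitalSourceType value "trainedAlgorithmicMedia" in tEXt
# metadata, a marker some generators embed to flag AI-generated output.
PNG_SIG = b"\x89PNG\r\n\x1a\n"
AI_MARKER = "trainedAlgorithmicMedia"

def png_text_chunks(path):
    """Yield (keyword, value) pairs from a PNG file's tEXt chunks."""
    with open(path, "rb") as f:
        if f.read(8) != PNG_SIG:
            raise ValueError("not a PNG file")
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            body = f.read(length)
            f.read(4)  # skip the chunk's CRC
            if ctype == b"tEXt":
                key, _, val = body.partition(b"\x00")
                yield key.decode("latin-1"), val.decode("latin-1")
            if ctype == b"IEND":
                break

def looks_ai_generated(path):
    """True if any tEXt value carries the AI-provenance marker."""
    return any(AI_MARKER in value for _, value in png_text_chunks(path))
```

A production check would also inspect EXIF/XMP fields and C2PA manifests and, as Clegg notes later, could not rely on metadata alone, since embedded markers are easily stripped.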

Clegg said, “As the line between human content and synthetic content becomes blurred, people want to know where the line is.”

He added, “As people encounter AI-generated content for the first time, our users appreciate transparency around this new technology. It’s important to let people know when something was created using AI.”

Image caption: A surfing llama or an AI? Labels for AI-generated content on Facebook.

Clegg mentioned that the labeling feature is being developed and will be rolled out to all languages in the coming months.

He also stated that the company will add more prominent labels on images, videos, or audio that are “digitally created or altered” and “have a particularly high risk of materially misleading the public.”

Additionally, the company is working on technology to automatically detect AI-generated content, even when the content lacks invisible markers or when those markers have been removed.

“This work is particularly important because the online space is likely to become increasingly hostile in the coming years,” Clegg said.

He concluded, “People and organizations actively trying to deceive people with AI-generated content will find ways to circumvent the safeguards in place to detect it. Our industry and society as a whole must continue to find ways to stay ahead of the curve.”

AI deepfakes have already become an issue in the US presidential election cycle, including an AI-generated robocall imitating Joe Biden that was used to discourage voters in the New Hampshire Democratic primary.

Australia’s Nine News also faced criticism after broadcasting an altered image of Victorian Animal Justice Party MP Georgie Purcell on the evening news; the edit, which the network attributed to Adobe’s AI image tools, exposed her midriff and altered her chest.

Source: www.theguardian.com