Police Disrupt Gang Suspected of Smuggling Tens of Thousands of Stolen Phones Out of the UK

Police have disrupted an international network believed to be smuggling tens of thousands of stolen phones from the UK, marking their most significant effort against phone theft in London, according to law enforcement officials.

The criminal organization is thought to have smuggled as many as 40,000 stolen mobile phones from the UK to China in the past year, and police believe it handled up to 40% of all mobile phones stolen in the capital.

The police initiated Operation Echosteep in December 2024 after intercepting a shipment containing about 1,000 iPhones destined for Hong Kong at a warehouse located near Heathrow Airport.


According to police, nearly all the recovered phones had been reported stolen.

Authorities intercepted additional shipments and utilized forensic evidence from the packages to identify suspects.

A man apprehended at Heathrow with 10 stolen mobile phones on September 20 was subsequently charged with possession of stolen goods, the police unit reported.

During the investigation, officers also found two iPads, two laptops, and two Rolex watches.

Further investigation indicated that the same individual had made over 200 trips between London and Algeria in the past two years, according to police.

Three days later, two other men in their 30s were arrested in northeast London on suspicion of possessing stolen property.

Numerous mobile phones were discovered in vehicles, with approximately 2,000 additional devices located at properties linked to the suspects.

These individuals were subsequently charged and detained, police confirmed.

Additionally, two more men in their 30s were arrested on September 25 on allegations of money laundering and handling stolen goods.

Officers also seized several stolen devices during their search operations.

Police said that further investigations relating to one of the men were ongoing.


In total, officers have arrested 46 individuals over two weeks, including 11 arrests linked to a criminal gang involved in stealing new iPhone 17 handsets from delivery vans.

An additional 15 arrests were made last week on suspicions of theft, handling stolen goods, and conspiracy to commit theft, according to the Metropolitan Police.

More than 30 suspected stolen devices were also recovered during searches of 28 locations in London and Hertfordshire.

London Mayor Sadiq Khan thanked the police for “addressing concerns in London,” noting decreases of 13% and 14% in crime rates this year.

“This operation is undeniably the largest of its kind in British history, and it was humbling to witness the Met’s efforts in targeting leaders of international smuggling operations as well as street-level robbers,” Khan commented.

However, he urged the mobile phone industry to collaborate with law enforcement to make it challenging for smugglers to utilize stolen devices.

“Criminals are profiting millions by reusing stolen mobile phones and selling them abroad, granting others access to cloud services,” he remarked. “The current situation is simply too simple and too lucrative.

“We will persist in urging the mobile phone industry to take rapid action to prevent this crime by making it impossible to use stolen devices.”

“To effectively combat this issue and create a safer London for all, we require coordinated global action.”

“We are pleased to report that we have made significant progress in understanding the importance of these efforts,” stated Det Insp Mark Gavin, Senior Investigation Officer at Operation Echosteep.

Gavin highlighted that smugglers are particularly targeting Apple products because of their profitability overseas, with stolen handsets fetching up to £300 in the UK and selling for as much as $5,000 (£3,710) in China.

This increase in phone theft is mirrored in numerous cities globally, with around 80,000 devices reported stolen in London last year, according to the Met.

Commander Andrew Featherstone, the Met’s lead on phone theft, stated:

Source: www.theguardian.com

California Police Confounded After Being Unable to Ticket Driverless Car for Illegal U-Turn

If a vehicle makes an unlawful U-turn with no driver in the seat, can it still incur a fine? A California police department recently grappled with this intriguing question.

While conducting DUI enforcement, San Bruno officers encountered a self-driving car that performed an illegal U-turn with no one behind the wheel. In a post on Saturday, the San Bruno police department said officers carried out a traffic stop on the distinctive white vehicle, operated by Waymo, the leading autonomous car service in the San Francisco Bay Area.

“We couldn’t issue citations as there was no human operator (our guidelines do not cover ‘robots’),” the post stated.

The department alerted Waymo about the incident, expressing hope that future programming updates will help avoid similar violations.

In a response, Waymo affirmed that its autonomous system, known as the Waymo Driver, is “engineered to adhere to traffic laws.”

“We are evaluating this incident and remain dedicated to enhancing road safety through continuous learning and experience,” the statement sent to the Guardian read.


Last year, California Governor Gavin Newsom signed legislation allowing police to issue a “notice of violation” when an unmanned vehicle breaks traffic laws. The law takes effect in July 2026 and requires companies to maintain emergency communication lines for first responders.

The bill, introduced by San Francisco assembly member Phil Ting, came in response to multiple incidents in the city in which self-driving cars obstructed traffic, endangered pedestrians, or interfered with emergency responses.

The new law empowers first responders to direct companies to relocate self-driving cars away from an area, requiring them to respond within two minutes.

Addressing concerns that officers had been too lenient, San Bruno police noted that “there is a statute allowing officers to issue notifications to companies.”

Initially launched as a project under Google’s X research lab in 2009, Waymo cars operate using a combination of external cameras and sensors. The company has encountered its share of challenges, having to recall over 1,200 vehicles earlier this year due to software glitches that led to collisions with barriers and other stationary objects. The National Highway Traffic Safety Administration also opened an investigation last year after receiving reports of 22 incidents involving Waymo vehicles behaving erratically or breaching traffic safety laws.

Source: www.theguardian.com

Musk’s Grok AI Bot Misidentifies Footage of Police Clashes at London Far-Right Rally

The Metropolitan police were forced to correct inaccurate claims generated by artificial intelligence on Elon Musk’s X platform, which suggested that footage from Saturday’s far-right rally in the city dated from 2020.

Users on X had asked the Grok chatbot about the location and timing of footage showing police clashing with the crowd.


Grok, which has a history of providing inaccurate information, replied that “the footage appears to show a confrontation between police and protesters over restrictions on September 26, 2020, during an anti-lockdown demonstration at Trafalgar Square in London.”

The response was quickly amplified on X, with Daily Telegraph columnist Allison Pearson tweeting, “This aligns with my suspicions.”

The Met responded to her, clarifying that the footage was captured before 3pm at the junction of Whitehall and Horse Guards Avenue.

“It is clearly not Trafalgar Square, as suggested by the AI response you referenced. To eliminate confusion, we have provided labelled comparison images to verify the location,” the force added.

The exchange illustrates the challenges social media platforms pose for police. It came on a day when 26 officers were injured amid violence at a rally organized by far-right activist Tommy Robinson, which Elon Musk addressed remotely.

Musk faced criticism for his remarks, delivered to Robinson’s rally via live link. The billionaire told the audience that “violence is coming,” asserting, “You will either fight back or perish.”

Liberal Democrat leader Ed Davey stated: “Elon Musk incited violence on our streets yesterday. I hope that politicians from all parties unite in denouncing his deeply dangerous and irresponsible rhetoric.”

When queried by the BBC on Sunday about whether a tech billionaire was trying to provoke violence, Business Secretary Peter Kyle commented:

Grok is a creation of Musk’s AI company xAI and is accessible to users on Musk’s social media platform, X. Users can pose questions on X by tagging “@grok”, prompting the chatbot to respond.


Previously, Grok mentioned South Africa’s “white genocide” in unrelated discussions.

The idea stems from a far-right conspiracy theory that has been pushed toward mainstream discourse by figures such as Musk and Tucker Carlson.

Musk is a prominent supporter of Robinson and has significantly contributed to reviving the narrative about gangs that groomed and assaulted girls in the UK for years. Last year, Downing Street rebuked Musk for his comments on X, where he posted that “civil war is inevitable” alongside footage of violent riots in Liverpool.

X was contacted for a statement regarding Grok’s misleading information related to Saturday’s footage.



Source: www.theguardian.com

Expert Rejects Police Claim That Research Supports Unbiased Use of Live Facial Recognition

The Metropolitan police’s claim that its use of live facial recognition is free of bias is not substantiated by the research the force cites in its support, according to a prominent technology specialist.

The Met plans its most high-profile deployment of LFR this bank holiday weekend at the Notting Hill carnival in west London.

According to the Guardian, the technology will be used at two locations on the approaches to the carnival, and the Met has insisted on pressing ahead even though the Equality and Human Rights Commission has declared its use of LFR unlawful.


The new assertion comes from Professor Pete Fussey, who led the only independent academic review of the police’s use of facial recognition. He reviewed the Met’s LFR trials in 2018-19 and currently advises various law enforcement agencies in the UK and internationally on its use.

The Met contends that it has reformed its use of LFR and, citing 2023 research it commissioned from the National Physical Laboratory (NPL), claims that the technology is now virtually free from bias. Fussey disagrees:

“The sensitivity at which the LFR system operates can be adjusted. Higher sensitivity means more individuals are detected, but potential bias related to race, gender, and age also increases. A setting of zero is the most sensitive and one the least.”

The NPL report identified bias at a sensitivity level of 0.56, noting seven instances where individuals tested were mistakenly flagged as suspects, all of whom were from ethnic minority backgrounds.

These findings stemmed from a collection of 178,000 images entered into the system, with 400 volunteers passing the cameras roughly 10 times each, providing about 4,000 opportunities for accurate recognition. The volunteers were mixed into an estimated crowd of over 130,000 at four locations in London and one in Cardiff. The tests were carried out in clear weather over 34.5 hours, though Fussey noted this was shorter than tests conducted in some other countries where LFR is deployed.

From this dataset, the report concluded that no statistically significant bias existed in settings above 0.6. This assertion has been reiterated by the MET to justify their ongoing use and expansion of LFR.
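To make the trade-off concrete, here is a minimal, illustrative sketch of how a match threshold governs alert volume in a watchlist system. The similarity scores are synthetic and the thresholds are simply the values quoted above; this is not the NPL's methodology or the algorithm the Met deploys.

```python
import random

random.seed(0)

# Synthetic similarity scores in [0, 1]. "True pairs" are watchlist members
# walking past the camera; "false pairs" are members of the public compared
# against the watchlist. Both distributions are invented for illustration.
true_pair_scores = [random.betavariate(8, 2) for _ in range(4_000)]
false_pair_scores = [random.betavariate(2, 8) for _ in range(130_000)]

def alert_counts(threshold: float) -> tuple[int, int]:
    """Count alerts at a given threshold.

    An alert fires whenever a similarity score meets the threshold, so
    raising the threshold (towards 1) makes the system less sensitive:
    fewer genuine detections, but also fewer false alerts.
    """
    hits = sum(s >= threshold for s in true_pair_scores)
    false_alerts = sum(s >= threshold for s in false_pair_scores)
    return hits, false_alerts

for t in (0.56, 0.60, 0.64):
    hits, fps = alert_counts(t)
    print(f"threshold {t:.2f}: {hits} true detections, {fps} false alerts")
```

The point of contention in the article is statistical rather than mechanical: at high thresholds the handful of remaining false alerts is too small a sample to say anything reliable about demographic bias.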

Fussey criticized this as insufficient to substantiate the Met’s claims, stating: “Senior figures at the Metropolitan Police Service consistently argue their systems undergo independent testing for bias. An examination of this study revealed that the data was inadequate to support the claims made.”

“The definitive conclusions publicly proclaimed by MET rely on an analysis of merely seven false matches from a system scrutinizing the faces of millions of Londoners. Drawing broad conclusions from such a limited sample is statistically weak.”

Currently, the Met operates LFR at a setting of 0.64, a level at which it says the NPL study produced no erroneous matches.

Fussey responded that the Met’s own research did not contain enough false matches at the settings above 0.6, including 0.64, to evaluate whether bias exists there.

“Few in the scientific community suggest sufficient evidence exists to support these claims drawn from such a limited sample.”

Fussey added: “They clearly acknowledge that bias exists within the algorithm, but they assert that it can be mitigated through appropriate adjustments to the system settings. The problem is that the system has not been thoroughly tested at those settings.”

Lindsey Chiswick, the Met’s director of intelligence, dismissed Fussey’s allegations. “This is a factual report from a globally renowned institution. The Met police’s commentary is grounded in the findings of an independent study,” she said.


“If you utilize LFR with a setting of 0.64, as I currently am, there is no statistically significant bias.”

“We sought research to pinpoint where potential bias lies within the algorithm and employed the results to mitigate that risk.”

“The findings exemplify the degree to which algorithms can be used to minimize bias, and we consistently operate well above that threshold.”

During the Notting Hill carnival this weekend, warning signs will notify attendees about the use of LFR. They will be placed near the vans carrying the cameras, which are linked to a database of suspects.

Authorities believe utilizing the technology at two sites leading to the carnival will act as a deterrent. At the carnival itself, law enforcement is prepared to employ retrospective facial recognition to identify perpetrators of violence and assaults.

Fussey remarked: “Few question the police’s right to deploy technology for public safety, but oversight is crucial, and it must align with human rights standards.”

The Met claims that since 2024 the LFR system has recorded a false-positive rate of one in every 33,000 cases. Although the exact number of scanned faces remains undisclosed, it is believed to be in the hundreds of thousands.

There were 26 incorrect matches in 2024, and eight have been reported so far in 2025. The Met said these individuals were not necessarily apprehended, as decisions on arrest rest with police officers rather than with the computer-generated match.

Prior to the carnival, the Met arrested 100 individuals, recalled 21 to prison, and banned 266 from attending. Officers also reported seizing 11 firearms and over 40 knives.

Source: www.theguardian.com

Operation Dark Phone: Murder by Text – The Incredible Tale of How Police Infiltrated the Gangs’ Encrypted Network

Police work rarely resembles The Shield or Line of Duty; it is mostly paperwork, online training, and putting people on driver awareness courses. Yet sometimes life imitates art. In 2020, international police forces infiltrated EncroChat, an encrypted phone network used by organized crime groups globally. They had a staggering 74 days of access to all communications, images, and plans involving drug trafficking, money laundering, scams, and homicide. “It was like LinkedIn for organized crime,” remarks Matt Horne, a senior commander at the UK’s National Crime Agency (not the actor from Gavin & Stacey).

Operation Dark Phone: Murder by Text (Sunday, 9pm, Channel 4) is a documentary drama built around these messages, providing a gripping insight into how criminal enterprises function. Here, “sweets” are bullets, while “pineapple” signifies a homemade grenade. A violent British criminal, lying low in Spain, orchestrates an acid attack on a rival even while sharing images of his breakfast: sliced cucumber with paprika, quite the culinary juxtaposition. The advised trick is to ensure the victim can’t reach a sink, allowing the acid to do its grim work. Not so appetizing.

The show is steeped in remarkably dark humor, largely courtesy of usernames like “Click” on the anonymous platform. Handles such as “Mystical Steaks,” “Worthy Bridges,” and “Top Shags” have an absurdity akin to Chris Morris’s work. At one point, an agent describes interactions with the users “Livelong” and “Ball-Sniffer,” assuming the latter must be a lowly foot soldier. The agents tracking them navigate a thrilling narrative: typically they handle fewer than 100 explicit threats to life in a year, yet during this operation they intercepted over 150 in just six weeks. Logistically, that poses a challenge.

Detectives had access to criminal messages for 74 days. Photo: Channel 4

The show excels in captivating its audience. The narrative arc introduces well-developed characters and builds tension towards a crescendo. “Ace-Prospect” is seen importing firearms into the UK, while “Livelong” seeks revenge against him. Neither side, connected through intermediaries, knows the identity of the opposing party. The NCA faces a time crunch, often receiving message data a full day late, leading to a relentless race against time. A dilemma arises when an Ace-Prospect hitman mistakenly delivers a “pineapple” to a rival’s garden without it detonating—how do they safeguard the lives of nearby children while upholding their covert mission?

This narrative is far more enticing than traditional Crimewatch formats. Rather than petty criminals, it presents affluent players orchestrating offenses from afar. Is it ethical? Is there a risk of glamorizing crime? The visual portrayal evokes leisure, showcasing luxurious pools, gym-toned physiques, and cinematic weapons. The actor portraying Livelong bears a striking resemblance to Claes Bang and often appears shirtless. Nevertheless, beneath the surface, it’s a moral tale. The text echoes horrifying fantasies: “I’ll take his eyes out and chase him around all the prisons,” reminiscent of an acid-infused nightmare.

Gang members contributed to their own downfall with constant oversharing, boasting, and vanity. Photo: Channel 4

The allure lies not just in the medium but in the underlying message. The downfall of these criminals stems from superficial behavior, incessant sharing, and physical vanity driven by social media pride. Livelong’s identity is ultimately exposed when he posts a triumphant selfie. Just imagine an old-school criminal’s disbelief at this premise; I envision them slapping their foreheads, only to forget to release their fists and knock themselves out.

The chill of the series comes from the realization that this isn’t mere dramatization; it’s grounded in reality. Part of that unease stems from fear, a reminder that there exist individuals who hold life cheap and revel in violence. Operation Dark Phone is a four-part documentary series providing a harrowing glimpse into police operations, promising even more astounding revelations as the story unfolds. If your faith in humanity feels shaken, you might want to skip this one. Just in case, you might want to avoid supermarket pineapples too.

Source: www.theguardian.com

Hong Kong Police Caution Against Downloading “Seditious” Mobile Game | Mobile Games

Hong Kong authorities have issued a warning about a mobile game created in Taiwan, labeling it “separatist” and cautioning that downloading it could lead to legal repercussions.

The game, Reversed Front: Bonfire, allows players to “pledge allegiance” to groups representing causes and regions sensitive to Beijing, including Taiwan, Hong Kong, Tibet, the Uyghurs, Kazakhs, and Manchuria, with the aim of “overthrowing the communist regime” of the People’s Republic.

While some elements of the game’s narrative and place names are fictional, the website claims that it is a “non-fiction work” and that any resemblance to the PRC’s actual institutions, policies, or ethnic groups is “intentional.”

Players can also opt to “lead the Communists and defeat all enemies,” though the game has still elicited strong reactions from authorities loyal to the Chinese Communist party (CCP).

On Tuesday, Hong Kong police said Reversed Front “advocates armed revolution” and promotes Taiwanese and Hong Kong independence, criticizing the game.

Downloading the game may lead to accusations of possessing seditious materials, and in-app purchases could be construed as financially supporting the developer “for activities of secession or subversion,” the police noted.

Recommending the game to others could be seen as “incitement to secession.”

In the game’s alternative history, the communists are portrayed as conquerors of the surrounding regions, ruling with unprecedented cruelty as a colonial force and causing many to flee. Decades later, only Taiwan is depicted as remaining beyond their control.

The game prompts players to consider whether Taiwan can remain safe by avoiding provocations or whether “we should learn from the mistakes of the past 30 years that allowed today’s communists to grow into giants.”

In player descriptions, the game characterizes the communists as “heavy, reckless, and incompetent,” accusing them of “corruption, embezzlement, exploitation, genocide, and pollution.”

On its Facebook page, the developer, known as ESC Taiwan (the Overseas Strategic Communication Working Group), said the police warning had brought the game attention. On Wednesday, it said the game had topped download charts in Hong Kong’s app store after a surge on Tuesday night.

“We recommend that users change the country or region of their Apple ID to successfully download the game.”

The developers have committed to not actively filtering or reviewing words or phrases in the game, addressing recent concerns about censorship in Chinese-created or related games. The location of ESC Taiwan remains undisclosed.

Police warnings about the game are part of a broader crackdown on dissent in Hong Kong, where the CCP has tightened its grip on the city. In 2020, Beijing imposed a national security law on Hong Kong, with the city government’s backing, criminalizing much dissent.

Critics accuse the authorities of weaponizing these laws to target opposition voices, including activists, politicians, labor unions, journalists, media, and children’s literature.

Additional research by Jason Tzu Kuan Lu

Source: www.theguardian.com

Police Expansion of Live Facial Recognition Cameras: A Shift Towards ‘Commonplace’ Surveillance

Authorities anticipate that live facial recognition cameras may soon be “prevalent” across England and Wales, as indicated by internal documents revealing nearly 5 million face scans conducted last year.

A joint investigation by the Guardian and Liberty Investigates shows the rapid integration of this technology into UK law enforcement practices.

The government is simplifying police access to a wide range of image repositories, including passport and immigration databases, for retrospective facial recognition searches, alongside significant financial investment in new hardware.

Live facial recognition entails real-time identification of faces captured by surveillance cameras, compared against a police watch list.

Conversely, retrospective facial recognition software allows police to match archived images from databases with those recorded on CCTV or similar systems.
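For readers who want a concrete picture of the distinction, here is a minimal sketch, assuming faces are compared as numeric embeddings with cosine similarity and a hypothetical 0.64 match threshold; it is illustrative only and not a description of any force’s actual system.

```python
from typing import Iterable

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def live_match(frame_embedding: list[float],
               watchlist: dict[str, list[float]],
               threshold: float = 0.64) -> list[str]:
    """Live mode: every face the camera sees is checked against the
    watchlist in real time, and an alert is raised immediately."""
    return [name for name, ref in watchlist.items()
            if cosine_similarity(frame_embedding, ref) >= threshold]

def retrospective_search(query_embedding: list[float],
                         archive: Iterable[tuple[str, list[float]]],
                         threshold: float = 0.64) -> list[str]:
    """Retrospective mode: a stored image (e.g. a CCTV still) is searched
    against an archive of previously recorded faces after the fact."""
    return [record_id for record_id, ref in archive
            if cosine_similarity(query_embedding, ref) >= threshold]
```

In both modes a match is only a lead; as the reporting in this section notes, decisions on any arrest rest with officers.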

Funding documents produced by South Wales police and disclosed by the Metropolitan police under the Freedom of Information Act anticipate that the technology will become widespread in urban areas and transport hubs across England and Wales.

The first fixed live facial recognition cameras are set to be trialled this summer in Croydon, south London.

This expansion comes despite the absence of any mention of facial recognition in an act of parliament.

Critics contend that police are permitted to “self-regulate” this technology, while there have been instances where previous algorithms disproportionately misidentified individuals from Black communities.

Following a 2020 court of appeal ruling that found South Wales police’s use of live facial recognition unlawful, the College of Policing issued guidance emphasizing that “thresholds must be carefully set to enhance the likelihood of accurate alerts while keeping false alert rates within acceptable limits.”

There remains no statutory framework directing the standards or technology applied in this context.

Earlier this month, policing minister Diana Johnson told parliament that “we must evaluate whether a tailored legislative framework is necessary to govern the deployment of live facial recognition technology for law enforcement,” but further details from the Home Office are still pending.

Facial recognition cameras have been tested in London and South Wales since 2016; however, the pace at which police have adopted this technology has surged over the past year.

A survey conducted by the Guardian and Liberty revealed:

  • Last year, police scanned nearly 4.7 million faces using live facial recognition cameras, more than double the figure for 2023. Data indicates that at least 256 live facial recognition deployments took place in 2024.

  • A mobile unit of 10 live facial recognition vans can be dispatched anywhere in the UK within days to bolster national capability; eight police forces have already deployed the technology, and the Met has four vans of its own.

  • Authorities are exploring a fixed infrastructure to establish a “safety zone” by deploying a network of live facial recognition cameras throughout London’s West End. Met officials indicated that this remains a viable option.

  • The force has nearly doubled the number of retrospective facial recognition searches run against the Police National Database (PND), from 138,720 in 2023 to 252,798. The PND contains custody mugshots, including many held unlawfully of individuals never charged with or convicted of any offence.

  • Over the past two years, more than 1,000 facial recognition searches have been run against the UK passport database, and officers made more than 110 searches of the Home Office immigration database last year. Authorities concluded that using the passport database for facial recognition “presents no risk.”

  • The Home Office is collaborating with the police to develop a new national facial recognition system termed strategic facial matchers, which will enable searches across various databases, including custody images and immigration records.

Lindsey Chiswick, the Met’s director of intelligence and the National Police Chiefs’ Council lead on facial recognition, said survey findings show that a large majority of London residents support the police’s use of advanced technologies such as facial recognition cameras.

Recently, David Cheneler, a 73-year-old registered sex offender from Lewisham, was sentenced to two years in prison after live facial recognition technology caught him alone with a six-year-old girl. He had previously served nine years for 21 offences involving children.


In 2024, the Met made 587 arrests with the support of live facial recognition technology, 424 of which led to formal charges.

Among those arrested were 58 registered sex offenders in serious breach of their conditions, 38 of whom were subsequently charged.

Chiswick noted: “Given the limited resources and time available, the demand is high, and we see criminals exploiting technology on an expansive scale.

“There’s a chance for law enforcement to evolve. Discussions about leveraging AI are abundant, but we must embrace the opportunities presented by technology and data.”

Chiswick emphasized that the Met’s approach is to “proceed cautiously and evaluate at each phase,” while noting that “there may be advantages to some form of framework or statutory guidance.”

The Met says it operates its facial recognition cameras at settings where testing found no statistically significant gender or ethnic bias in misidentifications.

Chiswick remarked: “I refuse to utilize biased algorithms in London. Each instance carries weight. The government raises concerns: Is there no issue regarding artificial intelligence?

“When you buy an algorithm, determining what training data was used and where the technology comes from, and testing it thoroughly, are paramount; you are then obliged to operate it within a specific setting.”

The Home Office did not provide a comment when approached.

Source: www.theguardian.com

Italian Police Enhance Security Measures at Tesla Dealerships Following Destruction of 17 Cars in Rome Fire

The Italian Ministry of Interior has instructed police across the country to step up security at Tesla dealerships following a fire in Rome that destroyed 17 electric vehicles manufactured by Elon Musk’s company.

The Digos, an anti-terrorism force within the Italian state police, is investigating whether anarchists were behind the fire at a Tesla dealership in Torre Angela, a suburb of Rome.

Firefighters spent hours extinguishing the flames early Monday. Drone footage showed a line of charred vehicles in the dealership’s parking lot. Musk referred to the incident as “terrorism” on his social media platforms.

Italy is home to 13 Tesla dealerships managed by the parent company, with most located in cities like Rome, Florence, and Milan.

A source within the interior ministry indicated that they are alerting authorities to the possibility of anti-Tesla protests amidst a global trend of vandalism in response to Musk’s political involvement in the US. Surveillance at dealerships will be increased as needed.


Since Donald Trump’s presidency began, Musk has overseen cuts to the federal workforce as part of his “government efficiency” initiative, prompting the “Tesla Takedown” boycott movement that started in the US and spread to Europe.

While most protests have been peaceful so far, Tesla dealerships and vehicles are increasingly becoming targets of vandalism. In Germany, seven vehicles were vandalized at dealerships in Ottersburg, and in Sweden, two Tesla stores—one in Stockholm and another in Malmö—were destroyed with orange paint.

Musk has fostered ties with far-right leaders in Europe, such as Italian Prime Minister Giorgia Meloni, who has praised him as “a great man.” Matteo Salvini, leader of Italy’s far-right League party, expressed solidarity with Musk following the incident in Rome.

“There is unwarranted animosity towards Tesla,” Salvini stated.

Source: www.theguardian.com

Experts warn that Meta policy changes put it on a collision course with the EU and UK

Experts and politicians are warning that significant changes to Meta’s social media platform are setting it on a collision course with lawmakers in the UK and the European Union.

Lawmakers in Brussels and London have criticized Mark Zuckerberg’s decision to remove fact-checkers from Facebook, Instagram, and Threads in the US, with one MP describing it as “absolutely frightening.”

Changes to Meta’s global policy on hateful content now allow users to refer to transgender people as “it,” and the guidelines now permit “allegations of mental illness or abnormality when based on gender or sexual orientation.”

Chi Onwurah, a Labour MP and chair of the House of Commons science and technology committee, has expressed alarm at Zuckerberg’s decision to eliminate professional fact-checkers, calling it “alarming” and “pretty scary.”

Maria Ressa, a Nobel Peace Prize-winning American-Filipino journalist, has warned of “very dangerous times” ahead for journalism, democracy, and social media users due to Meta’s changes.

Damian Collins, a former UK technology minister, has raised concerns that trade negotiations with the Trump administration could put pressure on the UK to accept US digital regulatory standards.

Meta’s move, announced ahead of Donald Trump’s inauguration, has sparked predictions of challenges from the Trump administration to laws like the Online Safety Act.

Zuckerberg has hinted at extending his policy of removing fact-checkers beyond the US, raising concerns among experts and lawmakers in the UK and EU.

Regulatory scrutiny on Meta’s changes is expected to increase in the UK and EU, with concerns about the spread of misinformation and potential violations of digital services law.

Meta has assured that content related to suicide, self-harm, and eating disorders will continue to be treated as high-severity violations, but concerns remain about the impact on children in the UK.

Source: www.theguardian.com

UK police chief says young people are being driven to violence by a ‘pick and mix’ of horrific content online

The leader of counter-terrorism policing in Britain has expressed concern that more young people, including children as young as 10, are being lured towards violence by the “pick and mix” of horrific material they encounter on the internet.

Vicky Evans, the deputy commissioner of the Metropolitan Police and senior national co-ordinator for counter-terrorism, noted a shift in radicalization, stating, “There has been a significant increase in interest in extremist content that we are identifying through our crime monitoring activities.”

Evans highlighted the disturbing trend of suspects seeking out material that either lacks ideology or glorifies violence from various sources. She emphasized the shocking and alarming nature of the content encountered by law enforcement in their investigations.

Suspects’ search histories reveal a disturbing fascination with violence, misogyny, gore, extremism, racism, and other harmful ideologies, alongside a curated selection of horrific content.

Detectives from the Counter-Terrorism Police Network are dedicating significant resources to digital forensics to apprehend young individuals consuming extremist material, a troubling trend according to Evans.

The government introduced measures to reform the Prevent system, aimed at deterring individuals from turning to terrorism. They are also reassessing the criteria for participation in Prevent to address individuals showing interest in violence without a clear ideological motive.

Evans emphasized the persistent terrorist threat in the UK, particularly in “deep, dark hotspots” that require urgent attention. Despite efforts to prevent terrorism, the UK has experienced several attacks in recent years.


There have been 43 thwarted terrorist plots since 2017, with concerns over potential mass casualty attacks. The counter-terrorism community is also monitoring the situation in Syria for any potential threats from individuals entering or leaving the country.

Source: www.theguardian.com

UK police boss warns that AI is on the rise in sextortion, fraud, and child abuse cases

A senior police official has issued a warning that pedophiles, fraudsters, hackers, and criminals are now utilizing artificial intelligence (AI) to target victims in increasingly harmful ways.

According to Alex Murray, the national police lead on AI, criminals are taking advantage of the expanding accessibility of AI technology, necessitating swift action by law enforcement to combat these new threats.

Murray stated, “Throughout the history of policing, criminals have shown ingenuity and will leverage any available resource to commit crimes. They are now using AI to facilitate criminal activities.”

He further emphasized that AI is being used for criminal activities on both a global organized crime level and on an individual level, demonstrating the versatility of this technology in facilitating crime.

During the recent National Police Chiefs’ Council meeting in London, Mr. Murray highlighted a new AI-driven fraud scheme where deepfake technology was utilized to impersonate company executives and deceive colleagues into transferring significant sums of money.

Instances of similar fraudulent activities have been reported globally, with concern growing over the increasing sophistication of AI-enabled crimes.

The use of AI by criminals extends beyond fraud, with pedophiles using generative AI to produce illicit images and videos depicting child sexual abuse, a distressing trend that law enforcement agencies are working diligently to combat.

Additionally, hackers are employing AI to identify vulnerabilities in digital systems, providing insights for cyberattacks, highlighting the wide range of potential threats posed by the criminal use of AI technology.

Furthermore, concerns have been raised regarding the radicalization potential of AI-powered chatbots, with evidence suggesting that these bots could be used to encourage individuals to engage in criminal activities including terrorism.

As AI technologies continue to advance and become more accessible, law enforcement agencies must adapt rapidly to confront the evolving landscape of AI-enabled crimes and prevent a surge in criminal activities using AI by the year 2029.

Source: www.theguardian.com

China’s Internet Police Widen Their Net: From Bloggers to Their Followers

Late last year, Duan*, a Chinese university student, bypassed China’s Great Firewall using a virtual private network to access the social media platform Discord.

He discovered a community within Discord where members discussed political ideologies like democracy, anarchism, and communism. Popular blogger Yang Minghao highlighted the importance of these discussions in a YouTube video.

Duan was drawn to this community after watching Yang’s videos. However, he and several others from the group were later interrogated by police in a different city.

The interrogation focused on Duan’s connection with Yang, his use of a VPN, and his Discord comments. Duan was released after 24 hours, but concerns remain for Yang, who has been silent online since then.

This incident reflects China’s strict censorship policies, where online comments can lead to serious consequences.


At an online conference in China, people stand in front of a screen showing a message from Chinese President Xi Jinping. Photo: Alex Prabevski/EPA

The situation highlights the expanding web of online surveillance in China. Authorities are cracking down on dissenting voices, even those operating outside the country.

The web of online surveillance is widening

Li Ying, a prominent social media figure, warned his followers in China about police interrogations, urging them to unfollow him to avoid trouble.

The crackdown on online dissent indicates a growing trend of repression, with even overseas influencers facing pressure from Chinese authorities.

Online censorship campaigns have become routine in China, targeting those who express opinions contrary to the government’s narrative.

Despite the challenges, activists and dissenters continue to resist censorship and uphold their beliefs, fostering common values across borders.

The Discord crackdown has sparked discussions in online forums, underscoring the ongoing struggle for freedom of expression in China.

*Name has been changed.

Source: www.theguardian.com

Federal police union advocates for creation of portal for reporting AI deepfake victimization

The federal police union is calling for the establishment of a dedicated portal where victims of AI deepfakes can report incidents to the police. They expressed concern over the pressure on police to quickly prosecute the first person charged last year for distributing deepfake images of women.

Attorney General Mark Dreyfus introduced legislation in June to criminalize the sharing of sexually explicit images created using artificial intelligence without consent. The Australian Federal Police Association (Afpa) supports this bill, citing challenges in enforcing current laws.

Afpa highlighted a specific case where a man was arrested for distributing deepfake images to schools and sports associations in Brisbane. They emphasized the complexities of investigating deepfakes, as identifying perpetrators and victims can be challenging.

Afpa raised concerns about the limitations of pursuing civil action against deepfake creators, citing the high costs and challenges in identifying the individuals responsible for distributing the images.

They also noted the difficulty in determining the origins of deepfake images and emphasized the need for law enforcement to have better resources and legislation to address this issue.


The federal police union emphasized the need for better resources and legislation to address the challenges posed by deepfake technology, urging for an overhaul of reporting mechanisms and an educational campaign to raise awareness about this issue.

The committee is set to convene its first hearing on the proposed legislation in the coming week.

Source: www.theguardian.com

Students Implicated in Cyber Fraud After Police Discover Involvement in Massive Phishing Site

Police have uncovered a disturbing trend of university students resorting to cyber fraud to boost their income, after infiltrating a large phishing operation whose stolen data was sold on the dark web and which has helped defraud tens of thousands of individuals.

The site, known as LabHost, had been operational since 2021 and served as a hub for cyber fraud, enabling users to create realistic-looking websites mimicking reputable companies such as major banks. It ensnared tens of thousands of victims globally, including about 70,000 individuals in the UK.

Victims unknowingly provided sensitive information, which was then used to siphon money from their accounts. The perpetrators behind the site profited by selling this stolen data on the dark web to other fraudsters.

According to the Metropolitan police, the primary victims fall within the 25-44 age bracket and conduct much of their lives online.

Law enforcement authorities have apprehended one of the alleged masterminds behind the site, along with 36 other suspects detained in the UK and abroad. Arrests were made at Manchester and Luton airports and at addresses in Essex and London.

British police are facing mounting pressure to demonstrate their effectiveness in combating the rising tide of cyber fraud.

Despite the relatively small impact of dismantling this particular site, the police intend to dismantle additional cyber fraud operations to undermine the confidence of criminals who believe they can act with impunity.

While fraud and cybercrime present considerable challenges for law enforcement agencies, they often compete for resources with other policing priorities, such as safeguarding children and enhancing women’s safety.

LabHost managed to amass significant amounts of sensitive data, including 480,000 debit or credit card numbers and 64,000 PIN numbers, generating over £1 million in membership fees from 2,000 individuals who paid in cryptocurrency.

The site lured users with tutorial videos on how to commit crimes with its tools, marketing itself like a consumer product: it promised set-up of the software within five minutes and offered “customer service” in case of any issues.

DI Oliver Richter noted the shift in cyber fraud from requiring technical skills like coding to now being accessible to individuals ranging from late teens to late 20s, many of whom are college students.

He expressed concern that these users may not fully grasp the risks and consequences of their actions, assuming anonymity and ease of operation.

Following the dismantling of the site, 800 users received warnings that the police were aware of their activities.

Detective Inspector Helen Rance, who leads on cybercrime for the Metropolitan police, described the LabHost bust as a sophisticated operation targeting those who have commercialized fraud. She highlighted collaboration with 17 partner organizations globally, in both the public and private sectors.

She emphasized the success of penetrating the service, identifying the perpetrators, and understanding the scale of their illicit operations.

Source: www.theguardian.com

Victoria Police asked to investigate HyperVerse information in 2020, but referred the case back to ASIC 22 months later

Australia's corporate watchdog, the Australian Securities and Investments Commission (ASIC), referred information about a US$1.89 billion “pyramid scheme” known as HyperVerse to Victoria Police in 2020. But no action was taken, and the police referred the matter back to the watchdog almost two years later.

The ASIC referred the company to Victoria Police for “possible criminal fraud” after concerns were raised with corporate regulators about its affiliate company Blockchain Global. The HyperVerse crypto investment scheme was operated by HyperTech Group, founded by two of Blockchain Global's directors, Sam Lee and Ryan Xu.

An ASIC spokesperson said the regulator provided information relating to the HyperVerse matter to Victoria Police in 2020, after being informed that the force was looking into the matter and after determining that the scheme was not a financial product, that the police were best placed to investigate, and that there was a possibility of criminal fraud.

Neither ASIC nor Victoria Police provided further details about the alleged act.

“ASIC takes seriously any fraudulent activity that harms investors and we have the authority to act against fraudulent activity in relation to financial products and services,” the spokesperson said. “When we become aware of conduct that is outside of our jurisdiction, we seek to refer information about that conduct to the appropriate authorities.”

However, Victoria Police said it had assessed that information and decided after almost two years that ASIC was “best placed to investigate further”.

Meanwhile, Blockchain Global collapsed owing creditors $58 million, while the U.S. Securities and Exchange Commission has alleged that the scheme was a fraudulent “global multi-level marketing” operation involving crypto-assets. Mr. Xu is not named in the SEC's lawsuit.

A Victoria Police spokesperson confirmed it received a referral from ASIC in April 2020, but the matter was not assessed until 2021. After that assessment, “it was decided that the lead agency should be ASIC”.

The matter was transferred back to ASIC in January 2022. Asked why the process took 22 months, a Victoria Police spokesperson said: “For matters of this nature, the first step is to determine whether a criminal offense has been committed and whether it is best to approach Victoria Police. Depending on the situation, it may take some time.”

A spokesperson declined to comment on the content of the evaluation.

ASIC said it believed the police had been acting on its referral. “ASIC understands that this matter is being actively considered by VicPol. Ultimately, VicPol is best placed to explain its decision to refer this matter back to ASIC,” the spokesperson said.

“At the time VicPol referred the matter back to ASIC, an external administrator had been appointed to Blockchain Global. ASIC is currently considering the information contained in the liquidator’s report relating to this scheme.”

At the time ASIC made its referral to Victoria Police, the first Hyper scheme, ‘HyperCapital’, was already underway, having launched in Hong Kong in 2019. HyperCapital was rebranded as HyperFund in 2020 and became HyperVerse in December 2021.

Mr. Lee denied claims that the scheme was a fraud and said his role at HyperVerse was limited to the technical and financial management aspects of the business. Investors bought memberships to HyperVerse that promised returns of 0.5% per day, or 300% over 600 days, paid in HyperUnits linked to various crypto tokens that could be withdrawn and converted into other cryptocurrencies once matured.
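As a quick, illustrative check (not drawn from the article), the advertised figures are consistent with simple, non-compounding interest on the original stake; compounding the same daily rate would imply a far larger number:

```python
# Sketch of the arithmetic behind the advertised returns.
daily_rate = 0.005   # 0.5% per day
days = 600

simple_return = daily_rate * days               # 3.0 -> 300%, matching the pitch
compound_return = (1 + daily_rate) ** days - 1  # roughly 1,890%, far above it

print(f"simple:   {simple_return:.0%}")
print(f"compound: {compound_return:.0%}")
```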

Mr Lee also did not mention that he had resigned from Blockchain Global’s board of directors and that the company was no longer in business.

According to court documents, Brenda Chunga, a senior U.S. promoter who was charged with and pleaded guilty to conspiracy to commit securities fraud and wire fraud, invoked HyperTech Group’s links to Blockchain Global when promoting the scheme. Ms. Chunga emphasized the connection with Blockchain Global to lend the HyperFund project credibility and reassure investors that their money was secure.

ASIC defended its failure to issue a warning about the HyperFund and HyperVerse investment schemes. Mr. Lee declined to answer questions from Guardian Australia, and Mr. Xu could not be reached for comment.

Source: www.theguardian.com

Lawyers Say US Police Are Unable to Act on Numerous Online Child Sexual Abuse Reports

The Guardian has learned that social media companies relying on artificial intelligence software to moderate their platforms are producing unworkable reports of child sexual abuse, leaving U.S. police unable to pursue potential leads and delaying investigations into suspected predators.

By law, U.S.-based social media companies are required to report child sexual abuse content detected on their platforms to the National Center for Missing and Exploited Children (NCMEC), which serves as a national clearinghouse for child abuse information and forwards reports to relevant law enforcement agencies in the United States and around the world. NCMEC said it received more than 32 million reports of suspected child sexual exploitation, comprising approximately 88 million images, videos, and other files, from businesses and the general public in 2022.

Meta is the largest reporter of this information, with over 27 million reports (84%) generated by its Facebook, Instagram, and WhatsApp platforms in 2022. NCMEC is partially funded by the Department of Justice and also receives private corporate donations.

Social media companies, including Meta, use AI to detect and report suspicious content on their sites and employ human moderators to review some flagged content before it is sent to law enforcement. However, when a report of suspected child sexual abuse material (CSAM) is generated by AI and not reviewed by a human, U.S. law enforcement agencies can only open it after serving a search warrant on the company that filed it, which can add days or even weeks to the investigation process.

“If a company reports a file to NCMEC and does not indicate that it viewed the file before reporting, we will not be able to open the file,” said Staca Shehan, vice president of analytical services at NCMEC.

Because of privacy protections under the Fourth Amendment, neither law enforcement officials nor the federally funded NCMEC may open a report without a search warrant unless its contents were first reviewed by a representative of the social media company.

NCMEC staff and law enforcement agencies cannot legally view AI-flagged content that no human has seen, which can stall investigations into suspected predators for weeks and result in the loss of evidence that might otherwise connect cases.

“Any delay [in viewing the evidence] means the longer criminals go undetected, and the more detrimental it is to community safety,” said an assistant U.S. attorney in California, who spoke on condition of anonymity. “They are dangerous to all children.”

In December, the New Mexico attorney general’s office filed a lawsuit against Meta, alleging that its social networks have become a marketplace for child predators and that Meta has repeatedly failed to report illegal activity on its platforms. In response, Meta said its priority was combating child sexual abuse content.

The state attorney general laid the blame for the failure to send actionable information at Meta’s feet. “Reports showing the inefficiency of the company’s AI-generated cyber tips prove what we said in the complaint,” Raúl Torrez said in a statement to the Guardian.

“It is long past time for the company to implement the algorithmic, staffing, and policy changes needed to ensure the safety of children, keep parents informed, and enable law enforcement to effectively investigate and prosecute online sex crimes against children,” Torrez added.

Despite the legal limitations on AI moderation, social media companies are likely to increase its use in the near future. In 2023, OpenAI, the developer of ChatGPT, claimed that large language models can do the job of human content moderators with roughly the same accuracy.

However, child safety experts say that the AI software social media companies use to moderate content relies on the digital fingerprints of images, known as hashes, and is therefore effective only at identifying already-known child sexual abuse material. Lawyers interviewed said the software is ineffective when images are newly created, or when known images or videos have been altered.

“There is always concern about cases involving newly identified victims, and because they are new, the materials do not have a hash value,” said Kristina Korobov, a senior attorney at the Zero Abuse Project, a nonprofit organization focused on combating child abuse. “If humans were doing the work, there would be more discoveries of newly identified victims.”
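To illustrate what hash matching means in this context, here is a minimal sketch assuming exact cryptographic hashes and a hypothetical set of known digests; production systems typically use perceptual hashes that tolerate small alterations, but the core limitation described above is the same: an image absent from the reference set produces no match.

```python
import hashlib

# Hypothetical reference set of digests for images investigators have already
# identified. These placeholder values stand in for a real hash database.
KNOWN_DIGESTS = {
    "9f2b6c0c5d9f...",  # placeholder entries, not real digests
    "a41e77b0d3aa...",
}

def file_digest(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known(path: str) -> bool:
    """True only if this exact file was hashed before.

    A newly created image, or a known image altered by even one byte,
    yields a different digest and is missed -- the gap the experts describe.
    """
    return file_digest(path) in KNOWN_DIGESTS
```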

In the US, call or text the Childhelp abuse hotline on 800-422-4453 or visit its website for more resources. For adult survivors of child abuse, support is available at ascasupport.org. In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000; the National Association for People Abused in Childhood (Napac) offers support to adult survivors on 0808 801 0331. In Australia, children, young people, parents and teachers can contact the Kids Helpline on 1800 55 1800; adult survivors can contact Bravehearts on 1800 272 831 or the Blue Knot Foundation on 1300 657 380. Additional sources of help can be found at Child Helpline International.

Source: www.theguardian.com

The Top Podcasts of the Week: Exploring the Metropolitan Police’s Biggest Crime Bust

This week’s picks

Football Greats
Widely available, episodes weekly
Was Ian Wright a better footballer than Alan Shearer? How do players communicate at foreign clubs where teammates only know the words “Bobby Charlton”? Jeff Stelling discusses these questions with guests including Paul Merson, Glenn Hoddle and Sir Geoff Hurst. In the first episode, Stelling reunites with Soccer Saturday partner Chris Kamara and relives many fond memories, including the origin of that iconic “Unbelievable, Jeff!” catchphrase. Hannah Verdier

Blindspot: Plague in the Shadows
Widely available, episodes weekly
This podcast focuses on New York, where misinformation was rife in the early days of the HIV epidemic. WNYC’s Kai Wright, a reporter on the ground since 1996, examines how people in need were denied access to medical care. Dr. Anthony Fauci is among those interviewed, along with activists from the 1980s. HV

On January 6, 2021, supporters of Donald Trump stormed the U.S. Capitol. Photo: Mandel Gunn/AFP/Getty Images

Capture the Kingpin
BBC Sounds, weekly episodes
If you enjoy a podcast filled with drug dealing, corruption, and encrypted phone networks, then this six-part show about the Metropolitan Police’s biggest organized crime bust is for you. As host Mobeen Azhar puts it, the story becomes “increasingly shocking” as we uncover inside stories from the squad that infiltrated key figures in the criminal organization. HV

Less Is Better
Widely available, episodes weekly from Sunday 14 January
Is it better for your health to eat high-quality meat or simply to eat less of it? Timed for a month of Veganuary curiosity and positive health messaging, Katie Revell and Olivia Oldham explore what it is like to raise and slaughter animals, how culture and education shape people’s food choices, and whether buying less but better is easy to do in practice. HV

January 6: America’s Story
Widely available, episodes weekly
As we mark the third anniversary of the storming of the U.S. Capitol, with Donald Trump set to become the next Republican nominee, Our Body Politic presents an insightful series on the people of color who helped lead the January 6 committee investigation. They talk about their experiences, starting with why they chose to protect a country that doesn’t always protect them. Holly Richardson

There’s a podcast for that

Oprah Winfrey speaks on Oprah’s “2020 Vision: Your Life in Focus” tour. Photo: Steve Jennings/Getty Images

This week, Rachel Aroesti picks the five best podcasts featuring true stories, from a chronicle of LGBT heroes to the remarkable rise of Oprah Winfrey.

Lives Less Ordinary
Truth is always stranger than fiction, and this fascinating series from the BBC World Service delicately unearths some of the most remarkable stories of human endeavour. Marvel at the determination of Tariq Mehmood, one of the Bradford 12, who was arrested as a young man for trying to protect his community from skinhead violence and went on to become a novelist; at the nous of Jaivet Ealom, whose prison break from an inhumane immigration camp in Papua New Guinea relied on his wits; and at the courage of Laura Dekker, who decided to sail around the world alone at the age of 13 (much to the alarm of the Dutch authorities).

Making Gay History
Journalist Eric Marcus established himself as a leading authority on 20th-century gay life with his award-winning 1992 book Making History. In this moving podcast, he revisits his extensive archive of interviews so that key figures in the LGBT rights movement can tell their own stories. We hear from celebrated names such as early transgender activist Sylvia Rivera, playwright Larry Kramer, and television host Ellen DeGeneres, as well as lesser-known figures whose activism has made the world a safer place for queer people.

The Diary of a CEO
Money can’t buy happiness, and making millions doesn’t automatically make you an inspirational person. But it’s also true that entrepreneur Steven Bartlett’s hit interview podcast frequently serves as motivational rocket fuel. Since 2017, Bartlett has relentlessly questioned business leaders about their childhoods, work habits, and the philosophies they live by, unearthing practical, life-changing advice for his listeners. He has since expanded his remit to include headline-grabbing celebrities including Davina McCall, Maisie Williams, Liam Payne, and Jesse Lingard.

History’s Secret Heroes
From intrepid secret agent Virginia Hall and her epic prison escape to Surrey bank clerk Eric Roberts’ hunt for Nazi sympathizers, this thrilling podcast narrated by Helena Bonham Carter relives some of the most remarkable feats of daring and defiance from the second world war. Along with stories of spies, we hear about the remarkable artistic resistance of Claude Cahun and Marcel Moore, and the pioneering feminism of Major Charity Adams, the first Black officer to serve in the Women’s Army Auxiliary Corps.
