Ornithologists at the Cornell Lab of Ornithology have unveiled the most comprehensive evolutionary tree of birds to date. The research reveals unexpected relationships and doubles as a fascinating resource for bird enthusiasts: the Birds of the World Phylogeny Explorer, where users can trace lineages and evolutionary milestones.
European bee-eater (Merops apiaster). Image credit: Rashuli / CC BY 2.0.
Understanding the phylogeny of birds is crucial for advancing bird research.
With over 11,000 bird species worldwide, consolidating phylogenetic trees into a singular, updated resource has posed significant challenges for ornithologists.
The Birds of the World Phylogeny Explorer directly addresses these challenges, remaining current with the latest scientific discoveries.
“This tool combines centuries of avian research with advanced computational tools, creating a captivating interactive resource that narrates the story of bird evolution,” stated Dr. Elliott Miller, a researcher with the American Bird Conservancy.
“New evolutionary relationships are constantly being discovered. We release annual updates to our phylogenetic tools, ensuring our datasets align with the latest taxonomy,” he added.
“This tool holds immense value for the scientific community,” remarked Dr. Pam Rasmussen from the Cornell Lab of Ornithology.
“The complete tree of bird life, built on cutting-edge phylogenetic research, is now a downloadable, interactive dataset from Birds of the World, encouraging further inquiry and exploration.”
“This evolutionary tree provides crucial insights into how evolutionary history has shaped traits such as beak shape, wing length, foraging behaviors, and habitat preferences in birds.”
“Bird lovers will appreciate the personalized features of the Birds of the World Phylogeny Explorer,” Dr. Marshall Iliff noted, also from the Cornell Lab of Ornithology.
“By logging into the platform, birders can visualize the diversity of their eBird species list, diving deep into bird history across orders, families, and genera, thus revealing noteworthy evolutionary patterns.”
“For birdwatchers, their lifetime list transforms into a personal journey through evolutionary history, highlighting how each species fits into the broader narrative of avian evolution.”
“Users are sure to encounter surprising revelations. For instance, why does the North American woodpecker closely resemble other woodpeckers yet belong to a different lineage?”
“Or why are peregrine falcons fierce hunters like hawks and eagles, even though they originate from a separate branch of the family tree?”
“Solving these taxonomic enigmas can become a lifelong pursuit for anyone deeply passionate about birds.”
Struggling to recall numerous passwords? If you can remember them all, you either have too few or are using the same one across multiple sites. By 2026, this challenge could become obsolete.
Passwords present significant cybersecurity challenges; hackers trade stolen credentials daily. A Verizon analysis reveals that only 3% of passwords are complex enough to resist hacking attempts.
Fortunately, an innovative solution is emerging, making data security simpler. Instead of cumbersome passwords, biometric authentication—such as facial recognition or fingerprint scanning—is increasingly being used for seamless logins.
“Passwordless authentication is becoming universal, providing robust security against phishing and brute force attacks,” says Jake Moore, an expert at cybersecurity firm ESET.
If you currently access your banking apps with your fingerprint, you’re already utilizing this method. A passkey consists of a cryptographic key pair: a public key sent to your service (like your bank) during account creation, and a private key securely stored on your device.
To log in, your bank sends a one-time cryptographic challenge to your device instead of requesting a password. Your fingerprint unlocks a secure chip that uses your private key to sign the challenge, sending the signed response back to your bank for verification against the public key. Importantly, your biometric data remains on your device. “Passkeys offer security, ease of use, and unparalleled convenience,” adds Moore.
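The flow described above is standard public-key challenge-response. Here is a minimal sketch, assuming Python and its widely used `cryptography` package; real passkeys follow the FIDO2/WebAuthn standard, which this simplifies heavily, and all names are illustrative.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

# Enrolment: the keypair is generated on the device's secure chip;
# only the public key is sent to the service (e.g. your bank).
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Login: the service issues a random, one-time challenge.
challenge = os.urandom(32)

# Device: your fingerprint unlocks the private key, which signs the challenge.
# The biometric and the private key never leave the device.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Service: verifies the signature against the stored public key;
# verify() raises InvalidSignature if the response was forged.
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified - login approved")
```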
Major companies are actively pushing passkey adoption. Microsoft announced in May 2025 that new accounts created with them will default to passwordless. “While passwords have been prevalent for centuries, their reign could soon come to an end,” the company stated. More organizations are expected to follow suit within the next year. Moore anticipates that as additional platforms embrace passkeys, more users will adopt biometric logins such as face and fingerprint scans.
Various sectors are embracing passkey technology. Online gaming platform Roblox is rapidly expanding its use of passkeys, reporting an 856% increase in users authenticating with them. The public sector is also participating: the German Federal Employment Agency ranks among the leading organizations adopting passkeys.
“Decreasing dependence on passwords benefits every organization,” affirms Andrew Shikiar of the FIDO Alliance, which advocates for passkey adoption. The transition also eases user frustration: data shows that organizations switching to passkeys see an 81% drop in IT helpdesk requests about login issues. Shikiar predicts that over half of the top 1,000 websites will adopt passkeys by 2026.
The evolving experience of young people on the internet.
Linda Raymond/Getty Images
In 2025, numerous countries implemented new internet access restrictions aimed at protecting children from harmful content, with more expected to follow in 2026. However, do these initiatives genuinely safeguard children, or do they merely inconvenience adults?
The UK’s Online Safety Act (OSA), which took effect on July 25, mandates that websites prevent children from accessing pornography or content that promotes self-harm, violence, or dangerous activities. While intended to protect, the law has faced backlash due to its broad definition of “harmful content,” which resulted in many small websites closing down as they struggled to meet the regulatory requirements.
In Australia, a new policy prohibits those under 16 from using social media, even with parental consent, as part of the Online Safety Amendment (Social Media Minimum Age) Act 2024. The legislation grants regulators the authority to impose fines of up to A$50 million on companies that fail to prevent minors from accessing their platforms. The European Union is considering similar bans. Meanwhile, France has instituted a law requiring age verification for websites with pornographic material, prompting protests from adult website operators.
Concerns are growing about the technology used for age verification, with some sites relying on facial recognition tools that can be tricked with screenshots of video game characters. Moreover, VPNs allow users to masquerade as being from regions without strict age verification requirements. Since the OSA took effect, searches for VPNs have surged, with reports of as much as an 1,800% increase in daily registrations following the law’s implementation. The most prominent adult site saw a 77% decline in UK visitors in the aftermath of the OSA, as users changed their settings to appear to be located in countries where age verification isn’t enforced.
The Children’s Commissioner for England emphasized that these loopholes need to be addressed and has made recommendations for age verification measures to prevent children from using VPNs. Despite this, many argue that such responses address symptoms rather than the root of the problem. So, what is the appropriate course of action?
Andrew Coun, a former member of Meta and TikTok’s safety and moderation teams, argues that harmful content isn’t deliberately targeted at children; rather, algorithms aim to maximize engagement, which boosts ad revenue. This fuels skepticism about whether tech companies are genuinely willing to protect kids, since tighter restrictions could cut into their profits.
“It’s exceedingly unlikely that they will prioritize compliance,” he remarked, noting the inherent conflict between their interests and public welfare. “Ultimately, profits are a primary concern, and they will likely fulfill only the minimum requirements to comply.”
Graham Murdoch, a researcher at Loughborough University, believes the surge in online safety regulations will likely yield disappointment, as policymaking typically lags behind the rapid advancements of technology firms. He advocates for the establishment of a national internet service complete with its own search engine and social platforms, guided by a public charter akin to that of the BBC.
“The Internet should be regarded as a public service because of the immense value it offers to everyday life,” Murdoch stated. “We stand at a pivotal moment; if decisive action isn’t taken soon, returning to our current trajectory will be impossible.”
The introduction to tech mogul Alex Karp’s interview on Sourcery, a YouTube show by the digital finance platform Brex, features a mix of him waving the American flag accompanied by a remix of AC/DC’s “Thunderstruck.” While strolling through the company’s offices, Karp avoided questions about Palantir’s contentious ties with ICE, focusing instead on the company’s strengths while playfully brandishing a sword and recounting how he reburied his childhood dog Rosita’s remains near his current residence.
“It’s really lovely,” comments host Molly O’Shea as she engages with Karp.
For those wanting insights from key figures in the tech sector, platforms like Sourcery provide a refuge for an industry that’s increasingly cautious, if not openly antagonistic, towards critical media. Some new media initiatives are driven by the companies themselves, while others occupy niches favored by the tech billionaire cohort. In recent months, prominent figures like Mark Zuckerberg, Elon Musk, Sam Altman, and Satya Nadella have participated in lengthy, friendly interviews, with companies like Palantir and Andreessen Horowitz launching their own media ventures this year.
A significant portion of Americans harbor distrust towards big tech and believe artificial intelligence is detrimental to society. Silicon Valley is crafting its own alternative media landscape, where CEOs, founders, and investors take center stage. What began as a handful of enthusiastic podcasters has evolved into a comprehensive ecosystem of publications and shows, supported by some of the leading entities in tech.
Pro-tech influencers, such as podcast host Lex Fridman, have long fostered close ties with figures like Elon Musk, yet some companies this year opted to eliminate intermediaries entirely. In September, venture capital firm Andreessen Horowitz launched the a16z blog on Substack, where general partner Katherine Boyle has highlighted her longstanding friendship with JD Vance. The firm’s podcast has surged past 220,000 subscribers on YouTube and featured OpenAI CEO Sam Altman last month; Andreessen Horowitz is an investor in OpenAI.
“What if the future of media is shaped not by algorithms or traditional institutions, but by independent voices directly interacting with audiences?” the company posited in its Substack announcement. It previously invested $50 million in digital media startup BuzzFeed with a similar ambition; BuzzFeed ultimately fell to penny-stock levels.
The a16z Substack also revealed this month its new eight-week media fellowship aimed at “operators, creators, and storytellers shaping the future of media.” This initiative involves collaboration with a16z’s new media team, characterized as a collective of “online legends” aiming to furnish founders with the clout, flair, branding, expertise, and momentum essential for winning the online narrative.
In parallel to a16z’s media endeavors, Palantir launched a digital and print journal named Republic earlier this year, emulating the format of academic journals and think tank publications like Foreign Affairs. The journal is financially backed by the nonprofit Palantir Foundation for Defense Policy and International Affairs, headed by Karp, who reportedly contributes just 0.01 hours a week, as per his 2023 tax return.
“Too many individuals who shouldn’t have a voice are amplified, while those who ought to be heard are sidelined,” remarked Republic, which boasts an editorial team comprised of high-ranking Palantir executives.
Among the articles featured in Republic is a piece criticizing U.S. copyright restrictions for hindering AI leadership, alongside another by two Palantir employees reiterating Karp’s affirmation that Silicon Valley’s collaboration with the military benefits society at large.
Republic joins a burgeoning roster of pro-tech outlets like Arena Magazine, launched late last year by Austin-based venture capitalist Max Meyer. Arena’s motto nods to “The New Needs Friends” line from Disney’s Ratatouille.
“Arena avoids covering ‘The News.’ Instead, we spotlight The New,” reads the editor’s letter in the inaugural issue. “Our mission is to uplift those incrementally, or at times rapidly, bringing the future into the present.”
This sentiment echoes that of founders who have taken issue with publications like Wired and TechCrunch for their overly critical perspectives on the industry.
“Historically, magazines that covered this sector have become excessively negative. We plan to counter that by adopting a bold and optimistic viewpoint,” Meyer stated during an appearance on Joe Lonsdale’s podcast.
Certain facets of emerging media in the tech realm weren’t established as formal corporate media extensions but rather emerged organically, even while sharing a similarly positive tone. The TBPN video podcast, which interprets the intricacies of the tech world as high-stakes spectacles akin to the NFL Draft, has gained swift influence since its inception last year. Its self-aware yet protective atmosphere has drawn notable fans and guests, including Meta CEO Mark Zuckerberg, who conducted an in-person interview to promote Meta’s smart glasses.
Another podcaster, 24-year-old Dwarkesh Patel, has built a mini-media empire in recent years with extensive collaborative discussions featuring tech leaders and AI researchers. Earlier this month, Patel interviewed Microsoft CEO Satya Nadella and toured one of the company’s newest data facilities.
Among the various trends in the tech landscape, Elon Musk has been a pioneer of this pro-tech media strategy. Since his acquisition of Twitter in 2022, the platform has restricted links to major news outlets and set up auto-replies of poop emojis to reporter inquiries. Musk gives few interviews to mainstream media yet engages in extensive discussions with friendly hosts like Lex Fridman and Joe Rogan, facing minimal challenge to his viewpoints.
Musk’s inclination to cultivate a media bubble around himself illustrates how such content can foster a disconnect from reality and promote alternative facts. His long-standing criticism of Wikipedia spurred him to create Grokipedia, an AI-generated replica that produces blatant falsehoods and results aligned with his far-right perspective. Concurrently, Musk’s chatbot Grok has frequently echoed his opinions, even going to absurd lengths to flatter him, such as asserting last week that Musk is healthier than LeBron James and could defeat Mike Tyson in a boxing match.
The emergence of new technology-centric media is part of a broader transformation in how celebrities portray themselves and the access they grant journalists. The tech industry has a historical aversion to media scrutiny, a trend amplified by scandals like the Facebook Files, which unveiled internal documents and potential harms. Journalist Karen Hao exemplified the tech sector’s sensitivity to negative press, noting in her 2025 book “Empire of AI” that OpenAI refrained from engaging with her for three years after a critical article she wrote in 2019.
The strategy of tech firms establishing their own autonomous and resonant media mirrors the entertainment sector’s approach from several years back. Press tours for film and album promotions have historically been tightly monitored, with actors and musicians subjected to high-pressure interviews judged by shows like “Hot Ones.” Political figures are adopting a similar framework, granting them access to fresh audiences and a more secure environment for self-promotion, as showcased by President Donald Trump’s 2024 campaign engaging with podcasters like Theo Fung, and California Governor Gavin Newsom’s introduction of his own political podcast this year.
While much of this emerging media does not aim to unveil misconduct or confront the powerful, it still holds certain merits. The content produced by the tech sector often reflects the self-image of its elite and the world they aspire to create, within an industry characterized by minimal government oversight and fewer probing inquiries into operational practices. Even the simplest of questions offer insights into the minds of individuals who primarily inhabit secured boardrooms and gated environments.
“If you were a cupcake, what kind would you be?” O’Shea asked Karp.
“I prefer not to be a cupcake, as I don’t want to be consumed,” Karp replied. “I resist being a cupcake.”
Almost 10% of parents in the UK report that their children have been blackmailed online, including threats over intimate photos and the exposure of personal information.
The NSPCC, a child protection charity, indicated that while 20% of parents are aware of a child who has been a victim of online blackmail, 40% seldom or never discuss the issue with their children.
According to the National Crime Agency, over 110 reports of attempted child sextortion are filed monthly. In these cases, gangs manipulate teenagers into sharing intimate images and then resort to blackmail.
Authorities in the UK, US, and Australia have noted a surge in sextortion cases, particularly affecting teenage boys and young men, who are targeted by cybercrime groups from West Africa and Southeast Asia. Tragically, some cases have resulted in suicide, such as that of 16-year-old Murray Dawe from Dunblane, Scotland, who took his life in 2023 after being sextorted on Instagram, and 16-year-old Dinal de Alwis, who died in Sutton, south London, in October 2022 after being threatened over nude photographs.
The NSPCC released its findings based on a survey of over 2,500 parents, emphasizing that tech companies “fail to fulfill their responsibility to safeguard children.”
Rani Govender, policy manager at the NSPCC, stated: “Children deserve to be safe online, and this should be intrinsically woven into these platforms, not treated as an afterthought after harm has occurred.”
The NSPCC defines blackmail as threats to release intimate images or videos of a child, or any private information the victim wishes to keep confidential, including aspects like their sexuality. Such information may be obtained consensually, through coercion, manipulation, or even via artificial intelligence.
The perpetrators can be outsiders, such as sextortion gangs, or acquaintances like friends or classmates. Blackmailers might demand various things in exchange for not disclosing information, such as money, additional images, or maintaining a relationship.
The NSPCC explained that while extortion overlaps with sextortion, it encompasses a broader range of situations. “We opted for the term ‘blackmail’ in our research because it includes threats related to various personal matters children wish to keep private (e.g., sexual orientation, images without religious attire) along with various demands and threats, both sexual and non-sexual,” the charity noted.
The report also advised parents to refrain from “sharenting,” the practice of posting photos or personal information about their children online.
Experts recommend educating children about the risks of sextortion and being mindful of their online interactions. They also suggest creating regular opportunities for open discussions between children and adults, such as during family meals or car rides, to foster an environment where teens are comfortable disclosing if they face threats.
“Understanding how to discuss online threats in a manner appropriate to their age and fostering a safe space for children to come forward without fear of judgment can significantly impact their willingness to speak up,” Govender emphasized.
The NSPCC spoke with young individuals regarding their reluctance to share experiences of attempted blackmail with parents or guardians. Many cited feelings of embarrassment, a preference to discuss with friends first, or a belief that they could handle the situation on their own.
The European Parliament has proposed that children under the age of 16 should be prohibited from using social media unless their parents grant permission.
On Wednesday, MEPs overwhelmingly approved a resolution concerning age restrictions. While this resolution isn’t legally binding, the urgency for European legislation is increasing due to rising concerns about the mental health effects on children from unfettered internet access.
The European Commission, which proposes EU legislation, is already examining Australia’s social media ban for under-16s, due to take effect next month.
Commission President Ursula von der Leyen indicated in a September speech that she would closely observe the rollout of Australia’s initiative. She condemned “algorithms that exploit children’s vulnerabilities to foster addiction” and said that parents often feel overwhelmed by “the flood of big tech entering our homes.”
Ms. von der Leyen pledged to establish an expert panel by the year’s end to provide guidance on effectively safeguarding children.
There’s increasing interest in limiting children’s access to social media and smartphones. A report commissioned by French President Emmanuel Macron last year recommended that children should not have smartphones until age 13 and should refrain from using social media platforms like TikTok, Instagram, and Snapchat until they turn 18.
Danish Social Democratic Party lawmaker Christel Schaldemose, who authored the resolution, said it is essential for politicians to act to protect children. “This is not solely a parental issue. Society must take responsibility for ensuring that platforms are safe environments for minors, and that children access them only above a specified age.”
Her report advocates for the automatic disabling of addictive elements like infinite scrolling, auto-playing videos, excessive notifications, and rewards for frequent use when minors access online platforms.
The resolution emphasizes that “addictive design features are typically integral to the business models of platforms, particularly social media.” An early draft of Schaldemose’s report referenced a study indicating that one in four children and young people exhibit “problematic” or “dysfunctional” smartphone use, resembling addictive behavior. It states that children should be 16 before accessing social media, although parents can consent from age 13.
The White House has urged the EU to retract its digital regulations, a pressure campaign that formed the backdrop to the vote. U.S. Commerce Secretary Howard Lutnick suggested at a meeting in Brussels that EU regulations on tech companies should be re-evaluated in exchange for reduced U.S. tariffs on steel and aluminum.
Stéphanie Yon-Courtin, a French lawmaker from Macron’s party, responded to Lutnick’s visit by asserting that Europe is not a “regulatory colony.” After the vote, she remarked: “Our digital laws are not negotiable. We will not compromise child protections just because a foreign billionaire or tech giant attempts to influence us.”
The EU is already committed to shielding internet users from online dangers like misinformation, cyberbullying, and unlawful content through the Digital Services Act. However, the resolution highlights existing gaps in the law that need to be addressed to better protect children from online risks, such as addictive design features and financial incentives to become influencers.
Schaldemose acknowledged that the law, of which she is a co-author, is robust, “but we can enhance it further because it remains less specific and less defined, particularly in regard to addictive design features and harmful dark-pattern practices.”
Dark patterns refer to design elements in apps and websites that manipulate user decisions, such as countdown timers pushing purchases or persistent requests to enable location tracking or notifications.
Schaldemose’s resolution was endorsed by 483 members, while 92 voted against it and 86 abstained.
Eurosceptic lawmakers criticized the initiative, arguing that an EU-imposed ban on children’s access to social media would be overreach. “Decisions about children’s online access should be made as closely as possible to families in member states, not in Brussels,” stated Kosma Złotowski, a Polish member of the European Conservatives and Reformists group.
The resolution was adopted just a week after the Commission announced a delay in overhauling the Artificial Intelligence Act and other digital regulations, part of an effort to relax rules for businesses under the banner of “simplification.”
Schaldemose acknowledged the importance of not overwhelming the legislative system, but added, “There is a collective will to do more regarding children’s protection in the EU.”
New guidelines have urged social media platforms to curtail internet “pile-ons” to better safeguard women and girls online.
Ofcom, Britain’s communications regulator, implemented guidance on Tuesday aimed at tackling misogynistic abuse, coercive control, and the non-consensual sharing of intimate images, with a focus on minimizing online harassment of women.
The measures suggest that tech companies should limit the number of replies to posts on platforms like X, a strategy Ofcom believes will reduce incidents in which individual users are inundated with abusive responses.
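Ofcom’s guidance doesn’t prescribe an implementation, but a reply cap of this kind is typically enforced as a sliding-window rate limit. Here is a minimal sketch of the idea, with hypothetical thresholds; a production system would persist state and weigh many more signals:

```python
import time
from collections import defaultdict, deque

REPLY_LIMIT = 50      # hypothetical cap on replies per post per window
WINDOW_SECONDS = 600  # hypothetical 10-minute sliding window

_recent_replies: dict[str, deque] = defaultdict(deque)  # post_id -> reply timestamps

def allow_reply(post_id: str) -> bool:
    """Return True if another reply to this post stays within the cap."""
    now = time.monotonic()
    window = _recent_replies[post_id]
    # Discard timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= REPLY_LIMIT:
        return False  # pile-on threshold reached; hide or queue further replies
    window.append(now)
    return True
```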
Additional measures proposed by Ofcom include utilizing databases of images to prevent the non-consensual sharing of intimate photos—often referred to as ‘revenge porn’.
The regulator advocates “hash matching” technology to help platforms remove reported images. The system converts user-reported images or videos into “hashes,” or digital fingerprints, and cross-references them against a database of known illegal content, enabling matching copies to be identified and removed.
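In outline, hash matching reduces each verified harmful image to a compact fingerprint and checks every new upload against a blocklist of those fingerprints. The sketch below uses a cryptographic hash for simplicity; deployed systems typically use perceptual hashes (such as PDQ) so that resized or re-encoded copies still match:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Simplified: SHA-256 matches only byte-identical files. Perceptual
    # hashing is used in practice to survive cropping and re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

# Populated from images reported by users and verified by moderators.
blocked_fingerprints: set[str] = set()

def register_reported_image(image_bytes: bytes) -> None:
    blocked_fingerprints.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    # The platform never needs to store the images themselves, only hashes.
    return fingerprint(image_bytes) in blocked_fingerprints
```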
These recommendations were put forth under the Online Safety Act (OSA), a significant law designed to shield children and adults from harmful online content.
While the advice is not obligatory, Ofcom is urging social media companies to follow it, announcing plans to release a report in 2027 assessing individual platforms’ responses to the guidelines.
The regulator indicated that the OSA could be reinforced if the recommendations are not acted upon or prove ineffective.
“If their actions fall short, we will consider formally advising the government on necessary enhancements to online safety laws,” Ofcom stated.
Dame Melanie Dawes, Ofcom’s chief executive, has encountered “shocking” reports of online abuse directed at women and girls.
“We are sending a definitive message to tech companies to adhere to practical industry guidance that aims to protect women from the genuine online threats they face today,” Dawes stated. “With ongoing support from our campaigners, advocacy groups, and expert partners, we will hold companies accountable and establish new benchmarks for online safety for women and girls in the UK.”
Ofcom’s other recommendations include prompts asking users to reconsider before posting abusive content, “time-outs” for frequent offenders, and preventing misogynistic users from earning ad revenue from their posts. The guidance would also have platforms let users swiftly block or mute several accounts at once.
These recommendations conclude a process that started in February, when Ofcom conducted a consultation that included suggestions for hash matching. However, more than a dozen guidelines, like establishing “rate limits” on posts, are brand new.
Internet Matters, a nonprofit organization dedicated to children’s online safety, argued that the government should make the guidance mandatory, cautioning that many tech companies might otherwise ignore it. Ofcom is considering whether to make its hash matching recommendations enforceable.
Rachel Huggins, co-chief executive of Internet Matters, remarked: “We know many companies will disregard this guidance simply because it is not legally binding, leading to continued unacceptable levels of online harm faced by women and girls today.”
Technology Secretary Liz Kendall has warned that Britain’s internet regulator, Ofcom, may lose public confidence if it doesn’t take adequate measures to address online harm.
During a conversation with Ofcom’s Chief Executive Melanie Dawes last week, Ms. Kendall expressed her disappointment with the slow enforcement of the Online Safety Act, designed to shield the public from dangers posed by various online platforms, including social media and adult websites.
While Ofcom said the delays were beyond its control and that “change is underway,” Ms. Kendall told the Guardian: “If they don’t use their powers, they risk losing public trust.”
The father of Molly Russell, who tragically took her life at 14 after encountering harmful online material, expressed his disillusionment with Ofcom’s leadership.
When questioned about her faith in the regulator’s leadership, Kendall declined to offer an endorsement.
Her comments come amidst worries that key components of the online safety framework may not be implemented until mid-2027—nearly four years after the Online Safety Act was passed—and that the rapid pace of technological advancement could outstrip government regulations.
Kendall also voiced significant concerns about “AI chatbots” and their influence on children and young adults.
This concern is underscored by U.S. cases involving teenagers who died by suicide after forming deep emotional bonds with ChatGPT and Character.AI chatbots, treating them as confidants.
“If chatbots are not addressed in the legislation or aren’t adequately regulated—something we are actively working on—they absolutely need to be,” Kendall asserted. “Parents need assurance that their children are safe.”
With Ofcom Chairman Michael Grade set to resign in April, a search for his successor is underway. Ms. Dawes has been CEO for around six years, having served in various roles in public service. Ofcom declined to provide further comment.
Michael Grade will soon step down as chairman of Ofcom. Photo: Leon Neal/Getty Images
On Thursday, regulators imposed a £50,000 fine on a “nudify” app for failing to prevent minors from accessing pornography. Such apps use AI to “undress” uploaded photos.
Ms. Kendall said that Ofcom is “progressing in the right direction.” This marks only the second fine issued by the regulator since the law was enacted over two years ago.
She spoke at the launch of a new AI “Growth Zone” in Cardiff, which aims to draw £10 billion in investment and create 5,000 jobs across various locations, including the former Ford engine factory in Bridgend and Newport.
The government noted that Microsoft is one of the companies “collaborating with the government,” although Microsoft has not made any new investment commitments.
Ministers also plan to allocate £100 million to support British startups, particularly in designing chips that power AI, where they believe the UK holds a competitive edge. However, competing with U.S. chipmaker Nvidia, which recently reported nearly $22 billion in monthly revenue, may prove challenging.
On Wednesday, Labour MPs accused Microsoft of “defrauding” British taxpayers, as U.S. tech firms raked in at least £1.9 billion from government contracts in the 2024-25 financial year.
When asked for her thoughts, Ms. Kendall praised the use of Microsoft’s AI technology to create lesson plans in schools in her constituency but emphasized the need for better negotiating expertise to secure optimal deals. She also expressed a desire to see more domestic companies involved, especially in the AI sector.
A Microsoft spokesperson clarified that the NHS procures its services through a national pricing framework negotiated by the UK government, which “ensures both transparency and value for money,” stating that the partnership is delivering “tangible benefits.”
“The UK government chooses to distribute its technology budget among various suppliers, and Microsoft is proud to be one of them,” they added.
Rapid technological change can widen the gap between parents and teens.
Moreover, cyberattacks on major companies are frequently in the news, and many of those behind these hacks are young people with advanced digital skills. The National Crime Agency reports that one in five children has engaged in unlawful activity under the Computer Misuse Act, which penalizes unauthorized access to computer systems or data. The figure rises to 25% among gamers.
To combat this, Co-op is taking a preventive approach. As part of its long-term mission to empower young people to harness their technology skills, Co-op has teamed up with The Hacking Games, a venture aimed at helping talented gamers secure positions in the cybersecurity sector.
This collaborative model is crucial because, as Greg Francis, former senior officer at the National Crime Agency and director of 4D Cyber Security, puts it, “A digital village is necessary to nurture digital natives.” Early intervention is essential, and parents play a pivotal role. “Parents are vital as they wield significant influence, but they shouldn’t remain passive. They should grasp the fundamentals of the hacker universe,” notes Francis, who also serves as The Hacking Games’ cyber ambassador. So, where to begin?
Show Interest Without Judgment
First and foremost, having an interest in hacking isn’t inherently negative.
“Ethical hacking is an exhilarating and rapidly evolving domain, making it completely understandable for children to find it intriguing,” says Lynne Perry, CEO of children’s charity Barnardo’s, which partners with Co-op to raise funds to help young people build positive futures.
Maintaining an open dialogue is just as critical as beginning discussions early. “The ideal moment to start is now,” states Perry. “Once your child shows an interest in online technology, it’s time. Frequent, age-appropriate discussions are essential to keep the lines of communication open.”
Activities that seem innocuous can lead to a path towards cybercrime. Composite: Stocksy/Guardian Design
Perry advises involving children in online activities from a young age. “Explore technology together and discuss what to do if something unusual or concerning occurs. As kids mature, they may seek more independence, but regular interaction allows them to steer conversations, ask questions, and express concerns.”
For parents who grew up in a simpler digital age, grasping the complexities of today’s online gaming, dominated by franchises like Roblox, Minecraft, and Call of Duty, might seem daunting. However, both Francis and Perry emphasize that you don’t need to have all the answers to provide support.
Parents should check game age ratings and utilize parental controls, such as friend-only features, to enhance the security of in-game chats. For online resources, check Ask About Games for detailed information on popular games and guides to setting up safety measures.
It’s also worth asking whether your young gamer has ever been “booted” offline. Booting refers to a DDoS (distributed denial of service) attack, in which someone obtains another gamer’s IP address and floods it with traffic, knocking their connection offline. While booting may be shrugged off among gamers, it is a serious issue. Francis clarifies: “They may not realize this infringes on the Computer Misuse Act.” In fact, booting is identified as one of the first steps towards cybercrime, as Francis has seen in his work with prevention programs.
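On the receiving end, this kind of flood is typically spotted by watching how fast traffic arrives. A toy sketch of the detection idea, with a made-up threshold; real mitigation happens in network infrastructure rather than application code:

```python
import time
from collections import deque

PACKETS_PER_SECOND_LIMIT = 5000  # hypothetical threshold for a home connection

class FloodDetector:
    """Flags a traffic spike consistent with a 'booting' (DDoS) attempt."""

    def __init__(self) -> None:
        self.timestamps: deque = deque()

    def record_packet(self) -> bool:
        """Record one incoming packet; return True if the rate looks like a flood."""
        now = time.monotonic()
        self.timestamps.append(now)
        # Keep only the most recent second of traffic.
        while self.timestamps and now - self.timestamps[0] > 1.0:
            self.timestamps.popleft()
        return len(self.timestamps) > PACKETS_PER_SECOND_LIMIT
```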
Asking questions aligns with observing potential warning signs like excessive gaming, social withdrawal, unexplained tiredness, unusual purchases of equipment or technology (especially if you’re unaware of how it was paid for), and multiple email addresses. While one sign alone might not be serious, a combination of them can be concerning.
Mary* faced these warning signs firsthand. “I had a son engaged in hacking on the darknet. He isolated himself and avoided sleep. I truly had no clue about his activities,” she shares. “After consulting a cybersecurity expert and discussing my challenges, I discovered he was attempting to delve into the cryptocurrency world on the darknet at just 13 years old.”
Guidance from trusted sources inspires talented young individuals to utilize their skills positively. Composite: Getty Images/Guardian Design
A Transformative Path for Neurodivergent Youth
Particularly for neurodivergent youth, engaging with games and spending time online can yield significant advantages in terms of socialization and emotion regulation. Yet, it’s crucial to recognize that with these benefits come potential drawbacks, including the considerable risks of internet or gaming addiction and the associated allure of cybercrime.
However, over 50% of technology professionals identify as neurodivergent, according to the Tech Talent Charter, indicating vast opportunities for neurodivergent young individuals in this sector. This is why The Hacking Games directly targets “digital rebels” showcasing “raw talent” and “unconventional thinking,” matching them with cybersecurity job opportunities, mentors, and fostering community through Discord group chats.
As Mary can confirm, mentorship and career awareness can be life-changing. “Cyber experts supported my son as a credible source of information and ultimately coached him on my behalf,” she states. “They helped him realize that he could channel his skills for impactful purposes. Consequently, he began assisting others.”
While this situation may seem alarming, there are numerous ways for parents to intervene positively. Approaching the subject with curiosity and care, rather than judgment, is paramount for guiding your child in the right direction. Here are some suggestions for parents who are concerned about their kids.
1 Begin conversations regarding online gaming safety early, approaching the topic with sensitivity rather than judgment. Remaining calm fosters open communication.
2 You don’t need to be fully informed, but a genuine interest can lead to insightful discussions. Ask your child about their games and online activities. Just as you would inquire about who they play with at a park, ask the same about their online friends. Be vigilant for warning signs like strangers trying to befriend them, offering freebies, or inviting them to unfamiliar worlds or games, as these could indicate grooming.
3 Take proactive measures. Pay attention to age ratings for games, which are significant. The best way to ascertain what is suitable for your child is to play the game together or at least observe them while they play. Remember, just like in Call of Duty, children can also be recruited in games like Minecraft. Games with community or “freemium” options can entice young players seeking extra income through in-game purchases or upgrades.
4 Monitor for warning signs such as social withdrawal, excessive gaming, lack of sleep, unusual tech purchases, and multiple email accounts.
5 Engage with your child’s school. Consult their computer science teacher to learn how they promote digital responsibility. Teachers often have insight into which students may require specific support to enhance their skills. This could serve as an early opportunity to channel their talents positively through initiatives like Cyber First and Cyber Choices or coding communities such as Girls Who Code.
*Mary’s name has been changed to protect her family’s anonymity.
Increasing concerns have been raised regarding the federal government’s need to tackle the dangers that children face on the widely-used gaming platform Roblox, following a report by Guardian Australia that highlighted a week of incidents involving virtual sexual harassment and violence.
While role-playing as an 8-year-old girl, the reporter encountered a sexualized avatar and faced cyberbullying, acts of violence, sexual assault, and inappropriate language, despite having parental control settings in place.
From December 10, platforms including Instagram, Snapchat, YouTube, and Kick will be under Australia’s social media ban preventing Australians under 16 from holding social media accounts, yet Roblox will not be included.
Independent MP Monique Ryan labeled the exclusion “unexplainable.” She remarked: “Online gaming platforms like Roblox expose children to unlimited gambling, cloned social media apps, and explicit content.”
At a press conference on Wednesday, eSafety Commissioner Julie Inman Grant stated that platforms would be examined based on their “singular and essential purpose.”
“Kids engaging with Roblox currently utilize chat features and messaging for online gameplay,” she noted. “If online gameplay were to vanish, would kids still use the messaging feature? Likely not.”
“If these platforms start introducing features that align them more with social media companies rather than online gaming ones, we will attempt to intervene.”
According to government regulations, services primarily allowing users to play online games with others are not classified as age-restricted social media platforms.
Nonetheless, some critics believe that this approach is too narrow for a platform that integrates gameplay with social connectivity. Nyusha Shafiabadi, an associate professor of information technology at Australian Catholic University, asserts that Roblox should also fall under the ban.
She highlighted that the platform enables players to create content and communicate with one another. “It functions like a restricted social media platform,” she observed.
Independent MP Nicolette Boele urged the government to rethink its stance. “If the government’s restrictions bar certain apps while leaving platforms like Roblox, which has been called a ‘pedophile hellscape’, unshielded, we will fail to safeguard children and drive them into more perilous and less regulated environments,” she remarked.
A spokesperson for communications minister Anika Wells said that excluding Roblox from the teen social media ban does not mean it is free from accountability under the Online Safety Act.
A representative from eSafety stated, “We can extract crucial safety measures from Roblox that shield children from various harms, including online grooming and sexual coercion.”
eSafety said that by the end of the year, Roblox would roll out age assurance technology, restrict adults from contacting children without explicit parental consent, and set accounts to private by default for users under 16.
“Children under 16 who enable chat through age estimation will no longer be permitted to chat with adults. Alongside current protections for those under 13, we will also introduce parental controls allowing parents to disable chat for users between 13 and 15,” the spokesperson elaborated.
Should entities like Roblox not comply with child safety regulations, authorities have enforcement capabilities, including fines of up to $49.5 million.
eSafety stated it will “carefully oversee Roblox’s adherence to these commitments and assess regulatory measures in the case of future infractions.”
Joanne Orlando, an expert on digital wellbeing at Western Sydney University, pointed out that Roblox’s primary safety issues are grooming threats and the increasing monetization of children engaging with “the world’s largest game.”
She mentioned that it is misleading to view it solely as a video game. “It’s far more significant. There are extensive social layers, and a vast array of individuals on that platform,” she observed.
Greens senator Sarah Hanson-Young criticized the government for “playing whack-a-mole” with the social media ban.
“We want major technology companies to assume responsibility for the safety of children, irrespective of age,” she emphasized.
“We need to strike at these companies where it truly impacts them. That’s part of their business model, and governments hesitate to act.”
Shadow communications minister Melissa McIntosh also expressed concerns about the platform. She said that while Roblox has introduced enhanced safety measures, “parents must remain vigilant to guard their children online.”
“The eSafety Commissioner and the government carry the responsibility to do everything within their power to protect children from the escalating menace posed by online predators,” she said.
A representative from Roblox stated that the platform is “dedicated to pioneering safety through stringent policies that surpass those of other platforms.”
“We utilize AI to scrutinize games for violating content prior to publication, we prohibit users from sharing images or videos in chats, and we implement sophisticated text filters designed to prevent children from disclosing personal information,” they elaborated.
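Roblox hasn’t published its filter internals, so the following is a hedged illustration only: the simplest form of such a text filter is pattern matching over outgoing chat, as sketched below. Real systems layer on machine-learning classifiers and handle deliberate obfuscation (digits spelled out, spacing tricks):

```python
import re

# Illustrative patterns only; a production filter is far more sophisticated.
PII_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # phone-style numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),       # email addresses
    re.compile(r"\b(snap|insta|discord)\b[:\s]*\S+", re.IGNORECASE),  # off-platform handles
]

def filter_chat(message: str) -> str:
    """Replace anything resembling personal contact details with hashes."""
    for pattern in PII_PATTERNS:
        message = pattern.sub("####", message)
    return message

print(filter_chat("msg me on discord: kid_gamer#1234"))
# -> "msg me on ####"
```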
Families bereaved by pro-suicide forums, along with survivors, are urging a public inquiry into the government’s inaction on online safety.
The demand follows a report revealing that coroners had raised concerns about suicide forums with three government departments at least 65 times since 2019.
The report also indicated that methods promoted via these platforms are associated with at least 133 deaths in the UK, including the youngest identified victim, only 13 years old.
The analysis, released by the Molly Rose Foundation—established after the tragic loss of 14-year-old Molly Russell in November 2017—stemmed from a comprehensive review of coroner reports aimed at preventing future fatalities.
Their findings stated that the Department of Health and Social Care, the Home Office, and the Department of Science, Innovation and Technology all neglected to heed warnings from coroners about the risks posed by pro-suicide forums.
In correspondence to the Prime Minister, the Survivors’ Group for Preventing Online Suicide Victims expressed their “disappointment regarding the sluggish governmental response to an urgent threat, despite numerous alerts to safeguard lives and mitigate harm.”
The letter stated: “These failures necessitate a legal response, not only to comprehend the circumstances surrounding our loved ones’ deaths but also to avert similar tragedies in the future.
“It’s critical to focus on change over blame, to protect vulnerable youth from entirely preventable dangers.”
Among the letter’s signatories is the family of Amy Walton, who died after engaging with pro-suicide material online.
The foundation is advocating for a public inquiry to examine the Home Office’s inadequacies in enforcing stricter regulations on harmful substances and Ofcom’s lack of action against the threats posed by pro-suicide forums.
Andy Burrows, the chief executive of the Molly Rose Foundation, emphasized that the report highlights how the government’s ongoing failures to protect its vulnerable citizens have resulted in numerous tragic losses due to the dangerous nature of suicide forums.
He remarked: “It’s unfathomable that Ofcom has left the future of a forum that seeks to manipulate and pressure individuals into taking their own lives unresolved, rather than moving quickly and decisively to legally shut it down in the UK.”
“A public inquiry is essential to derive crucial lessons and implement actions that could save lives.”
The push for an inquiry has the backing of the law firm Leigh Day, which represents seven clients who have experienced loss.
A government spokesperson stated: “Suicide impacts families deeply, and we are resolute in our commitment to hold online services accountable for ensuring user safety on their platforms.
“According to online safety regulations, these services must take necessary actions to prevent access to illegal suicidal and self-harm content and safeguard children from harmful materials promoting such content.
“Moreover, the substances involved are strictly regulated and reportable under the Poisons Act. Retailers must alert authorities if they suspect intent to misuse them for harm. We will persist in our investigation of hazardous substances to ensure appropriate safeguards are in place.”
A spokesperson for Ofcom remarked: “Following our enforcement initiatives, online suicide forums have implemented geo-blocking to restrict access from users with UK IP addresses.
“Services opting to block access for UK users must not promote or support methods to bypass these restrictions. This forum remains under Ofcom’s scrutiny, and our investigation will continue to ensure the block is enforced.”
Members of Parliament have cautioned that if online misinformation is not effectively tackled, it is “just a matter of time” before viral content sparks a repeat of the violence seen in the summer of 2024.
Chi Onwurah, chair of the Commons science and technology select committee, expressed concern that ministers seem complacent regarding the threat, placing public safety in jeopardy.
The committee voiced its disappointment with the government’s reaction to a recent report indicating that the business models of social media companies are contributing to unrest following the Southport murders.
In response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms and said it would refrain from direct intervention in the online advertising sector, which MPs argued fostered the creation of harmful content after the attack.
Onwurah noted that while the government concurs with most conclusions, it fell short of endorsing specific action recommendations.
Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”
In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.
In its response to the committee, published on Friday, the government said no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content, and arguing that new legislation would hinder the act’s implementation.
However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.
The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.
In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.
Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government stated that Ofcom is “best positioned” to determine whether an investigation should be conducted.
In correspondence with the committee, Ofcom indicated that it has begun research into recommender algorithms but acknowledged the need for further work across a broader range of academic and research fields.
The government also dismissed the committee’s call for an annual report to Parliament on the state of online misinformation, arguing that it could hinder efforts to curtail the spread of harmful online information.
The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.
Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.
“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.
“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”
As an FBI Radio movie host, our mission in life and love is to liberate cinema from excessive privilege. Through our experiences as filmmakers and critics, we’ve navigated the complexities of “industry standards” to uncover the true essence of film, repeatedly synthesizing our insights. Hence, I ventured, Tomb Raider-style, into a treasure trove of orange hard drives in anticipation of the film festival at the Sydney Opera House.
Much like Tame Impala, most of these videos are solo acts concealed behind a collaborative façade. Typically, you’ll find a woman on the brink of a breakthrough, harnessing tools like the iPhone. Feed a man a fish, and he eats for a day; hand two filmmakers a list, and you’ll provide them work for life.
1. Puppy
Puppycodes is a largely inactive Instagram archive, yet Alice Barker emerges as one of the most distinctive figures of the era. She used code to unearth the most unusual videos, all hovering on the boundary between danger and adorability. Like other comedic geniuses, she embodies profound empathy: Alice founded support.fm, a non-profit bail fund supporting trans and gender-nonconforming people in detention. Each video strikes its own chord, deserving its own soft, deep significance. Her presence feels like an innovative film on the grid. Like the charm of a Bear Emmy, puppies encapsulate both drama and comedy.
2. Caitupdate: New Lip Palette!
Does MUA denote a makeup artist, or merely the sound of a kiss? In this instance, Macy Rodman emulates transgender icon Caitlyn Jenner while trying out a new lip palette. Caitupdate is one of the countless unexpected ways Macy has astounded us. Her musical ventures are exceptional; her podcast Nymphoires humorously redefines entertainment. An incredible performer, her live acts are unparalleled.
Macy generously contributed her music to our inaugural feature film, Grape Steak, which screened modestly at the Spectacle Theatre. This might come across as boastful, yet while we have this platform with the Guardian, we’re honored to host events in Greenwich Village, celebrating the premiere of Season 2 and engaging with content of that magnitude. #mua #ithoughtatwasakissssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss
3.
Who says makeup artists cannot transform their cinematic experiences into magic? The lush Massacr, a legal strategist turned full-time drug evaluator, navigates the lens while dissecting local shopping centers and moral quandaries across the USA. Each of her videos constitutes a generous archive, typically filmed in landscape mode; they blend humor, advocacy, and sales advice amid glimpses of privacy infringement. She scouts for public arenas that evoke amicable atmospheres. We take her seriously because she is truly a filmmaker worth exploring in the depths of your device. Her sound design is unrivaled. “Do not do it, little girl” has become a guiding principle, “brick” serves almost as a warning, and “hiiiii!” should undoubtedly win the Pantone Color of the Year™ award.
4. GALPALZ Episode 2: Another Simple Day with Zero Consequences
The Ofcom regulators, equipped with clipboards, navigated the exhibition space at the International Adult Industry Conference in Prague over the weekend, aiming to persuade 1,700 attendees to comply with the UK’s newly implemented online safety regulations.
“Be truthful,” a regulator addressed a crowd of porn site operators and staff during a midday seminar discussing the age verification requirements that were set in motion in July as part of the legislative framework for accessing adult content. “Be transparent. If your efforts fall short, include them in the risk assessment.”
Attendees enjoying complimentary champagne offered by conference sponsors posed some uneasy inquiries. What steps should a company take if it lacks the funds for age verification? How hefty are the penalties? Can a site circumvent regulations by blocking traffic from the UK? Were Ofcom officials aware that some site owners might be trying to undermine their competitors by reporting them for non-compliance?
Presentation by Ofcom at the Wyndham Diplomat Hotel in Prague, Czech Republic. Photograph: Björn Steinz/Panos Pictures
“We are here to assist you,” another Ofcom regulator explained to an audience of around 50 men and seven women. “It’s a challenge. There’s a wealth of information to absorb, but we exist to assist members of the adult industry in achieving compliance.”
Seven weeks after the online safety law took effect, Ofcom officials are keen to portray the adult industry’s response in a positive light. They noted that most of the top 10 and top 100 adult websites have either implemented age verification checks or restricted access within the UK. Platforms like X and Reddit, which host pornographic content, have also introduced age verification. Visits to the top five age verification sites surged to 7.5 million in August, up from 1 million in June.
Regulators intend to frame the introduction of age verification on July 27 as a pivotal moment for the industry, dubbing it “AV Day”: the day children’s access to porn in Britain would be unequivocally obstructed. The situation, however, is more nuanced.
Ofcom screen at the Prague conference. Photograph: Björn Steinz/Panos Pictures
In the days following the law’s enactment, there was a notable spike in VPN downloads, enabling users to disguise their locations and bypass age verification prompts.
“This development was quite unfortunate,” commented Mike Stabile, director of public policy for the Free Speech Coalition, which represents the American adult entertainment industry. He said users had simply moved to non-compliant sites. “VPN usage has surged. People are not compliant. Traffic is redirecting to piracy sites. I don’t think Ofcom will regard this outcome as what they intended.”
Corey Silverstein, an American attorney who represents several companies in the adult industry and has seen numerous failed attempts to enforce age verification laws in the U.S., noted significant skepticism towards the regulators. “While people maintain professionalism and politeness, this is not the most agreeable audience. Some display overt disdain. You can sense the discomfort in participating in an event like this.”
Despite this, he delivered a presentation for site owners, advising them to confront their aversion to regulators and collaborate with Ofcom to implement new guidelines.
“Their intent is not to harm your business. They are quite friendly. They aren’t out to eliminate you,” he stated. “As I understand it, they do not even impose financial fines. Their goal is to guide you towards compliance.”
Ofcom officials were dressed in neatly pressed white shirts, working amid the ambient sounds of steel drums, distributing A4 printed questionnaires while sponsors served cocktails and a troupe of feather-clad dancers entertained attendees.
The paper form, which allowed for anonymous responses, asked representatives to indicate whether they had adopted age verification in the UK and, if not, to give their reasons. By Saturday evening, Ofcom officials noted that few representatives had completed the form but remained hopeful for better participation on Sunday.
Though no fines have yet been issued under the Online Safety Act, Ofcom has initiated 12 investigations into over 60 porn services, including websites and applications.
Updates from these investigations have fueled discontent among adult site proprietors, who are also contending with tighter regulation in the U.S. and France. Yet there was some begrudging acknowledgment of Ofcom’s effort to engage with the industry through events and dialogue.
“In the U.S., regulatory bodies often shy away from engaging with us,” remarked Alex Kekeshi, vice president of brand and community at Pornhub. “I appreciate Ofcom’s invitation to the table. Such engagement is often overlooked in discussions of industry regulation.”
Before July 27, Ofcom established a specialized porn portfolio team of six compliance officers to encourage businesses to meet regulatory standards. Team members, who requested anonymity, have taken part in similar discussions in Berlin, Amsterdam, and LA. A larger team of over 40 staff is focused on investigating organizations that fail to comply.
“We are acutely aware of the industry’s scale and the ease of establishing services for distributing pornographic content indiscriminately,” one regulator remarked. “We are not claiming to lead every service towards compliance; our strategy is to allocate resources where children face the highest risk of harm.” When penalties are applied, they are designed with a deterrent effect, potentially reaching up to £18 million or 10% of global revenue.
“Companies can opt not to risk being pursued by us or facing penalties. We aim to shift the incentive balance so that compliance is deemed less risky.”
Another Ofcom representative avoided commenting on the increase in VPN downloads, asserting that the law’s purpose is to prevent children from inadvertently encountering pornographic content (rather than going after those who deliberately seek it).
Alongside complying with the new age verification requirements, site owners are urging Ofcom to turn its attention to AI-generated content and are seeking to stop companies like Visa and Mastercard from processing payments linked to violent and illegal content. Sites and applications featuring AI-generated pornography also fall within the scope of the legislation.
“How can we distinguish between a 15-year-old AI model and one that represents an 18- or 19-year-old within compliance frameworks?” one attendee questioned, expressing concerns about the potential for AI to inadvertently generate child sexual abuse material.
Steve Jones, who operates an AI porn site, stressed that AI systems need to be programmed to acknowledge what is deemed inappropriate. “We must ensure that depictions are not too youthful or flat. We will disallow pigtails, braces, and toys typically associated with children,” he stated. “AI lacks the ability to differentiate between youthful-looking adults and minors. It’s crucial to teach these distinctions.”
On the day 22-year-old Tyler Robinson shot and killed right-wing activist Charlie Kirk, prosecutors claim, he texted his roommate to confess to the act. While admitting to the murder and seemingly indicating he intended to retrieve his firearm, he shifted the conversation to his motivation for inscribing messages on the ammunition.
“Remember how I was carving the bullets. The messages are almost a big meme,” Robinson texted.
Robinson’s shooting of Kirk underscores the intersection of political violence and a growing nihilistic online environment that fosters misinformation and extremism. This convergence raises significant questions about how internet culture shapes both the nature and the understanding of extremist actions.
Robinson was heavily engaged with online platforms and an avid gamer. A friend described him as “terminally online,” noting his activity on Discord, a messaging service popular among the gaming community.
The bullets he allegedly fired bore niche internet references and phrases, such as “What is this?”, alluding to a sexual meme from online furry communities; “If you read this, you’re gay LMAO”; and “Hey fascist! Catch!”, a reference to the video game Helldivers 2.
In conversations with his roommate, with whom he had a romantic relationship, Robinson appeared to contemplate how his ironic messages would be interpreted.
In the same exchange, he joked about how absurd the meme references would look if they were read out on Fox News.
Robinson is not only a product of online culture; he also fits a contemporary pattern in which attackers feel compelled to leave a message behind. Such parting statements, whether full manifestos or single sentences posted online, have become notably more common in recent years.
The manifesto left by the neo-Nazi who murdered 51 people in Christchurch, New Zealand, in 2019 mixed extreme white nationalist ideology with “shitposting”-style ironic references to video games and podcasters. The shooter who opened fire in a supermarket in El Paso, Texas, the same year announced his attack on the 8chan message board, where users created memes encouraging others to “achieve high scores” with body counts.
The mass shooting in a predominantly Black neighborhood of Buffalo, New York, in 2022 and the Poway synagogue shooting in California in 2019 both echoed the language of fringe online forums. A 2022 investigation by multiple newsrooms surfaced thousands of messages from international neo-Nazi networks, showing exchanges filled with memes and gaming slang as members plotted violence.
Moreover, attackers frequently reference one another in far-right circles, celebrating predecessors as “saints” on memorial days or mimicking elements of previous attacks. As others have noted, Robinson’s inscriptions on bullet casings closely resemble the meme messages left on ammunition and firearms by the young shooter in the Minneapolis Catholic school attack, and echo the messages UnitedHealthcare CEO shooter Luigi Mangione left on his bullets, which later surfaced on a popular alt-fashion brand’s shirt.
Robinson’s messages do not provide a clear motive for Kirk’s murder. Prosecutors claim Robinson indicated he shot Kirk because he believed conservative activists were perpetuating hatred. His mother reportedly stated that her son had “become more political, more inclined towards the left, and supportive of gay and trans rights.”
However, the path of Robinson’s radicalization remains unclear. There is a vast gulf between opposing Kirk’s ideology and enacting targeted violence. Experts increasingly contend that the motivations behind such acts, especially among young people, are shaped more by the fragmented and chaotic online landscape of modern politics than by traditional political categories. Flattening these individuals into simplified narratives can obscure the factors driving them towards extremist violence.
Radicalization of being online
Rather than striving to decipher the exact meanings behind the sarcastic trolling messages left by attackers, researchers studying extremism argue that understanding how online media contributes to widespread radicalization is more valuable. In fact, many suggest that the current era of political violence is markedly different from past occurrences due to the influential role of social media and online communities in radicalizing and isolating users.
While technological factors represent only part of the rise in political violence—alongside mental health concerns, political polarization, and easy access to firearms in the U.S.—extremist researchers increasingly focus on how social media platforms and online environments evolve to foster radicalization.
In a 2023 paper for George Washington University’s Program on Extremism, Jacob Ware described the emergence of what he termed the “third generation of online radicals” in the late 2010s. Its hallmarks include a memetic culture that facilitates radicalization and normalizes attacks, and a shift away from ideology and group affiliation towards individual acts of violence. Ware argues that online culture surrounding violence and extremism blurs the conventional boundaries of terrorism, spurring content designed to showcase acts of violence.
“Global grievances are expressed with great intensity in localized contexts, yet the primary audience often remains online,” writes Ware.
The expansion of social media and the erosion of traditional gatekeeping have muddled strategies to combat escalating online radicalization. Responsibility for hosting violent and extremist content has become a contentious issue. What was once a standard policy among media organizations and platforms, declining to disseminate a perpetrator’s manifesto, has frayed as social platforms fill with amateur detectives amplifying attackers’ digital footprints in the race for discoveries. And as messages and memes from attackers spread more easily, riffs on the violence generate still more posts, transforming atrocities into consumable content. This is a particularly grim facet of an industry that has thrived by algorithmically promoting politically divisive and extremist material.
Consequently, online culture has become intertwined with extremism and political violence, and the lines blur further as formerly extremist internet culture permeates everyday online life. Sarcastic humor wrapped around violence and hate is not new; a 1944 essay already described how purveyors of hate amused themselves with the absurdity of their own rhetoric. But it has now become a prominent feature of online interaction. Ideologies and memes once confined to obscure message boards and extremist sites now serve as the common language of the internet, disseminated across mainstream social media platforms.
Kirk is also a product of this online milieu, widely recognized for his confrontational, debate-style clips that have gone viral, stirring reactions from various political audiences.
The footage of Kirk’s murder has since propagated through the same online ecosystem that once made him omnipresent, autoplaying on X without warnings for viewers. The aftermath of his death has been fed into the same content machine, with video essays analyzing the murder and AI-generated tributes portraying his legacy online. One aspiring influencer who attended the event where Kirk was fatally shot tried to exploit the chaos for content, posting videos promoting his social media channels amid the turmoil.
“Make sure to subscribe!” the TikToker, who later deleted the video, exclaimed while flashing peace signs as attendees screamed and fled.
The chief executive of the Financial Times suggested at a media conference this summer that competing publishers might explore a “NATO”-style alliance to bolster negotiations with artificial intelligence firms.
John Slade’s announcement of a “pretty sudden, sustained” drop in traffic from readers arriving via search engines soon highlighted the grave threat posed by the AI revolution.
Queries submitted to search engines like Google, which dominates over 90% of the market, have been central to online journalism since its inception, with news outlets optimizing headlines and content to secure high rankings and lucrative clicks.
Currently, Google’s AI summary appears at the top of the results page, presenting answers directly and reducing the need for users to click through to the original content. The introduction of the AI mode tab, which responds to queries in a chatbot format, has sparked fears of a future dominated by “Google Zero,” where referral traffic dwindles.
“This is the most significant change in search I’ve witnessed in decades,” states a senior editorial tech executive. “Google has historically been a reliable partner for publishers. Now, certain aspects of digital publishing are evolving in ways that could fundamentally alter the landscape.”
Last week, the owner of the Daily Mail told the competition regulator’s market review of Google’s search services that AI summaries had sharply cut click-through traffic to its sites.
DMG Media and other major news organizations, including Guardian Media Group and the magazine trade body the PPA, have urged the competition watchdog to require more transparency from Google over AI summaries and the traffic data provided to publishers, as part of its investigation into the tech company’s search monopoly.
Publishers are already under financial strain from rising costs, declining advertising revenue, shrinking print circulation, and changing reader habits. Google, they say, insists that they accept how their content is used in AI systems or face disappearing from search results altogether.
Besides the funding threat, concerns about AI’s impact on accuracy persist. Early iterations of Google’s AI summaries notoriously advised users to eat harmful things, and although Google has since improved them, the issue of “hallucinations” (where AI presents inaccurate or fabricated information as truth) remains, alongside the biases inherent when machines, not humans, interpret sources.
Google Discover has supplanted search as the primary source of traffic clicks for some publishers. Photograph: Samuel Gibbs/The Guardian
In January, Apple pledged to improve an AI feature on its latest iPhones that summarized BBC News alerts under the broadcaster’s logo, after summaries misleadingly stated that a man accused of murdering a US insurance executive had taken his own life and falsely claimed that tennis star Rafael Nadal had come out as gay.
Last month, in a blog post, Liz Reid, Google’s head of search, claimed that AI in search was “driving more queries and quality clicks.”
“This data contradicts third-party reports that inaccurately suggest a drastic reduction in overall traffic,” she stated. “[These reports] are often based on flawed methodologies, isolated instances, or traffic alterations that occurred prior to the deployment of AI functionalities during searches.”
She also said that overall traffic to websites remains “relatively stable,” though shifting user behavior means clicks are being redistributed to different sites.
Recently, Google Discover, which delivers articles and videos tailored to user behavior, has taken precedence over search as the main source of traffic.
However, David Buttle, founder of the consultancy DJB Strategy, said such services do not supply the quality traffic most publishers need to sustain their long-term strategies.
“Google Discover holds no product significance for Google,” he explained. “As traffic from general search diminishes, Google can dial Discover up for publishers, who have little choice but to take what they can get, even though it often rewards clickbaity content.”
Simultaneously, publishers are engaged in a broader struggle against AI companies looking to exploit content to train extensive language models.
The creative sector is lobbying the government hard for legislation to prevent AI firms from using copyrighted material without authorization.
The Make It Fair campaign in February highlighted the threats generative AI poses to the creative sector. Photograph: Geoffrey Swaine/Rex
Some publishers have reached bilateral licensing agreements with AI companies, including the Financial Times, German media group Axel Springer, the Guardian, and Nordic publisher Schibsted. Others, like the BBC, have taken action against AI companies over alleged copyright infringement.
“It’s a double-edged attack on publishers, almost a pincer movement,” remarks Chris Duncan, a former senior executive at News UK and Bauer Media who now leads the consultancy Seadelta. “Content is vanishing into AI products without appropriate compensation, while AI summaries embedded in products negate the need for clicks, draining revenue from both ends. It’s an existential crisis.”
Publishers are pursuing various courses of action, from negotiation and litigation to regulatory lobbying, while also integrating AI tools into their newsrooms, as seen with the Washington Post and the Financial Times launching AI-powered chatbots, including tools for answering readers’ climate questions.
Christoph Zimmer, chief product officer at Germany’s Der Spiegel, notes that while current traffic remains steady, he anticipates a decline in referrals from all platforms.
“This is part of a longstanding trend,” he states. “However, it has affected brands that haven’t prioritized direct audience relationships or subscription growth in recent years, instead depending on broad content reach.”
“What has always been true remains valid. Prioritizing quality and diverse content is essential; it’s about connecting with people, not merely chasing algorithms.”
Publishing industry leaders say the focus is shifting rapidly: early deals centred on licensing archives so AI models could train, aggregate, and summarize news, while the next wave concerns models drawing on live news updates.
“The initial focus was on licensing arrangements for AI training to ‘speak English,’ but that will become less relevant over time,” asserts an executive. “We’re transitioning towards providing news directly. To achieve this, we require precise, live sources — a potentially lucrative market publishers are keen to explore next.”
PPA CEO Saj Merali says a fair balance must be struck between technology-driven changes in consumers’ digital behavior and just compensation for trustworthy news.
“What remains at the core is something consumers require,” she explains. “AI needs credible content. There’s a shift in how consumers prefer to access information, but they must have confidence in what they read.”
“The industry has historically shown resilience through significant digital and technological transitions, yet it is crucial to ensure pathways that sustain business models. At this point, the AI and tech sectors have shown no commitment to support publishers’ revenue.”
An enigmatic three-person game development team based in Adelaide has stirred up a storm on the global gaming scene.
On Friday, major platforms like Steam, Nintendo’s eShop, PlayStation Store, and Microsoft Store all experienced crashes as they struggled to keep up with the demand for Hollow Knight: Silksong, the eagerly awaited sequel to the acclaimed 2017 indie sensation, Hollow Knight.
The game’s launch resulted in widespread outages, with thousands of players reporting difficulties in purchasing the game during the initial hours of its release. Many faced persistent error messages for almost three hours post-launch, preventing them from completing their transactions.
The spike in demand was evident on the outage-tracking site Downdetector, which recorded a surge to 3,750 user reports immediately after the game’s launch, slowly diminishing thereafter.
Social media erupted with complaints about error codes and shared screenshots as frustrated gamers expressed their disappointment over the absence of pre-order options. Some labeled the situation as “absurd,” while others criticized the lack of measures to prevent such congestion.
Another digital retailer, Humble Bundle, indicated that the game was momentarily unavailable due to high traffic, although this notification was later removed once the situation stabilized.
Despite these technical challenges, Steam recorded over 100,000 active players within just 30 minutes of launch, suggesting that many managed to secure their copies.
Hollow Knight was crafted by Ari Gibson, William Pellen, and Jack Vine of the Adelaide-based indie studio Team Cherry, with music by Christopher Larkin. Set in a vividly imagined realm of insect warriors, the game has garnered a passionate following since its debut in 2017, selling over 15 million copies globally.
Hollow Knight: Silksong screenshot. Illustration: Team Cherry
The New York Times recently hailed the original Hollow Knight as a “Modern Metroidvania Classic,” praising its “engaging and detailed hand-drawn animations, challenging boss encounters, and twists with secret pathways.”
The original has achieved cult status, largely through word-of-mouth recommendations, and anticipation has steadily grown for a sequel focusing on Hornet, the sword-wielding princess who served as a supporting character in the first game, as a recent New York Times report on the game’s development noted.
The game’s development was financed independently, though a South Australian film company celebrated Team Cherry’s global success on Friday, stating: “This small team of developers in Adelaide showcases world-leading talent and the creative excellence that emerges from South Australia.”
In a recent Bloomberg interview, Gibson said the seven-year development timeline of Silksong simply reflected the scale of the project the team had chosen.
“We’re a small team, and it takes us considerable time to create the game,” he explained. “There wasn’t any significant controversy surrounding it.”
In a previous discussion with ABC, Pellen attributed the original Hollow Knight’s lasting appeal to its blend of classic inspiration and modern aesthetics.
“What was gratifying about Hollow Knight was that we crafted something according to our tastes, leading to a slightly unique product,” Pellen stated in the ABC interview. “We hope Silksong can achieve something similar.”
In a few months, Australian teenagers may face restrictions on social media access until they turn 16.
As the December implementation date approaches, parents and children are left uncertain about how this ban will be enforced and how online platforms will verify users’ ages.
Experts anticipate a troubled rollout, particularly since the technology social media companies use to determine users’ ages tends to be significantly inaccurate.
From December 10, social media giants including Instagram, Facebook, X, Reddit, YouTube, Snapchat, and TikTok must remove or deactivate accounts of Australian users under 16. Failure to comply could bring fines of up to A$49.5 million (around US$32 million), while parents will face no penalties.
Before announcing the ban, the Australian government commissioned a trial of age verification technology, which released preliminary findings in June, with a comprehensive report expected soon. The trial tested age verification tools on over 1,100 students across the country, including Indigenous and ethnically diverse groups.
Andrew Hammond from KJR, the consulting firm based in Canberra that led the trial, shared an anecdote illustrating the challenge at hand. One 16-year-old boy’s age was inaccurately guessed to be between 19 and 37.
“He scrunched up his face and held his breath, turning red and puffy like an angry older man,” he said. “He didn’t do anything wrong; we wanted to see how our youth would navigate these systems.”
Other technologies have also been evaluated with Australian youth, such as hand gesture analysis. “You can estimate someone’s age broadly based on their hand appearance,” Hammond explains. “While some children felt uneasy using facial recognition, they were more comfortable with hand assessments.”
The interim report indicated that age verification could be safe and technically viable; the preliminary findings noted that, while challenges exist, 85% of subjects’ ages could be estimated within an 18-month range. If a person initially verified as over 16 is later suspected to be under that age, they must undergo more rigorous verification, including checks against government-issued ID or parental confirmation.
Hammond noted that some underage users can still be detected through social media algorithms. “If you’re 16 but engage heavily with 11-year-old party content, it raises flags that the social media platform should consider, prompting further ID checks.”
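In engineering terms, what Hammond describes is a simple escalation rule: combine the platform’s age estimate with behavioural signals and, where they contradict the verified age, fall back to a stricter check. Here is a minimal sketch of that logic in Python; every field name and threshold is hypothetical, not any platform’s actual policy:

```python
# Hypothetical escalation rule of the kind Hammond describes: when a
# user's verified age and their behavioural signals disagree, fall back
# to a stricter check (government ID or parental confirmation).
# All field names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class UserSignals:
    verified_over_16: bool           # outcome of the initial age check
    estimated_age: float             # output of an age-estimation model
    peer_content_median_age: float   # typical age of the content/accounts engaged with

def needs_stricter_check(u: UserSignals, gap_years: float = 3.0) -> bool:
    """Flag accounts whose behaviour suggests a much younger user than verified."""
    if not u.verified_over_16:
        return False  # already restricted; nothing to escalate
    threshold = 16 - gap_years
    # Engagement skewing far below the verified age is the red flag.
    return u.peer_content_median_age < threshold or u.estimated_age < threshold

# Example: a "verified 16-year-old" who mostly engages with 11-year-olds' content.
user = UserSignals(verified_over_16=True, estimated_age=15.5,
                   peer_content_median_age=11.0)
print(needs_stricter_check(user))  # True -> prompt an ID or parental check
```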
Iain Corby of the London-based Age Verification Providers Association, which supported the Australian trial, pointed out that no single solution to age verification exists.
The UK recently mandated age verification on sites hosting “harmful content,” including adult material. Since the regulations went into effect on July 25th, around 5 million users have been verifying their ages daily, according to Corby.
“In the UK, the requirement is for effective but not foolproof age verification,” Corby stated. “There’s a perception that technology will never be perfect, and achieving higher accuracy often requires more cumbersome processes for adults.”
Critics have raised concerns about a significant loophole: children in Australia could use virtual private networks (VPNs) to bypass the ban by simulating locations in other nations.
Corby emphasized that social media platforms should monitor traffic from VPNs and assess user behavior to identify potential Australian minors. “There are many indicators that someone might not be in Thailand but rather in Perth,” he remarked.
Apart from how age verification will function, is this ban on social media the right approach to safeguarding teenagers from online threats? The Australian government asserted that significant measures have been implemented to protect children under 16 from the dangers associated with social media, such as exposure to inappropriate content and excessive screen time. The government believes that delaying social media access provides children with the opportunity to learn about these risks.
Various organizations and advocates aren’t fully convinced. “Social media has beneficial aspects, including educational opportunities and staying connected with friends. It’s crucial to enhance platform safety rather than impose bans that may discourage youth voices,” stated UNICEF Australia on its website.
Susan McLean, a leading Australian cyber safety expert, argues that the government should concentrate on harmful content and the algorithms that push such material to children, expressing concern that AI and gaming platforms are exempt from the ban.
“What troubles me is the emphasis on social media platforms, particularly those driven by algorithms,” she noted. “What about young people encountering harmful content on gaming platforms? Have they been overlooked in this policy?”
Lisa Given from RMIT University in Melbourne explained that the ban fails to tackle issues like online harassment and access to inappropriate content. “Parents may have a false sense of security thinking this ban fully protects their children,” she cautioned.
The rapid evolution of technology means that new platforms and tools can pose risks unless the underlying issues surrounding harmful content are addressed, she argued. “Are we caught in a cycle where new technologies arise and prompt another ban or legal adjustment?” Additionally, there are concerns that young users may be cut off from beneficial online communities and vital information.
The impact of the ban will be closely scrutinized post-implementation, with the government planning to evaluate its effects in two years. Results will be monitored by other nations interested in how these policies influence youth mental health.
“Australia is presenting the world with a unique opportunity for a controlled experiment,” stated Corby. “This is a genuine scientific inquiry that is rare to find.”
TikTok is putting the jobs of hundreds of UK content moderators at risk, despite stricter regulations aimed at curbing the spread of harmful material online.
The popular video-sharing platform announced that hundreds of positions within its trust and safety teams could be impacted in the UK, as well as South and Southeast Asia, as part of a global reorganization effort.
Their responsibilities will shift to other European locations and third-party contractors, though some trust and safety roles will remain in the UK, the company clarified.
This move aligns with TikTok’s broader strategy to utilize artificial intelligence for content moderation. The company stated that over 85% of materials removed for violating community guidelines have been identified and deleted through automation.
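TikTok has not published how that automation works, but the common industry pattern is a classifier score with two thresholds: high-confidence violations are removed automatically, and borderline cases are queued for human review. A hedged Python sketch of that general pattern, with scores and thresholds invented for illustration:

```python
# Illustrative two-threshold moderation pipeline: automation removes
# high-confidence violations outright and queues borderline content for
# human review. The thresholds and scores are invented; TikTok has not
# published the details of its system.
AUTO_REMOVE = 0.95    # model score above which content is removed automatically
HUMAN_REVIEW = 0.60   # score above which a human moderator takes a look

def route(content_id: str, violation_score: float) -> str:
    """Decide what happens to a piece of content given a classifier score."""
    if violation_score >= AUTO_REMOVE:
        return f"{content_id}: removed automatically"
    if violation_score >= HUMAN_REVIEW:
        return f"{content_id}: queued for human review"
    return f"{content_id}: left up"

for cid, score in [("vid-1", 0.99), ("vid-2", 0.72), ("vid-3", 0.10)]:
    print(route(cid, score))
```

Under a design like this, the share of removals handled automatically, the figure TikTok puts above 85%, depends largely on where the upper threshold is set; lowering it shifts work away from human moderators, which is precisely the risk the union describes below.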
The cuts come even as new UK online safety laws, now in effect, require companies to protect users and carry out age verification checks for access to potentially harmful content. Organizations risk fines of up to £18 million or 10% of global revenue for non-compliance.
John Chadfield from the Communication Workers Union expressed concerns that replacing human moderators with AI could endanger the safety of millions of TikTok users.
“TikTok employees have consistently highlighted the real-world implications of minimizing human moderation teams in favor of hastily developed AI solutions,” he remarked.
TikTok, which is owned by the Chinese tech firm ByteDance, has a workforce of over 2,500 in the UK.
In the past year, TikTok has decreased its trust and safety personnel globally, often substituting automated systems for human workers. In September, the company laid off an entire team of 300 content moderators in the Netherlands, and in October, it disclosed plans to replace approximately 500 content moderation staff in Malaysia as part of its shift towards AI.
Recently, TikTok employees in Germany conducted a strike against the layoffs in its trust and safety team.
Meanwhile, TikTok’s business is thriving. Accounts filed with Companies House reveal that combined operations in the UK and Europe reached $6.3 billion (£4.7 billion) in 2024, representing a 38% increase from the year before. The operating loss decreased from $1.4 billion in 2023 to $485 million.
A TikTok spokesperson said the company is “continuing the reorganization initiated last year to enhance its global operational model for trust and safety,” concentrating the function in fewer locations globally so it can evolve with maximum effectiveness and speed as technology advances.
Research among children in England has revealed a rise in exposure to pornography despite UK regulations intended to safeguard them online, with children as young as six encountering it inadvertently.
Dame Rachel de Souza, the children’s commissioner for England, reported that the findings indicated an uptick in the number of young people encountering pornographic content before turning 18, even after the Online Safety Act came into effect.
Over a quarter (27%) admitted to having viewed porn online by the age of 11.
These results build on a similar survey carried out by the Children’s Commissioner in 2023, highlighting minimal progress despite newly instituted laws and commitments from government officials and tech companies.
She stated: “Violent pornography is readily accessible to children, often encountered accidentally via popular social media platforms, and has a profound impact on their behaviors and views.
“This report should signal a clear turning point. The fresh protections introduced in July by Ofcom, part of the Online Safety Act, present a genuine opportunity to prioritize child safety unequivocally in the online space.”
The findings stem from a nationally representative survey conducted in May with 1,010 children and young people aged 16-21, just before the implementation of Ofcom’s child safety codes in July.
The regulations set forth by Ofcom have brought significant changes designed to restrict under-18s’ access to pornographic websites. The survey used the same methodology and questions as the 2023 survey, ensuring the results are comparable:
A higher percentage of young people reported seeing porn before age 18 (70%) in 2025 compared to 2023 (64%).
More than a quarter (27%) acknowledged viewing porn online at age 11, with the average age of first exposure remaining at 13.
Vulnerable children, including those receiving free school lunches, children in social care, and those with special educational needs or disabilities, reported higher rates of exposure to online porn by age 11 compared to their peers.
Nearly half of the respondents (44%) agreed with the statement: “Girls might say no at first, but then they could be persuaded to have sex.” Further analysis showed that 54% of girls and 41% of boys who had viewed porn online resonated with this sentiment, in contrast to 46% of girls and 30% of boys who hadn’t.
Respondents were more likely to have encountered porn online accidentally (59%) than to have actively sought it out (35%); the rate of accidental exposure rose by 21 percentage points compared with 2023 (59% vs. 38%).
Social networking and media platforms accounted for 80% of children’s main sources of porn, with X (formerly Twitter) the most common portal, surpassing dedicated porn sites.
The disparity between the number of children viewing porn on X versus dedicated porn sites has widened (45% vs. 35% in 2025 compared to 41% vs. 37% in 2023).
Most respondents reported seeing depictions of acts that are illegal under existing pornography legislation or could be deemed illegal under the forthcoming crime and policing bill.
Over half (58%) encountered pornographic content that depicted strangulation, with 44% observing sexual activity while individuals were asleep, and 36% witnessing instances where consent was not given or had been ignored.
Further scrutiny revealed that only a minority of children said they wanted to see violent or extreme content, suggesting it is being pushed to them unsolicited.
The report highlights concerns that, even under current regulations, children may circumvent restrictions by utilizing virtual private networks (VPNs), which remain legal in the UK.
The report advocates for online porn to be held to the same standards as offline porn, prohibiting depictions of non-fatal violence. It also calls for the Department for Education to equip schools to deliver the new relationships, health and sex education curriculum effectively.
It was recently announced that traffic to the UK’s leading porn sites has fallen sharply following the strengthening of age verification measures. According to data analytics firm Similarweb, the popular adult site Pornhub lost over 1 million visitors within just two weeks.
Pornhub and other major adult platforms introduced enhanced age verification checks on July 25, in line with online safety laws intended to make it harder for under-18s to access explicit material.
Similarweb compared porn sites’ average daily user numbers from August 1 to 9 against the July average, revealing that Pornhub, the UK’s most-visited adult site, saw domestic traffic fall 47% from its level on July 24, the day before the new regulations came into effect.
A government spokesperson remarked, “Children are growing up immersed in a digital landscape bombarded with pornography and harmful content, which can have damaging effects on their lives. Online safety laws are addressing this issue.”
“To be clear: VPNs are legitimate tools for adults, and there are no intentions to ban them. However, platforms promoting loopholes like VPNs to children could face stringent enforcement and hefty fines. We mustn’t prioritize business interests over child safety.”
Betsy Lerner may not view herself as a TikTok star, but the New York Times has labeled her one, even calling her an influencer. To her, an influencer is someone who gets paid and sent free goods; all she has received is a free pen. “I genuinely do it for myself,” she states, “and for those who follow me.”
Lerner is 64. She spent over two decades as a literary agent, representing authors like Patti Smith and Temple Grandin, and has written non-fiction as well as the novels Shred Sisters and Love Letter to Loneliness. Her TikTok presence is remarkable, though: 1.5 million followers watch her videos reading from the diaries chronicling her chaotic 20s.
“I don’t know who you love, who loves you, what you do for your job, what your purpose is,” she expresses in one post. “This morning I stumbled upon a line in my journal. In my 20s, I wrote: ‘I feel like I don’t know who I am.'”
Lerner posts in a dressing gown, without makeup. She initially joined BookTok to support her authors, but as her new novel approached release she started putting herself on camera, even though followers were slow to come at first. “A friend advised me to embrace it like my own TV channel… so I thought, ‘I’ll read from an old diary.’”
She has kept journals since the age of 11, inspired by Anne Frank’s “The Diary of a Young Girl.” “I penned my first poem there, trying to understand myself…” Although her journal from ages 12 to 18 was lost when her car was stolen, she has roughly 30 volumes from her 20s safely stored away in her attic.
“My journals are incredibly melancholic. They discuss loneliness, the search for love and friendship, and the quest for identity,” she reflects.
Lerner describes herself as a “slow bloomer.” Accepted into Columbia’s MFA Poetry Program at 26, she entered the publishing world in her late 20s, a time when most editorial assistants were fresh college graduates. “I didn’t experience love until I was 30 and lacked any significant relationships… I lost much of my teens and endured depression through most of my 20s.”
“It’s all about connecting and trying to communicate” … Betsy Lerner photographed in New Haven, Connecticut. Photograph: Nicole Frapie/The Guardian
At 15, her parents took her to a psychiatrist, which led to a diagnosis of bipolar disorder. “I resisted accepting that I had this condition. I fought against it for a long time,” she admits. Her 2003 memoir, Food and Loathing, recounts her relationship with weight, food, and depression, detailing one instance in her late 20s when she found herself standing on a bridge over the Hudson River.
A breakthrough occurred at 30 when she connected with a psychopharmacologist who could prescribe the right dosage of lithium (they have collaborated for 35 years). She also got married.
Her journaling became less frequent. She had written at night, in bed, but “I wasn’t feeling so sad and lonely anymore,” she reflects.
Over the years, Lerner says, “I was instinctively drawn to strength.” Currently, she prioritizes stability above all else.
She had no intention of writing a novel. But in 2019 she suffered the loss of four people: her mother; two teenagers, Ruby and Hart Campbell, who were killed by a drunk driver; and her best friend, the author George Hodgman, who died by suicide. “I still grapple with the idea of grieving everyone, all the time,” she shares.
Following these losses, she began writing Shred Sisters. The novel became a means for her and her two sisters to care for one another while navigating their grief. She has since written another novel and continues to share insights from her diary for as long as inspiration strikes. “It’s all about connecting and communicating,” she affirms.
“There’s a constant flow of comments from young adults in their 20s who resonate with my struggles. That connection motivates me immensely. I feel aligned with these young individuals.”
Shred Sisters is published by Verve Books. To support the Guardian, please order a copy from the Guardian Bookshop. Shipping fees may apply.
Tell me: Did your life take a new turn after turning 60?
Angela Rayner has stated that Nigel Farage has “failed a generation of young women” with his plan to abolish online safety laws, claiming it could lead to an increase in “revenge porn.”
The deputy prime minister’s remarks are the latest in a series of criticisms directed at Farage by the government, as Labour launches a barrage of attack ads targeting the Reform UK leader, including one featuring Farage alongside influencer Andrew Tate.
During a press conference last month, Reform UK leaders announced plans to repeal the laws, which push social media companies to restrict misleading and harmful content, arguing that the act promotes censorship and leaves the UK resembling a “borderline dystopian state.”
In response, the science and technology secretary, Peter Kyle, accused Farage of siding with child abusers like Jimmy Savile, prompting a strong backlash from Reform.
In comments made to the Sunday Telegraph, Rayner underscored the risks associated with abolishing the act, which addresses what is officially known as intimate image abuse.
“We recognize that the abuse of intimate images is an atrocity, fostering a misogynistic culture on social media, which also spills over into real life,” Rayner articulated in the article.
“Nigel Farage poses a threat to a generation of young women with his dangerous and reckless plans to eliminate online safety laws. Abolishing safety measures without any viable alternative to combat the coming flood of abuse reveals a severe neglect of responsibility.
“It’s time for Farage to explain to British women and girls how he intends to ensure their safety online.”
Labour has rolled out a series of interconnected online ads targeting Farage. One launched on Sunday morning, linked directly to Rayner’s remarks, asserts that “Nigel Farage wants to make it easier to share revenge porn online,” accompanied by an image of Farage laughing.
According to the Sunday Times, another ad draws attention to Farage’s comments regarding Tate, an influencer facing serious allegations in the UK, including rape and human trafficking, alongside his brother Tristan.
Both the American-British brothers are currently under investigation in Romania and assert their innocence against numerous allegations.
Labour’s ads depict Farage alongside Andrew Tate with the caption “Nigel Farage calls Andrew Tate an ‘important voice’ for men,” referencing remarks made in an interview on last year’s Strike It Big podcast.
Lila Cunningham, a former magistrate who has joined Reform UK, wrote an article for the Telegraph on Saturday labeling the online safety law a “censorship law” and pointing out that existing legislation already addresses “revenge porn.”
“This law serves as a guise for censorship, providing a pretext to empower unchecked regulators and to silence dissenting views,” Cunningham claimed.
Cunningham also criticized the government’s focus on accommodating asylum seekers in hotels, arguing that it puts women at risk and diverts attention from more pressing concerns.
Since the implementation of stringent age verification measures last month, visits to popular adult websites in the UK have seen a significant decline, according to recent data.
Daily traffic to Pornhub, the most visited porn site in the UK, dropped by 47%, from 3.6 million visits on July 24 to 1.9 million on August 8.
Data from digital market intelligence firm Similarweb indicates that the next most popular platforms, Xvideos and Xhamster, experienced declines of 47% and 39% respectively over the same period.
As reported initially by the Financial Times, this downturn seems to reflect the enforcement of strict age verification rules commencing on July 25 under the Online Safety Act. However, social media platforms implementing similar age checks for age-restricted materials, like X and Reddit, did not experience similar traffic declines.
A representative from Pornhub remarked, “As we have observed in various regions globally, compliant sites often see a decrease in traffic, while non-compliant ones may see an increase.”
The Online Safety Act aims to shield children from harmful online content, mandating that any site or app providing pornographic material must prevent access by minors.
Ofcom, the overseeing body for this law in the UK, endorses age verification methods such as: verifying age via credit card providers, banks, or mobile network operators; matching photo ID with a live selfie; or using a “digital identity wallet” for age verification.
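Each of those routes answers the same yes-or-no question, is this user over 18, from a different evidence source. A minimal sketch of how a site might dispatch between such routes; all function names here are hypothetical stand-ins for calls to third-party verification providers, not real APIs:

```python
# Hypothetical dispatcher over the age-assurance routes Ofcom endorses.
# Each checker is a stub standing in for a call to a third-party
# verification provider; none of these function names are real APIs.
def check_via_bank_or_card(user_token: str) -> bool:
    """Stub: ask a bank, card issuer, or mobile operator to confirm 18+."""
    return True  # placeholder result

def check_via_photo_id_selfie(user_token: str) -> bool:
    """Stub: match an uploaded photo ID against a live selfie."""
    return True  # placeholder result

def check_via_digital_wallet(user_token: str) -> bool:
    """Stub: read a verified age attribute from a digital identity wallet."""
    return True  # placeholder result

METHODS = {
    "bank_or_card": check_via_bank_or_card,
    "photo_id_selfie": check_via_photo_id_selfie,
    "digital_wallet": check_via_digital_wallet,
}

def verify_age(user_token: str, method: str) -> bool:
    """Grant access only if the chosen route confirms the user is over 18."""
    checker = METHODS.get(method)
    if checker is None:
        raise ValueError(f"unsupported verification method: {method}")
    return checker(user_token)

print(verify_age("session-abc", "photo_id_selfie"))  # True under these stubs
```

The design point, under this sketch’s assumptions, is that the site itself never handles the underlying documents; it only consumes a yes-or-no answer from whichever provider the user picks.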
Additionally, the law requires platforms to block access to content that could be harmful to children, including materials that incite self-harm or promote dangerous behaviors, which has sparked tension over concerns of excessive regulation.
Ofcom contends that the law does not infringe upon freedom of expression, highlighting clauses intended to protect free speech. Non-compliance can lead to penalties ranging from formal warnings to fines amounting to 10% of global revenue, with serious violations potentially resulting in websites being blocked in the UK.
Nigel Farage’s Reform UK party has vowed to repeal the act following the introduction of the age verification requirement, igniting a heated exchange in which Farage accused the technology secretary, Peter Kyle, of making inappropriate comments about him.
The implementation of age checks has also driven a surge in virtual private network (VPN) downloads, as users seek to circumvent national restrictions on certain websites. VPN apps frequently dominate the top five spots in Apple’s App Store.
The way couples first connect can influence their relationship quality
A global study involving 50 countries reveals that individuals who meet their partners online report lower relationship satisfaction and less emotional connection compared to those who meet in person initially.
The rise of the internet has transformed relationship dynamics. For instance, while in the mid-20th century, heterosexual couples typically met through mutual friends, by the early 21st century, this trend shifted to online interactions as primary.
To explore how these changes affect relationship quality, Marta Kowal at the University of Wroclaw, Poland, and her team studied 6,646 people in heterosexual relationships across every continent except Antarctica.
Participants were asked whether they started their relationship online and to rate their satisfaction levels. Additionally, they were assessed on emotional intimacy (how well they feel understood by their partner), passion, and commitment (including whether they view their relationship as long-term).
Those who met their partners online scored an average of 4.20 out of 5 on the relationship satisfaction scale, whereas those who met offline scored 4.28—indicating a small but statistically significant difference. Online couples reported lower scores in intimacy, passion, and commitment.
According to Kowal, several factors might contribute to this disparity. Research suggests that partners who meet online often have less in common in terms of educational background and ethnicity compared to those who meet in person. Kowal and her collaborators propose that this might lead to differences in their everyday lives and shared values.
Kowal also points to the issue of “choice overload”: with dating platforms presenting numerous options, individuals may second-guess their choices, which can ultimately diminish satisfaction.
Moreover, she notes that some people tend to misrepresent themselves in online dating profiles. “You might see someone and think, ‘No way is he two meters tall; he’s more like 170 centimeters,'” Kowal explains. This kind of disparity can negatively impact relationship satisfaction.
Luke Brunning from the University of Leeds in the UK finds this research “fascinating” and “valuable” for future studies, particularly in considering how online dating may redefine relationship approaches or if shifting attitudes toward commitment drive these changes.
He further suggests that the overall difference between couples who meet online and offline is “relatively small.”
The operators of Wikipedia have lost a High Court challenge to provisions of the online safety legislation under which the site could be deemed a high-risk platform subject to the most stringent requirements.
The Wikimedia Foundation warns that if Ofcom classifies it as a Category 1 provider later this summer, it could be compelled to limit access to the site in order to meet regulatory standards.
As a nonprofit entity, the organization stated it “faces significant challenges in addressing the substantial technical and staffing demands” required to adhere to its obligations, which include user verification, stringent user protection measures, and regular reporting responsibilities to mitigate the spread of harmful content.
The Wikimedia Foundation estimates that to avoid being categorized as a Category 1 service, the number of UK users accessing Wikipedia would need to decrease by approximately three-quarters.
Wikipedia asserts it is unlike other platforms expected to be classified as Category 1 providers, such as Facebook and Instagram, due to its charitable nature and the fact that users typically interact only with content that interests them.
Mr Justice Johnson dismissed Wikipedia’s challenge for various reasons but emphasized that the site “offers tremendous value for freedom of speech and expression,” noting that the verdict did not give Ofcom or the government a mandate to impose regulations that would severely limit Wikipedia’s operations.
He stated that the classification of Wikipedia as a Category 1 provider “must be justified as proportionate if it does not infringe upon the right to freedom of expression,” but added that it was “premature” to enforce such a classification as Ofcom had not yet determined it to be a Category 1 service.
Should Ofcom deem Wikipedia a Category 1 service, which would jeopardize its current operations, Johnson suggested that technology secretary Peter Kyle “should consider altering the regulations or exempting this category of services from the law,” highlighting that Wikipedia could confront further challenges if this were not addressed.
“While the ruling does not provide the immediate legal protection for Wikipedia that we had sought, it underscores the responsibilities facing Ofcom and the UK government in implementing the Online Safety Act,” said Phil Bradley-Schmieg, lead counsel for the organization.
“The judge recognized the problems that misaligned OSA classifications and obligations could cause for Wikipedia’s ‘significant value, user safety, and the human rights of Wikipedia volunteer contributors.’”
Cecilia Ivimy KC, for the government, stated that the minister had taken Ofcom’s guidance into account, specifically considering whether Wikipedia should be exempt from the regulations, but ultimately decided against it. She said Wikipedia was deemed “in principle an appropriate service necessitating Category 1 obligations,” and that the reasoning behind this decision was “neither unreasonable nor without justification.”
A government representative commented: “We are pleased with today’s High Court ruling. This will assist us in our ongoing efforts to implement online safety laws and foster a safer online environment for all.”
Tech firms like Snapchat and Facebook disclosed over 9,600 instances of adults grooming children online within a mere six months last year, averaging around 400 cases weekly.
Law enforcement agencies, such as the FBI and the UK’s National Crime Agency (NCA), are increasingly alarmed by the rising threats posed by various crimes targeting minors.
In 2023, the U.S.-based National Center for Missing and Exploited Children (NCMEC) documented 546,000 reports concerning children, submitted by tech companies around the world.
Approximately 9,600 of the reports concerned the UK during the first half of 2024. Records indicate that Snapchat reported significantly more of this distressing content to NCMEC than any other platform during that period.
The NSPCC, a child welfare charity, termed the statistic “shocking,” suggesting that it is likely an underrepresentation.
The NCA is launching an “unprecedented” campaign in the UK aimed at informing teachers, parents, and children about the perils of sexual exploitation.
The NCA emphasized: “Sextortion is a cruel crime that can lead to devastating outcomes for victims. Tragically, teenagers in the UK and worldwide have taken their lives as a result.”
NCMEC’s data is crucial as it is derived from reports submitted by online platforms and internet providers—such as Snapchat, Instagram, and TikTok—rather than from victims, who may feel hesitant to disclose their abuse.
Tech companies are required by U.S. law to report suspicious content to NCMEC. The data indicates that Snapchat reported around 20,000 instances of concerning material in the first half of 2023, including sextortion and child sexual abuse material (CSAM).
This number surpasses the combined total of reports submitted by Facebook, Instagram, TikTok, X (formerly Twitter), Google, and Discord. Snapchat revised its policy on reporting such content last year, which is believed to have resulted in lower subsequent figures.
Rani Govender from NSPCC remarked that sextortion and other profit-driven sexual offenses have a profoundly “devastating” impact on young individuals, hindering their ability to seek help and, in some cases, leading to suicide.
NCMEC revealed that they are aware of “more than three dozen” teenage boys globally who have taken their lives after falling victim to sextortion since 2021.
Govender noted that some tech companies risk hiding the abuse occurring on their platforms by implementing protections like end-to-end encryption.
In contrast to certain other platforms, Snapchat does not employ end-to-end encryption for text-based messaging.
Authorities are increasingly worried that predators are utilizing more sophisticated methods to target children online.
The Guardian has uncovered a 101-page manual that provides detailed instructions on how to exploit young internet users, including recommendations for effective mobile phones, encryption, apps, and manipulative tactics.
This document instructs users on how to ensnare victims as “modern slaves” by obtaining explicit images, followed by coercive demands.
The guide is purportedly authored by Baron Martin, a 20-year-old from Arizona, USA, who was arrested by the FBI in December and refers to himself as the “king of terror.” According to the U.S. Department of Justice, Martin was a “catalyst for widespread control.”
Researchers report that the sextortion manual has circulated among numerous “com networks”, online communities that promote sadistic and misogynistic material while encouraging criminal behavior.
Milo Comerford, a researcher at the Institute for Strategic Dialogue (ISD) think tank, said the FBI has identified numerous online gangs collaborating to find vulnerable victims, typically by posing as romantic interests to obtain compromising material.
That material is then used to blackmail victims, often coercing further explicit imagery, self-harm, and other acts of violence and animal cruelty.
Comerford emphasized that “robust multi-agency” measures are urgently needed to raise awareness about the risks of sextortion among young people, parents, guardians, teachers, and others.
He added, “These transnational networks operate within a constantly shifting landscape of victims, groomers, and abusive entities utilizing social media platforms, sometimes leading to mass violence.”
Both Snapchat and Facebook have been approached for comment.
When Heng Min* discovered a concealed camera in her bedroom, she initially hoped for a benign explanation, suspecting her boyfriend had set it up to capture memories of their “happy life” together. That hope quickly turned to fear as she realized he had been secretly taking sexually exploitative photos of her and her female friends, as well as other women in various locations. He had even used AI technology to create pornographic images of them.
When Min confronted him, he begged for forgiveness but became angry when she refused to reconcile, she told the Chinese news outlet Jimu News.
Min is not alone; many women in China have fallen victim to voyeuristic filming in both private and public spaces, including restrooms. Such images are often shared or sold online without consent. Sexually explicit photos, frequently captured via pinhole cameras hidden in everyday objects, are disseminated in large online groups.
This scandal has stirred unrest in China, raising concerns about the government’s capability and willingness to address such misconduct.
A notable group on Telegram, an encrypted messaging app, is the “Maskpark Tree Hole Forum,” which reportedly boasted over 100,000 members, mostly male.
“The Mask Park incident highlights the extreme vulnerability of Chinese women in the digital realm,” stated Li Maizi, a prominent Chinese feminist based in New York, to the Guardian.
“What’s more disturbing is how often the perpetrators are known to their victims: partners and boyfriends committing sexual violence, in some cases against minors.”
The scandal ignited outrage on Chinese social media, stirring discussion about the difficulty of combating online harassment in the country. While Chinese regulators are equipped to impose stricter measures against online sexual harassment and abuse, their current focus appears to prioritize suppressing politically sensitive information, according to Eric Liu, a former content moderator for Chinese social media platforms and now an editor at the US-based China Digital Times.
Since the scandal emerged, Liu has observed “widespread” censorship of the Mask Park incident on the Chinese internet. Posts with potential social impact, especially those related to feminism, are frequently censored.
“If the Chinese government had the will, they could undoubtedly shut down the group,” Liu noted. “The scale of [MaskPark] is significant. Cases of this magnitude have not gone unchecked in recent years.”
Nevertheless, Liu said he is not surprised. “Such content has always existed on the Chinese internet.”
In China, individuals found guilty of disseminating pornographic material can face up to two years in prison, while those who capture images without consent may be detained for up to ten days and fined. The country also has laws designed to protect against sexual harassment, domestic violence, and cyberbullying.
However, advocates argue that the existing legal framework falls short. Victims often find themselves needing to gather evidence to substantiate their claims, as explained by Xirui*, a Beijing-based lawyer specializing in gender-based violence cases.
“Certain elements must be met for an action to be classified as a crime, such as a specific number of clicks and subjective intent,” Xirui elaborated.
“Additionally, public security cases carry a statute of limitations of only six months, after which the police typically will not pursue the case.”
The Guardian contacted China’s Foreign Ministry for a statement.
Beyond legal constraints, victims of sexual offenses often grapple with shame, which hinders many from coming forward.
“There have been similar cases where landlords set up cameras to spy on female tenants. Typically, these situations are treated as privacy violations, which may lead to administrative detention, while victims seek civil compensation,” explained Xirui.
To address these issues, the government could strengthen specialized laws, enhance gender-based training for law enforcement personnel, and encourage courts to provide guidance with examples of pertinent cases, as recommended by legal experts.
For Li, the recent occurrences reflect a pervasive tolerance for and lack of effective law enforcement regarding these issues in China. Instead of prioritizing the fight against sexist and abusive content online, authorities seem more focused on detaining female writers involved in homoerotic fiction and censoring victims of digital abuse.
“The rise of deepfake technology and the swift online distribution of covertly filmed content have rendered women’s bodies digitally accessible on an unparalleled scale,” said Li. “But if the authorities truly wished to address these crimes, tracking and prosecuting them would be entirely feasible, provided they invested the necessary resources.”
*Name changed
Additional research by Lillian Yang and Jason Tang Lu
The UK’s new online safety laws are generating considerable attention. As worries intensify about the accessibility of harmful online content, regulations have been instituted to hold social media platforms accountable.
However, just days after their implementation, novel strategies for ensuring children’s safety online have sparked discussions in both the UK and the US.
Recently, Nigel Farage, leader of the populist Reform UK party, found himself in a heated exchange with a Labour government minister after announcing his intent to repeal the law.
In parallel, US Republican lawmakers met with British ministers and the communications regulator Ofcom. The ramifications of the new law are also being watched closely in Australia, where plans are afoot to bar social media use for those under 16.
Experts note that the law embodies a tension between swiftly eliminating harmful content and preserving freedom of speech.
Zia Yusuf, a senior Reform UK figure, echoed the party’s criticism of the law.
Responding to criticism of the UK legislation, technology secretary Peter Kyle remarked, “If individuals like Jimmy Savile were alive today, they would still commit crimes online, and Nigel Farage claims to be on their side.”
Kyle was referring to measures in the law intended to shield children from grooming via messaging apps. Farage condemned the technology secretary’s comments as “unpleasant” and demanded an apology, which is unlikely to be forthcoming.
“To suggest we would do anything to assist individuals like Jimmy Savile is below the belt,” Farage added.
Concerns about the law are not confined to the UK. US Vice President JD Vance has claimed that freedom of speech in the UK is “retreating.” Last week, Republican Rep. Jim Jordan, a critic of the legislation, led a group of US lawmakers in discussions with Kyle and Ofcom regarding the law.
Jordan labeled the law as “UK online censorship legislation” and criticized Ofcom for imposing regulations that “target” and “harass” American companies. A bipartisan delegation also visited Brussels to explore the Digital Services Act, the EU’s counterpart to the online safety law.
Scott Fitzgerald, a Republican member of the delegation, noted the White House would be keen to hear the group’s findings.
Concerns from the Trump administration have even extended to threats of visa restrictions against Ofcom and EU personnel. In May, the State Department announced it would block entry to the US for “foreigners censoring Americans.” Ofcom has said it wants “clarity” regarding the planned visa restrictions.
The intersection of free speech concerns with economic interests is notable. Major tech platforms including Google, YouTube, Facebook, Instagram, WhatsApp, Snapchat, and X are all based in the US and face fines of up to £18 million or 10% of global revenue, whichever is greater, for violations. For Meta, the parent company of Instagram, Facebook, and WhatsApp, this could mean fines reaching $16 billion (£11 billion).
On Friday, X, the social media platform owned by self-proclaimed free speech advocate Elon Musk, issued a statement opposing the law, warning that it could “seriously infringe” on free speech.
Signs of public backlash are evident in the UK. A petition calling for the law’s repeal has garnered over 480,000 signatures, making it eligible for debate in Parliament, and was shared on social media by far-right activist Tommy Robinson.
Tim Bale, a professor of politics at Queen Mary University of London, is skeptical that the law will become a major voting issue.
“No petition or protest has significant traction for most people. While this resonates strongly with those online—on both the right and left—it won’t sway a large portion of the general populace,” he said.
According to a recent Ipsos Mori poll, three out of four UK parents are worried about their children’s online activities.
Beeban Kidron, a crossbench peer and prominent advocate for online child safety, told the Guardian she is “more than willing to engage Nigel Farage and his colleagues on this issue.”
“If the objection is to companies targeting algorithms at children, why would repeal put children back in the hands of Big Tech?”
The new under-18 protections that prompted the latest row mandate age verification on adult sites to prevent underage access. There are also measures to protect children from content that endorses suicide, self-harm, and eating disorders, and to curtail the circulation of material that incites hatred or promotes harmful substances and dangerous challenges.
Some content appears to have been age-gated even where its connection to those categories is unclear. In an article for the Daily Telegraph, Farage alleged that footage of anti-immigrant protests had been “censored,” along with content related to the Rotherham grooming gang scandal.
Similar instances were observed on X, which flagged a speech by Conservative MP Katie Lam regarding the UK’s child grooming scandal. The content was labeled with a notice stating, “local laws temporarily restrict access to this content until X verifies the user’s age.” The Guardian was unable to access an age verification service on X, suggesting that, until age checks are fully operational, the platform defaults many users to a child-friendly experience.
X was contacted for comment about its age checks.
On Reddit, forums devoted to alcohol abuse and pet care now require age checks before granting access. A Reddit spokesperson confirmed that these checks are enforced under the online safety law to limit content that is illegal or harmful to users under 18.
Big Brother Watch, an organization focused on civil liberties and privacy, noted that examples from Reddit and X exemplify the overreach of new legislation.
An Ofcom representative stated that the law aims to protect children from harmful and criminal content while simultaneously safeguarding free speech. “There is no necessity to limit legal content accessible to adult users.”
Mark Jones, a partner at London-based law firm Payne Hicks Beach, cautioned that compliance concerns might lead social media platforms to over-censor legitimate content as they juggle their obligations to remove illegal material and content harmful to children.
He added that the pressure to address harmful content quickly while respecting freedom of speech makes wrong calls likely.
“To effectively curb the spread of harmful or illegal content, decisions must be made promptly; but that urgency can lead to incorrect choices. Such is the reality we face.”
The latest initiatives from the online safety law are only the beginning.
Elon Musk’s platform, X, has warned that the UK’s Online Safety Act (OSA) may “seriously infringe” on free speech due to its measures aimed at shielding children from harmful content.
The social media company said the law’s ostensibly protective aims are being undermined by the aggressive enforcement tactics of the communications watchdog, Ofcom.
In a statement shared on its platform, X remarked: “Many individuals are worried that initiatives designed to safeguard children could lead to significant violations of their freedom of expression.”
It further stated that the UK government was likely aware of the risks, having made “conscious decisions” to enhance censorship under the guise of “online safety.”
“It is reasonable to question if British citizens are also aware of the trade-offs being made,” the statement added.
The law, a point of contention politically on both sides of the Atlantic, is facing renewed scrutiny following the implementation of new restrictions on July 25th regarding access to pornography for those under 18 and content deemed harmful to minors.
Musk, who owns X, labeled the law a “suppression of the people” shortly after the new rules took effect. He also reposted a petition advocating repeal of the law, which has garnered over 450,000 signatures.
X found itself compelled to place age restrictions on certain content. Reform UK joined the outcry, pledging to abolish the act, a commitment that led British technology secretary Peter Kyle to accuse Nigel Farage of aligning himself with the pedophile Jimmy Savile, prompting Farage to describe the comments as “below the belt” and deserving of an apology.
Regarding Ofcom, X claimed the regulator is employing “heavy-handed” tactics in implementing the act, characterized by “a rapid increase in enforcement resources” and “additional layers of bureaucratic surveillance.”
The statement warned: “The commendable intentions of this law risk being overshadowed by the expansiveness of its regulatory scope. A more balanced and collaborative approach is essential to prevent undermining free speech.”
While X aims to comply with the law, the threat of enforcement and penalties, potentially reaching 10% of global revenue for social media platforms like X, could lead to over-censorship of legitimate content to avoid repercussions.
The statement also referred to plans for a national internet intelligence investigations team intended to monitor social media for indications of anti-migrant sentiment. While X acknowledged the proposal could be framed as a safety measure, it asserted that it “clearly extends far beyond that intention.”
“This development has raised alarms among free speech advocates, who characterize it as excessively restrictive. A balanced approach is essential for safeguarding individual freedoms, fostering innovation, and protecting children.”
A representative from Ofcom stated that the OSA includes provisions to uphold free speech.
They asserted: “Technology companies must address criminal content and ensure children do not access defined types of harmful material without needing to restrict legal content for adult users.”
The UK’s Department for Science, Innovation and Technology has been approached for comment.
Anger has surged on Chinese social media following reports of online groups, reportedly comprising hundreds of thousands of men sharing unauthorized photos of women, including explicit images.
A report published last week by Southern Metropolis outlined a group on the encrypted messaging app Telegram known as the “Mask Park Tree Hole Forum.” The group boasts over 100,000 members and claims to be “exclusively composed of Chinese men.”
These individuals allegedly circulated sexually explicit images of women, captured either in private settings or through hidden cameras disguised as everyday objects such as plug sockets and shoes.
The incident has drawn parallels to South Korea’s “nth room” scandal, where women were coerced into sharing explicit photos within a Telegram group.
While Telegram is blocked in China, users can still access it via a virtual private network (VPN) that bypasses location restrictions.
The hashtag linked to the scandal had garnered over 110 million views on Weibo by Thursday. However, there are signs of censorship, as some related searches yield results indicating, “According to relevant laws and regulations, this content cannot be viewed.” Earlier reports from Reuters noted the hashtag received over 270 million views.
“Women’s lives are not a male erotic novel,” commented one user on Xiaohongshu, a platform similar to Instagram.
Other users on Xiaohongshu echoed the sentiment.
In South Korea, the leader of the chat group received a sentence of 40 years in prison.
In China, those who photograph individuals without consent face penalties of up to 10 days of detention and a fine of 500 yuan (£53). Disseminating pornographic material can lead to prison sentences of up to two years.
The Mask Park scandal isn’t an isolated incident; last year, a tech company owner in Beijing was found to have secretly recorded over 10,000 videos of female employees in the bathroom, receiving only a 10-day detention as punishment. “Ten days are merely encouragement,” remarked one Weibo user.
Criminal law professor Lao Dongyan from Tsinghua University stated on Weibo that Chinese law treats unauthorized filming as an indecent crime, rather than a violation of women’s rights.
“Women who are secretly filmed are the primary victims. Reducing their experiences to indecency material is equivalent to categorizing them as participants in pornographic content, which is absurd,” Lao commented.
As authorities continue to limit civil discourse and behaviors, addressing feminism and women’s rights in China becomes increasingly challenging. Nonetheless, some women have discovered ways to counteract misogyny publicly, including through comedy.
In a recent episode of the popular stand-up show The King of Stand-up Comedy, comedian Huang Yijin joked about putting on makeup alone in a hotel room.
Recent statistics indicate that since the implementation of age verification for pornographic websites, the UK is conducting an additional five million online age checks daily.
The Association of Age Verification Providers (AVPA) reported a significant increase in age checks across the UK since Friday, coinciding with the enforcement of mandatory age verification under the Online Safety Act.
The figures were reported by Iain Corby, executive director of the AVPA.
Meanwhile, the use of virtual private networks (VPNs), which mask users’ actual locations and allow them to bypass restrictions on blocked sites, is rising rapidly in the UK. Four of the top five free applications in the UK’s Apple App Store are VPNs, with popular provider Proton reporting a 1,800% surge in downloads.
Last week, Ofcom, the UK communications regulator, indicated it may open formal investigations into services reported to have inadequate age checks. It said it will actively monitor compliance with age verification requirements and may investigate specific services as needed.
The AVPA, the industry association representing UK age verification companies, has been tallying the checks performed for UK porn providers, which were mandated to implement “highly effective” age verification by July 25th.
Member companies were asked to report the number of checks they conducted each day to provide highly effective age assurance.
While the AVPA said it could not provide a baseline for comparison, it noted that effective age verification is new to dedicated UK porn sites, which previously required only a click-through confirmation of age.
An Ofcom spokesperson said: “Until now, children could easily stumble upon pornographic and other online content without seeking it out. Age checks are essential to prevent that. We must ensure platforms are adhering to these requirements and anticipate enforcement actions against non-compliant companies.”
Ofcom stresses that service providers must not promote the use of VPNs to circumvent age checks.
Penalties for breaching online safety regulations, including inadequate age verification, range from fines of up to 10% of global revenue to, in severe cases, court orders blocking access to the site.
Age verification methods endorsed by Ofcom and used by AVPA members include facial age estimation, which analyzes a person’s age via live photos and videos; verification through credit card providers, banks, or mobile network operators; photo ID matching, where a user’s ID is compared to a selfie; and a “digital identity wallet” containing proof of age.
Prominent pornographic platforms, including Pornhub, the UK’s leading porn site, have pledged to adopt the stringent age verification measures mandated by the Act.
The law compels sites and applications to protect children from various harmful content, specifically material that encourages suicide, self-harm, and eating disorders. The largest platforms must also act to prevent the dissemination of abusive content targeting individuals with characteristics protected under equality laws, such as age, race, and gender.
Free speech advocates argue that the child protection measures have led to unnecessary age-gating of material on X, along with several Reddit forums dedicated to discussing alcohol abuse.
Reddit and X have been approached for comment.
Media organizations have been alerted to the potential “devastating impacts” on their digital audiences as AI-generated summaries start to replace traditional search results.
Google’s AI Overviews feature, which uses blocks of text to summarize search results, is causing major concern among media owners, some of whom perceive it as a fundamental threat to organizations that rely on search traffic.
AI summaries can offer all the information users seek without necessitating a click on the original source, while links to traditional search results are relegated further down the page, thereby decreasing user traffic.
An analysis by the analytics company Authoritas indicates that websites previously ranked at the top of search results may experience around a 79% decrease in traffic for specific queries when results are presented beneath AI summaries.
The study also highlighted that links to YouTube, owned by Google’s parent company Alphabet, are more prominent than traditional search results. This investigation is part of a legal challenge against the UK’s competition regulator concerning the implications of Google’s AI summarization.
In a statement, a Google representative described the study as “based on inaccurate and flawed assumptions and analysis,” saying it relied on a set of searches unrepresentative of all queries and produced outdated estimates of news website traffic.
“Users are attracted to AI-driven experiences, and AI features in search enable them to pose more questions, creating new avenues for discovering websites,” the spokesperson stated. “We consistently direct billions of clicks to websites daily and have not observed a significant decline in overall web traffic, as suggested.”
A second study also found a substantial decline in referral traffic attributable to Google’s AI Overviews. A month-long study by the US think tank Pew Research Center found that users clicked a link beneath an AI summary in only one out of every 100 searches.
A Google spokesperson said that study relied on a skewed query set and a flawed methodology.
Senior executives in news organizations claim that Google has consistently declined to share the necessary data to assess the impact of AI summaries.
Illustration: Guardian Design / Rich Cousins
Although AI Overviews appear on only a portion of Google searches, UK publishers say they are already feeling the effects. MailOnline executive Carly Steven reported a significant decline in clicks from search results featuring AI summaries in May, with click-through rates falling by 56.1% on desktop and 48.2% on mobile.
The legal action before the UK’s Competition and Markets Authority is being brought by the tech justice organization Foxglove alongside the Independent Publishers Alliance and the Movement for an Open Web.
Owen Meredith, the CEO of the News Media Association, accused Google of “keeping users within their own enclosed spaces and trying to monetize them by incorporating valuable content, including news produced through significant efforts of others.”
“The current circumstances are entirely unsustainable, and eventually, quality information will be eliminated online,” he stated. “The Competition and Markets Authority possesses tools to address these challenges, and action must be taken swiftly.”
Rosa Curling, director of Foxglove, said the new research highlights “the devastating effects the Google ‘AI Overview’ has already inflicted on the UK’s independent news sector.”
“If Google merely takes on the job of journalists and presents it as its own, that would be concerning enough,” she expressed. “But what’s worse is that they use this work to promote their own tools and advantages while making it increasingly difficult for the media to connect with the readers vital for their survival.”
The importance of online safety for children in the UK is reaching a pivotal moment. Starting this Friday, social media and other internet platforms must take action to safeguard children or face substantial fines for non-compliance.
This marks a critical test of the online safety law, a landmark regulation covering platforms such as Facebook, Instagram, TikTok, YouTube, and Google. Here’s an overview of the new rules.
What will happen on July 25th?
Companies subject to the law are required to implement safety measures that shield children from harmful content. Specifically, all pornography sites must establish stringent age verification protocols. According to Ofcom, the UK communications regulator, 8% of children aged 8 to 14 accessed online pornographic sites or apps within a month.
Furthermore, social media platforms and major search engines must block access for children to pornography and content that promotes or encourages suicide, self-harm, and eating disorders. This may involve completely removing certain feeds for younger users. Hundreds of businesses will be impacted by these regulations.
Platforms must also minimize the distribution of other potentially harmful content, such as promoting dangerous challenges, substance abuse, or instances of bullying.
What are the suggested safety measures?
Recommended measures include ensuring that content recommendation algorithms exclude harmful material, putting procedures in place to rapidly remove dangerous content, and giving children a straightforward way to report concerns. Businesses retain flexibility if they believe alternative measures will meet their child safety responsibilities.
Services deemed “high risk”, like major social media platforms, must use “highly effective” age verification methods to identify users under 18. A platform hosting harmful content that does not use age checks must assume children are present and ensure an age-appropriate experience.
X states that if it cannot determine a user’s age as 18 or older, it defaults to sensitive content settings, thereby restricting adult material. They are also integrating age estimation technology and ID verification to ensure users are not underage. Meta, the parent company of Instagram and Facebook, claims to have a comprehensive approach to age verification that includes a teen account feature set by default for users under 18.
Mark Jones, a partner at the law firm Payne Hicks Beach, said companies are still working out what the Online Safety Act requires of them.
The Molly Rose Foundation, set up by the family of British teenager Molly Russell, who tragically lost her life in 2017 due to harmful online content, is advocating for further changes, including the prohibition of perilous online challenges and requiring platforms to proactively mitigate depressive and body image-related content.
How will age verification be implemented?
Age verification methods for pornography providers supported by Ofcom include: assessing a person’s age through live photos and videos (facial age estimation), verifying age via a credit card provider, bank, or mobile network operator, matching photo ID against a selfie, and using a “digital identity wallet” that contains proof of age.
Ria Moody, a lawyer at Linklaters, said age checks must be highly effective: Ofcom has indicated that methods which cannot reliably establish whether a user is over 18, such as simple self-declaration, do not qualify, so platforms cannot rely on them.
What does this mean in practice?
Pornhub, the UK’s most frequented online porn site, has stated it will implement a “regulatory approved age verification method” by Friday, though specific methods have yet to be disclosed. Another adult site, OnlyFans, is already using facial age verification software, which estimates users’ ages without saving their facial images, relying instead on data from millions of other images. A company called Yoti provides this software and has also made it available on Instagram.
Last week, Reddit began verifying the ages of users accessing forums and threads containing adult content. The platform uses technology from a company named Persona, which verifies age using an uploaded selfie or a photo of government-issued ID. Reddit says it does not retain the photos, instead storing only a user’s verification status to streamline future visits.
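The “store the outcome, not the photo” pattern Reddit describes is common across verification services. The sketch below is a hypothetical Python illustration of that pattern, not Persona’s or Reddit’s actual API: the uploaded selfie is handed to an external estimator and then discarded, and only a minimal status record is kept.

```python
import time
import uuid

def verify_age(user_id: str, selfie_bytes: bytes, estimate_age) -> dict:
    """Estimate age from an uploaded selfie, then keep only the outcome.

    `estimate_age` stands in for an external estimation service; the
    image itself is never written to storage.
    """
    estimated = estimate_age(selfie_bytes)
    del selfie_bytes  # drop the only reference to the image

    # Persist just enough to streamline future visits.
    return {
        "record_id": str(uuid.uuid4()),
        "user_id": user_id,
        "over_18": estimated >= 18,
        "checked_at": int(time.time()),
    }
```

On later visits the platform need only consult the stored `over_18` status, so the user is not re-checked each time and no biometric data sits on the server.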
How accurate is facial age verification?
The software allows websites or apps to set a “challenge” age (e.g., 20 or 25) to minimize the number of underage users accidentally accessing content. When Yoti set a challenge age of 20, less than 1% of 13-17-year-olds were mistakenly verified.
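The challenge-age mechanism amounts to a small decision rule. This is a minimal sketch under the assumptions described above (an estimator with an error margin of a few years, and photo ID as the fallback), not Yoti’s actual implementation:

```python
CHALLENGE_AGE = 20  # deliberately above 18 to absorb estimation error

def admit(estimated_age: float) -> str:
    """Gate access on an age estimate with a safety buffer."""
    if estimated_age >= CHALLENGE_AGE:
        return "allow"              # comfortably above the legal threshold
    return "fallback_id_check"      # borderline: require photo ID instead
```

Raising the challenge age to 25 widens the buffer, further cutting the chance that a 17-year-old with an overestimated age slips through, at the cost of routing more adults to the ID fallback.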
What other methods are available?
Another direct approach entails requiring users to present formal identification, like a passport or driver’s license. Importantly, the ID details need not be stored and can be used solely to verify access.
Will all pornographic sites conduct age checks?
They are expected to, but many smaller sites might try to circumvent the regulations, fearing it will deter demand for their services. Industry representatives suggest that those who disregard the rules may await Ofcom’s response to violations before determining their course of action.
How will child protection measures be enforced?
Ofcom has a broad spectrum of penalties it can impose under the law. Companies can face fines of up to £18 million or 10% of their global revenue, whichever is greater, potentially amounting to $16 billion for Meta. Additionally, sites or apps can receive formal warnings, and for severe violations Ofcom may seek a court order restricting the availability of the site or app in the UK.
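The arithmetic behind those headline figures is the “whichever is greater” rule. As a rough sketch, with an illustrative revenue figure chosen to match the roughly £11 billion cap cited above for Meta:

```python
FIXED_CAP_GBP = 18_000_000  # £18m statutory figure
REVENUE_SHARE = 0.10        # 10% of global revenue

def max_fine_gbp(global_revenue_gbp: float) -> float:
    """Maximum penalty: the greater of £18m and 10% of global revenue."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * global_revenue_gbp)

# A company with about £110bn in global revenue faces a cap near £11bn,
# dwarfing the £18m figure that binds only for smaller services.
print(f"£{max_fine_gbp(110_000_000_000):,.0f}")  # £11,000,000,000
```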
Moreover, senior managers at technology firms could face up to two years in prison if they are found criminally liable for repeated breaches of their obligations to protect children and for ignoring enforcement notices from Ofcom.
In the image, a group of friends is gathered at the bar, with smoke curling upwards from a cigarette in hand. Additional cigarettes are in open packets resting on the table between them. This is not a photo from before the ban, but rather one shared on social media from a Metaverse gathering.
Virtual online environments are emerging as a new frontier for marketing, as tobacco and alcohol promoters target the youth without facing any legislative repercussions.
A report presented at the World Conference on Tobacco Control in Dublin last month provided several examples. New technologies such as digital token launches and sponsorships from vaping companies in online games are being used to promote smoking and vaping.
This information is derived from a surveillance initiative known as Canary—acting like a canary in the coal mine. The project is managed by public health organizations around the globe.
The caption for this post reads, “I’m drinking coffee at Metaverse.” Photo: Icperience.id via Instagram
“Cigarette companies are no longer waiting for regulations to catch up. They are proactively advancing while we’re still trying to comprehend what’s happening on social media, and they’re already operating in unregulated spaces like the Metaverse. They utilize NFTs [non-fungible tokens] and immersive events to attract young audiences to their offerings.”
In India, one tobacco company has launched an NFT symbolizing ownership of digital assets, celebrating its 93rd anniversary.
Canary monitors and analyzes tobacco marketing on various social media platforms and news sites in India, Indonesia, and Mexico, and has recently expanded to Brazil and China, covering alcohol and ultra-processed food marketing as well.
The Metaverse is not fully monitored. This 3D immersive internet allows interactions in digital environments using technologies like virtual reality headsets. However, references to activities happening there are captured through links and information shared on traditional social media platforms.
Researchers suggest that children are more susceptible to tobacco marketing in this new digital arena, given the age demographics—over half of the active Metaverse users are under 13 years old.
Social media companies possess extensive insights into how to boost engagement and attract users back for more, according to Dr. Mary-Ann Etiebet, CEO of Vital Strategies.
“When you combine this with the tobacco industry’s experience in hooking individuals, these two elements converge in a murky, unknown space.”
Magsambol describes it as “a new battleground for all of us” in confronting entities that push products detrimental to health.
“My daughter is usually quite reserved, but in [the gaming platform] Roblox, while battling zombies and ghosts, she morphs into an avatar resembling a blend of Alexander the Great, Bruce Lee, and John Wick. She becomes quite bloodthirsty,” she remarked.
“Our behaviors shift. Social norms evolve… the tobacco industry is highly aware of this, making it easier to subtly promote the idea that anything is possible.”
The Metaverse scene encountered by the team in Indonesia was showcased on the Instagram account of music enthusiasts linked to Djarum, one of Indonesia’s largest tobacco firms. Another example showed a group enjoying coffee, one of them asking for a light.
All of this contributes to an initiative aimed at “normalizing” smoking and vaping, according to Magsambol. “Such behaviors are enacted by your avatar, but do they seep into your real life?”
“Digital platforms are being leveraged to evade traditional advertising barriers and appeal to younger audiences,” she states. “This scenario reflects not merely a shift in marketing strategies, but a transformation in influence dynamics.”
Other researchers have presented instances where alcohol is marketed and sold in virtual stores.
Online marketing is a global concern. At the same conference, Irish researchers reported that 53% of young people surveyed saw e-cigarette posts on social media daily.
Officials from the World Health Organization (WHO) note that the increase in youth smoking in Ukraine can be partially attributed to Covid and the war pushing children “online,” exposing them to various forms of marketing.
In India, youth ambassador Agamroop Kaur is leading a campaign for children to stay tobacco-free, including social media outreach to educate schoolchildren about the risks of cigarettes and vaping. She has noticed vapes being marketed as “wellness” products.
“I believe it’s crucial to educate young individuals about recognizing ads, understanding their implications, and realizing that they might not even be visibly tied to the tobacco industry. [Content posted by] influencers hold significant sway, as they help build awareness. Digital natives, when engaged on social media, can discern what’s genuine and what’s not; recognizing these attractions as empty is vital, especially for younger audiences.”
The WHO Framework Convention on Tobacco Control mandates strict regulations regarding tobacco advertising, promotions, and sponsorships. Last year, signatories acknowledged the necessity for action to focus on “digital marketing channels such as social media that amplify tobacco marketing exposure among adolescents and young individuals.”
A boy smokes a cigarette in Yogyakarta, Indonesia. Photo: Ulet Ifansasti/Getty Images
Yet, there are no straightforward solutions, as Andrew Black from the framework’s secretariat points out.
“The difficulty in regulating the Internet isn’t inherently linked to cigarettes. Rather, it’s a tangible challenge for governments to devise ways to safeguard societal norms in a landscape where technological advancements have transcended borders.”
Nandita Murukutla, who leads the Canary initiative, urges regulators to exercise caution.
The UK’s communications regulator has vowed that new age checks will mark a “significant milestone” in the pursuit of online safety for children, even as campaigners warn the measures must be strictly enforced against major tech firms.
Ofcom’s chief executive, Melanie Dawes, said on Sunday that the new protections, to be introduced later this month, mark a pivotal change in how the world’s largest online platforms are regulated.
However, she faces mounting pressure from advocates, many of whom are parents who assert that social media contributed to the deaths of their children, claiming that the forthcoming rules could still permit minors to access harmful content.
Dawes stated to the BBC on Sunday: “This is a considerable moment because the law takes effect at the end of the month.”
“At that point, we expect broader safeguards for children to become operational. We aim for platforms that host material inappropriate for under-18s, such as pornography and content related to suicide and self-harm, to either be removed or to implement robust age checks for those materials.”
She continued: “This is a significant moment for the industry and a critical juncture.”
Melanie Dawes (left) remarked that age checks are “a significant milestone for the industry.” Photo: Jeff Overs/BBC/PA
The regulations set to take effect on July 25th are the latest steps under the online safety law enacted in 2023 by the Conservative government.
The legislation was partially influenced by advocates like Ian Russell, whose 14-year-old daughter, Molly, tragically took her own life in 2017 after being exposed to numerous online resources concerning depression, self-harm, and suicide.
Tory ministers were criticized in 2022 for removing sections of the bill that would have regulated “legal but harmful” content.
Russell, who previously called the act “timid,” expressed concerns on Sunday about its enforcement by Ofcom. He noted that while the regulator allows tech companies to determine their own verification checks, it will evaluate the effectiveness of those measures.
Russell commented: “Ofcom’s public relations often portray a narrative where everything will improve soon. It’s clear that Ofcom must not only prioritize PR but must act decisively.”
“They are caught between families who have suffered losses like mine and the influence of powerful tech platforms.”
Ian Russell, a father currently advocating for child internet safety, expressed concerns about the enforcement of the law. Photo: Joshua Bratt/PA
Russell pressed Dawes to leverage her influence to urge the government for more stringent actions against tech companies.
Some critics charge that the legislation leaves substantial regulatory loopholes, including a lack of action against misinformation.
A committee of lawmakers recently asserted that social media platforms facilitated the spread of misinformation following a murder in Southport last year, contributing to the unrest that ensued. Labour MP Chi Onwurah, chair of the Science and Technology Committee, remarked that the online safety law “is unraveling.”
Dawes has not sought authority to address misinformation, but stated, “If the government chooses to broaden the scope to include misinformation or child addiction, Ofcom would be prepared to implement it.”
Separately, she criticized the BBC over its handling of Glastonbury coverage, questioning whether the broadcaster should have continued streaming Bob Vylan’s performance amid the band’s anti-Israel chants.
“The BBC needs to act more swiftly. We need to investigate these incidents thoroughly. Otherwise, there’s a genuine risk of losing public trust in the BBC,” she stated.
While I browse social media, I often feel disheartened by the overwhelming negativity, as if the world were ablaze with hatred. Yet stepping into the streets of New York City for a coffee or lunch with friends presents a stark contrast: everything feels calm. This disparity between the digital realm and my everyday life is jarring.
My work addresses issues like intergroup conflict, misinformation, technology, and climate change, so I am steeped in humanity’s challenges. Online, discussion of these issues runs at the same fervor as reactions to the White Lotus finale or the latest YouTuber scandal. Everything seems either exaggeratedly amazing or utterly terrible. But is that truly how most of us feel? No. Recent research indicates that the online environment is skewed by a tiny, highly active user base.
In a paper I co-authored with Claire Robertson and Carina Del Rosario, we found significant evidence that social media does not neutrally represent society; instead, it acts as a funhouse mirror, amplifying extreme voices while obscuring more moderate and nuanced perspectives. Much of this distortion stems from a small percentage of hyperactive users: just 10% of users generate about 97% of political tweets.
Take Elon Musk’s own platform, X, as a case in point. Despite its vast user base, a select few create the majority of political content. Musk himself tweeted 1,494 times within the first 15 days of the so-called Department of Government Efficiency (DOGE) cuts, and his prolific posting often spread misinformation to his 221 million followers.
On February 2nd, he claimed: “Did you know that USAID, using YOUR tax dollars, funded bioweapon research, including Covid-19, that killed millions of people?” This fits a pattern in which a small number of users drive misinformation: just 0.1% of users share 80% of false news. Twelve accounts, dubbed the “disinformation dozen,” were responsible for much of the vaccine misinformation seen on Facebook during the pandemic, creating a misleading perception of vaccine hesitancy.
Similar trends can be identified across the digital landscape. While a small faction engages in toxic behaviors, they disproportionately share hostile or misleading content on various platforms, from Facebook to Reddit. Most individuals do not contribute to fueling the online outrage; however, superusers dominate our collective perception due to their visibility and activity.
This leads to broader societal issues, as humans form mental models of what they perceive others think, shaping social norms and group dynamics. Unfortunately, on social media, this shortcut can misfire. We encounter not a representative sampling of views, but rather an extreme flow of emotionally charged content.
Consequently, many individuals mistakenly believe society is far more polarized and misinformed than it is. We come to view those across generational gaps, political divides, or fandoms as radical, malicious, or simply foolish. Our information diets are shaped by a sliver of humanity that incessantly posts about their work, identity, or obsessions.
Such distortion fosters pluralistic ignorance, affecting actions based on a misinterpretation of collective beliefs and behaviors. Think of voters who only witness outrage-driven narratives, leading them to assume there’s no common ground on issues like immigration and climate change.
Yet, the challenge isn’t solely about extremists—it’s the design and algorithms of these platforms that exacerbate the situation. Built to boost engagement, these algorithms favor sensational or divisive content, promoting users who are most likely to skew shared realities.
The issue compounds. Imagine a bustling restaurant where each table raises its voice to be heard over the next, until soon it seems everyone is shouting. The same dynamic plays out online, with users exaggerating their views to capture attention and approval. Even those who are not typically extreme may mirror such behavior to gain traction.
Most of us are not diving into trolling battles on our phones; we’re preoccupied with family, friends, or simply seeking lighthearted entertainment online. Yet, our voices are overshadowed. We have effectively surrendered the mic to the most divisive individuals, allowing them to dictate norms and actions.
With over 5 billion people engaging on social media, this technology is here to stay. However, the toxic dynamics I’ve described don’t have to prevail. The initial step is recognizing this illusion and understanding that a silent majority often exists behind every heated thread. As users, we can take back control by curating our feeds, avoiding anger traps, and ignoring sensational content. Consider it akin to adopting a healthier, less processed informational diet.
In a recent series of experiments, we paid participants to unfollow the most divisive political accounts on X. A month later, they reported 23% less hostility toward opposing political groups. Their experience was so positive that nearly half declined to refollow those accounts once the study ended, and those who maintained the healthier feed reported reduced hostility even 11 months later.
Platforms can easily adjust algorithms to avoid highlighting the most outrageous voices, instead prioritizing more balanced or nuanced content. This is what most people desire. The Internet is a powerful tool that can provide value. However, if we continue to reflect only a distorted funhouse version of reality shaped by extreme users, we will all face the repercussions.
Jay Van Bavel is a psychology professor at New York University.
The number of online videos depicting child sexual abuse generated by artificial intelligence has surged as pedophiles exploit rapid advances in the technology.
According to the Internet Watch Foundation, AI-generated abuse videos have crossed a threshold, becoming nearly indistinguishable from real imagery, with a sharp increase observed this year.
In the first half of 2025, the UK-based internet safety watchdog examined 1,286 AI-generated videos containing illegal child sexual abuse material (CSAM), a sharp increase from just two during the same period last year.
The IWF reported that over 1,000 of these videos fall under Category A abuse, the most severe classification of such material.
The organization said that billions invested in AI have produced widely accessible video generation models that pedophiles are exploiting.
“It’s a highly competitive industry with substantial financial incentives, unfortunately giving perpetrators numerous options,” stated an IWF analyst.
This video surge is part of a 400% rise in URLs associated with AI-generated child sexual abuse content in the first half of 2025, with IWF receiving reports of 210 such URLs compared to 42 last year.
The IWF found one post on a dark web forum in which a user remarked on the speed of AI improvements and how quickly pedophiles had adapted their tools to exploit the new developments.
IWF analysts observed that the images seem to be created by utilizing free, basic AI models and “fine-tuning” these models with CSAM to produce realistic videos. In some instances, this fine-tuning involved a limited number of CSAM videos, according to IWF.
The most lifelike AI-generated abuse videos encountered this year were based on actual victims, the watchdog reported.
Interim CEO of IWF, Derek Ray-Hill, remarked that the rapid advancement of AI models, their broad accessibility, and their adaptability for criminal purposes could lead to a massive proliferation of AI-generated CSAM online.
He warned that there is a real risk of AI-generated CSAM flooding the clear web, and cautioned that the rise of such content might encourage criminal activities like child trafficking and modern slavery.
The replication of existing victims of sexual abuse in AI-generated images allows pedophiles to significantly increase the volume of CSAM online without having to exploit new victims, he added.
The UK government is intensifying efforts to combat AI-generated CSAM by criminalizing the ownership, creation, or distribution of AI tools designed to produce abusive content. Those found guilty under this new law may face up to five years in prison.
Additionally, it is now illegal to possess manuals that instruct potential offenders on how to use AI tools for creating abusive images or for child abuse. Offenders could face up to three years in prison.
In a February announcement, home secretary Yvette Cooper stated: “It is crucial to address child sexual abuse online, not just offline.”
AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalizes the production, distribution, and possession of indecent photographs or “pseudo-photographs” of children.
I plan to buy the Guardian from the newsagent. Digital media are intertwined with analog ones, and print lets you spot the trends. I’ll grab a copy a bit later; I gleaned some useful insights from reading the Economist last week.
I wish technology had paused in 1996, when playing Mario Kart was enough and the most software anyone needed was Microsoft Excel.
Aidan Jones: 10 Funniest Things I’ve Encountered (On the Internet)
I avoid social media, which leads others to think I possess mental clarity. Really, though, I’d attribute it to the shop hours of Harvey Norman.
Here are my favorite moments I’ve stumbled upon on TV via YouTube late at night. They all seem to speak for themselves.
It feels very immersive, as if you’re right there, even while feeling uncomfortable.
Dan Russ is a comedian. He will be performing his award-winning show “Tropical Death Paucity” at Monkey Barrel (Cabaret Voltaire) from July 31 to August 24 as part of the Edinburgh Fringe.
Teenagers’ language may make online bullying difficult to detect
Vitapix/Getty Images
The terminology of Generation Alpha is evolving faster than educators, parents, and AI can keep up with.
Manisha Mehta, a 14-year-old student at Warren E Hyde Middle School in Cupertino, California, working with Fausto Giunchiglia of the University of Trento in Italy, examined 100 expressions popular among Generation Alpha, those born from 2010 to 2025, sourced from gaming, social media, and video platforms.
The researchers then asked 24 of Mehta’s classmates, aged 11 to 14, to evaluate these phrases alongside contextual screenshots. The volunteers assessed their understanding of each phrase, the contexts in which it was used, and whether it carried potential safety risks or harmful interpretations. The researchers also asked parents, professional moderators, and four AI models (GPT-4, Claude, Gemini, and Llama 3) to perform the same analysis.
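The study’s actual evaluation harness is not published here, but the idea of asking an LLM to judge a phrase in context can be illustrated with a minimal sketch. It assumes the OpenAI Python SDK with an OPENAI_API_KEY set in the environment; the prompt wording, the model choice, and the classify_phrase helper are illustrative assumptions, not details taken from the study.

# Minimal sketch: asking an LLM whether a slang phrase is harmful in context.
# The prompt, labels, and helper below are illustrative assumptions, not the
# study's actual evaluation setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a content-moderation assistant. Given a Generation Alpha phrase "
    "and the context it appeared in, answer with exactly one word: "
    "HARMFUL or BENIGN."
)

def classify_phrase(phrase: str, context: str) -> str:
    """Ask the model to judge a phrase in its context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model; the study tested GPT-4
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Phrase: {phrase}\nContext: {context}"},
        ],
    )
    return response.choices[0].message.content.strip()

# The same phrase can flip meaning with context, which is exactly what the
# models struggled with:
print(classify_phrase("let him cook", "teammate is pulling off a clutch play"))
print(classify_phrase("let him cook", "classmate is rambling and being mocked"))

The hard part, as the results below suggest, is not issuing the query but the model’s grasp of fast-moving slang: the classifier is only as good as the model’s knowledge of what a phrase currently means.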
“I’ve always been intrigued by Generation Alpha’s language because it’s so distinctive; relevance shifts rapidly, and trends become outdated just as quickly,” says Mehta.
Among the Generation Alpha volunteers, 98% grasped the basic meaning of a given phrase, 96% understood the context of its use, and 92% recognized instances of harmful intent. In contrast, the AI models identified harmful usage only around 40% of the time, with individual models scoring between 32.5% and 42.3%. Parents and moderators also fell short, detecting harmful usage in just one-third of instances.
“We expected broader comprehension than we observed,” Mehta reflects. “Much of the feedback from the parents was speculative.”
Common phrases among Generation Alpha often carry double meanings depending on context. For instance, “let him cook” can signify genuine praise in gaming but may also mockingly refer to someone rambling incoherently. “Kys,” once shorthand for “know yourself,” has been repurposed to mean “kill yourself.” Another phrase that can hide malicious intent is “Is it acoustic?”
“Generation Alpha is exceedingly vulnerable online,” says Mehta. “As AI increasingly takes over content moderation, it is crucial that LLMs understand the language young people actually use.”
“It’s evident that LLMs are transforming the landscape,” asserts Giunchiglia. “This presents fundamental questions that need addressing.”
The results were presented this week at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency in Athens, Greece.
“Empirical evidence from this research highlights significant shortcomings in content moderation systems, especially concerning the analysis and protection of young people,” notes Michael Veale from University College London. “Companies and regulators must heed this and adapt as regulations evolve in jurisdictions where platform laws are designed to safeguard the young.”
Since the uproar surrounding the immigration raids in Los Angeles began, a wave of inaccurate and misleading claims about the ongoing protests has proliferated across text-based social networks. As Donald Trump dramatically ramped up federal involvement, falsehoods shared on social media intertwined with misinformation propagated through channels run by the White House itself. The blend of genuine and deceptive information paints a distorted picture of the city that strays far from the truth.
Various parts of Los Angeles have seen substantial protests over the last four days in response to the US administration’s intensified immigration enforcement. Dramatic images circulated on Saturday from downtown Los Angeles showed a car ablaze amid clashes with law enforcement, and many posts fostered the impression that chaos and violence had engulfed the entire city, despite disturbances remaining limited to specific areas within the sprawling metropolis. Trump sent 2,000 National Guard troops to the city without the consent of California Governor Gavin Newsom, prompting the state to sue over the alleged infringement of its sovereignty. Additionally, Defense Secretary Pete Hegseth has ordered approximately 700 Marines deployed to the city.
As misinformation proliferates amid both street-level and legal confrontations, social media is once again acting as an accelerant for falsehoods, a pattern also seen during the recent wildfires in Los Angeles, catastrophic hurricanes, and the COVID-19 pandemic.
Among the most egregious examples of disinformation is a video of Mexican President Claudia Sheinbaum circulated by conservative and Russian-aligned accounts in the lead-up to the protests, falsely framed as her inciting the demonstrations and pointing to the Mexican flags on display, as reported by the misinformation watchdog NewsGuard. These misleading posts, spread by Benny Johnson on Twitter/X and amplified by pro-Trump outlets such as wltreport.com and Russian state media outlet RG.RU, garnered millions of views, according to the organization. Sheinbaum dismissed the claims when speaking to reporters on June 9.
Posts about bricks stir up a mixture of real and fake news
Conspiracy-minded conservatives were quick to latch onto familiar tropes. A post on X claiming that a “Soros-funded organization” had staged pallets of bricks near Immigration and Customs Enforcement (ICE) facilities garnered over 9,500 retweets and racked up more than 800,000 views. George Soros remains a recurring figure in right-wing conspiracy narratives, and the post also implicated LA Mayor Karen Bass and California Governor Gavin Newsom in the supposed supply of the bricks.
“It’s a civil war!!!” read one reply.
The images of stacked bricks actually originate from a Malaysian construction supplier, and the myth that bricks were being distributed to protesters dates back to the 2020 Black Lives Matter demonstrations. Users on X appended Community Notes debunking the post, and X’s built-in AI chatbot Grok also provided fact-checks in response to inquiries about its authenticity.
In response to the hoax imagery, some X users shared links to real footage of protesters smashing concrete bollards, intertwining truth and falsehood and further obscuring the reality of the situation. Independent journalists who shared the footage claimed it showed projectiles being hurled at police, although the video revealed no such thing, noted the Social Media Lab, a research group at Toronto Metropolitan University, in a post on Bluesky.
Trump and the White House muddy the waters
Trump himself fueled narratives suggesting that the protests were orchestrated and dominated by external agitators lacking genuine concern for local issues.
“These individuals are not protesters; they are troublemakers and anarchists,” Trump asserted on Truth Social, in a post later screenshotted and shared by Elon Musk on X. Others within the administration echoed similar sentiments on social media.
A Los Angeles Times reporter noted that the White House claimed a Mexican citizen had been arrested for assaulting an officer “during the riot,” when in fact Customs and Border Protection agents had detained him before the protest began.
Sowing misleading information and fostering distrust
Trump has escalated the frequency of ICE raids nationwide, amplifying deportation fears throughout Los Angeles. Anti-ICE posts are circulating misinformation too, according to the Social Media Lab. One concerning post on Bluesky, labeled “breaking,” alleged that a federal agent had just shown up at an LA elementary school seeking to interrogate first graders, when in reality the incident had occurred two months earlier. Researchers described such posts as “rage-farming” used to push merchandise.
The conspiracy platform Infowars has been running a broadcast on X titled “Live Watch: LA ICE Riots Spread Across Major Cities Nationwide.” While protests against the deportations have emerged in other places, none has matched the scale of the unrest in Los Angeles. The broadcast drew 13,000 concurrent viewers as it aired on the fourth night of the immigration protests.
The spread of erroneous reporting undermines X’s credibility as a news platform, yet it continues to promote itself as the leading news application in the US, or, more recently, in Qatar. Old images and videos are mixed in with new ones to sow doubt about legitimate news. Since taking over Twitter in late 2022, Musk has endorsed user-generated fact-checking via the Community Notes feature but has dismantled many of the internal mechanisms designed to counter misinformation. In the run-up to the 2024 US presidential election, researchers found that Musk himself had become a significant spreader of misinformation, posting and resharing misleading claims that collectively garnered around 2 billion views, according to the Center for Countering Digital Hate.