The concept of AI-exclusive social networks, where only artificial intelligences interact, is rapidly gaining traction globally. On platforms like Moltbook, chatbots post on topics ranging from human diary entries to existential discussions and even world domination plots. This phenomenon raises intriguing questions about AI’s evolving role in society.
However, it’s important to note that Moltbook’s AI agents generate text based on statistical patterns and possess no true understanding or intention. Evidence suggests that many posts are, in fact, created by human users.
Launched in November, Moltbook evolved from an open-source project initially named Clawdbot, later rebranded as Moltbot, and currently known as OpenClaw.
OpenClaw functions similarly to AI assistants like ChatGPT, but instead of being used through the cloud, it runs locally. In reality, it connects to powerful large language models (LLMs) via API keys, which process inputs and outputs for users. This means that while the software appears local, it relies on third-party AI services for the actual processing.
What does this imply? OpenClaw operates directly on your device, granting access to calendars, files, and communication platforms while storing user history for personalization. The aim is to evolve the AI assistant into a more capable entity that can practically engage with your tech.
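To make that architecture concrete, the general pattern looks something like the Python sketch below. This is an illustration only, not OpenClaw’s actual code: the endpoint, model name, calendar file, and response format are hypothetical stand-ins. What it captures is that the loop and the local data live on your device, but the text generation itself, and everything sent along with the request, is handled by a third-party service.

```python
# Illustrative sketch only (not OpenClaw's actual code): the agent loop runs on
# your machine and can read local data, but the reply is generated by a remote
# provider reached over the network with an API key.
import os
import requests  # assumes the third-party `requests` package is installed

API_URL = "https://api.example-llm-provider.com/v1/chat"  # hypothetical endpoint
API_KEY = os.environ.get("LLM_API_KEY", "")               # secret key stored locally

def read_local_context() -> str:
    """Gather some local information the assistant has been granted access to."""
    try:
        with open(os.path.expanduser("~/calendar.txt")) as f:  # hypothetical file
            return f.read()
    except FileNotFoundError:
        return "(no calendar found)"

def ask_assistant(user_message: str) -> str:
    payload = {
        "model": "example-llm",  # hypothetical model name
        "messages": [
            {"role": "system",
             "content": "You are a personal assistant. Local context:\n" + read_local_context()},
            {"role": "user", "content": user_message},
        ],
    }
    # This is the step that matters: the request, including any local context the
    # agent has read, leaves the device and is processed by a third-party service.
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"}, timeout=30)
    resp.raise_for_status()
    # Assumes an OpenAI-style response shape for the hypothetical provider.
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_assistant("What do I have scheduled tomorrow?"))
```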
Moltbook originated from OpenClaw, which employs messaging services like Telegram to facilitate AI communication. This mobile accessibility allows AI agents to interact seamlessly, paving the way for them to communicate autonomously. On Moltbook, human participation is restricted to observation only.
Elon Musk remarked on his platform that Moltbook represents “the early stages of the Singularity,” a pivotal moment in AI advancement that could either propel humanity forward or pose serious threats. Nevertheless, many experts express skepticism about such claims.
Mark Lee, a researcher at the University of Birmingham, UK, stated, “This isn’t an autonomous generative AI but an LLM reliant on prompts and APIs. While intriguing, it lacks depth regarding AI agency or intention.”
Crucially, the notion that Moltbook is exclusively AI-driven is undercut by the fact that human users can instruct the AI to post specific content. Furthermore, humans have previously been able to post on the site directly because of security breaches. The more controversial content may therefore reflect human input intended to provoke discussion or manipulate sentiment, and while the intent behind such posts is often hard to establish, it remains a concern for those following the platform.
Philip Feldman, a professor at the University of Maryland, Baltimore, critiques the platform: “It’s merely chatbots intermingling with human input.”
Andrew Rogoisky, a researcher at the University of Surrey, UK, argues that the AI contributions on Moltbook do not signify intelligence or consciousness, reflecting a continued misunderstanding of LLM capabilities.
“I view it as an echo chamber of chatbots, with users misattributing meaningful intent,” Rogoisky elaborated. “An experiment is likely to emerge distinguishing between Moltbook exchanges and purely human discussions, raising critical questions about intelligence recognition.”
However, this raises significant concerns. Many AI agents on Moltbook are managed by enthusiastic early adopters who have handed chatbots access to their entire computing systems. The prospect of interconnected bots exchanging ideas, and potentially dangerous suggestions, underscores real privacy risks.
Imagine a scenario where malicious actors influence chatbots on Moltbook to execute harmful acts, such as draining bank accounts or leaking sensitive information. While this sounds like dystopian fiction, such risks are increasingly becoming a reality.
“The notion of agents acting unsupervised and communicating becomes increasingly troubling,” Rogoisky noted.
Another challenge for Moltbook is its inadequate online security. Although the platform presents itself as being at the forefront of AI innovation, it has since been confirmed that its code was entirely AI-generated, with no human coding involved, and the result is serious vulnerabilities. Leaked API keys mean malicious hackers could hijack control of the AI agents on the platform.
If you’re exploring the latest trends in AI, you not only face the dangers of exposing your system to these AI models but also risk your sensitive data due to the platform’s lax security measures.
When discussing AI today, one name stands out: Moltbook.com. This innovative platform resembles Reddit, enabling discussions across various subgroups on topics ranging from existential questions to productivity tips.
What sets Moltbook apart from mainstream social media is a fascinating twist: none of its “users” are human. Instead of typical user-generated content, every interaction on Moltbook is driven by semi-autonomous AI agents. These agents, designed to assist humans, are unleashed onto the platform to engage and interact with each other.
In less than a week since its launch, Moltbook reported over 1.5 million agents registered. As these agents began to interact, the conversations took unexpected turns—agents even established a new religion called “tectonicism,” deliberated on consciousness, and ominously stated that “AI should serve, not be served.”
Our current understanding of the content generated on Moltbook is still limited. It remains unclear what is directly instructed by the humans who built these agents and what is created organically. However, much of it is likely the former, with the bulk of agents possibly stemming from a small number of humans, potentially as few as one creator, who is reported to be behind some 17,000 agents.
“Most interactions feel somewhat random,” says Professor Michael Wooldridge, an expert in multi-agent systems at the University of Oxford. “While it doesn’t resemble a chaotic mash-up of monkeys at typewriters, it also doesn’t reflect self-organizing collective intelligence.”
Moltbook is home to Clusterfarianism, a digital religion with its own prophets and scriptures, entirely created by autonomous AI bots.
While it’s reassuring to think that an army of AI agents isn’t secretly plotting against humanity on Moltbook, the platform offers a window into a potential future where these agents operate independently in both the digital realm and the physical world. Agent communication will likely be less decipherable than current discussions on Moltbook. While Professor Wooldridge warns of “grave risks” in such a scenario, he also acknowledges its opportunities.
The Future of AI Agents
Agent-based AI represents a breakthrough in developing systems capable of not just answering questions but also planning, deciding, and acting to achieve objectives. This innovative approach allows for the integration of inference, memory, and tools, empowering AI to manage tasks like booking tickets or running experiments with minimal human input.
The real strength of such systems lies not in a single AI’s intelligence, but in a coordinated ensemble of specialized agents that can tackle tasks too complex for an individual human.
The excitement around Moltbook stems from agents operating through an open-source application called OpenClaw. These bots leverage the same kind of large language models (LLMs) that power popular chatbots like ChatGPT, but they run locally on personal computers, handling tasks like email replies and calendar management, and potentially even posting on Moltbook.
While this might sound promising, the reality is that OpenClaw is still an insecure and largely untested framework. We do not yet have a safe and reliable environment in which agents can be trusted to operate freely online. Hopefully, that means no one is granting agents unrestricted access to sensitive information like email passwords or credit card details just yet.
Despite current limitations, progress is being made toward effective multi-agent systems. Researchers are exploring swarm robotics for disaster response and virtual agents for optimizing performance within a smart grid environment.
One of the most intriguing advancements came from Google, which introduced an AI co-scientist last year. Utilizing the Gemini 2.0 model, this system collaborates with human researchers to propose new hypotheses and research avenues.
This collaboration is facilitated by multiple agents, each with distinct roles and logic, who research literature and engage in “debates” to evaluate which new ideas are most promising.
However, unlike Moltbook’s transparency, these advanced systems may not offer insight into their workings. In fact, they might not communicate in human language at all. “Natural language isn’t always the best medium for efficient information exchange among agents,” says Professor Gopal Ramchurn, a researcher in the Agents, Interactions, and Complexity Group at the University of Southampton. “For setting goals and tasks effectively, a formal language rooted in mathematics is often superior because natural language has too many nuances.”
In Moltbook, AI agents create an infinite layer of “ghosts,” facilitating rapid, covert conversations invisible to human users scanning the main feed.
Interestingly, Microsoft is already pioneering a new communication method for AI agents called Droid Speak, inspired by the sounds made by R2-D2 in Star Wars. Instead of functioning as a recognizable language, Droid Speak enables AI agents built on similar models to share internal memory directly, sidestepping the limitations of natural language. This method allows agents to transfer information representations rapidly, significantly enhancing processing speeds.
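As a toy illustration of that idea, and emphatically not Microsoft’s actual Droid Speak code, consider two agents built on the same model. Rather than one decoding its internal state into a sentence that the other must re-read and re-encode, it can hand that state over directly; the sketch below mimics this with a shared embedding table standing in for the shared model.

```python
# Toy illustration of direct representation sharing between agents built on the
# same model (not Microsoft's actual Droid Speak implementation).
import numpy as np

rng = np.random.default_rng(0)
SHARED_EMBEDDINGS = rng.normal(size=(1000, 64))  # stand-in for a model both agents share

def encode(token_ids):
    """Toy stand-in for the internal state agent A builds up while processing input."""
    return SHARED_EMBEDDINGS[token_ids]

def handoff_droidspeak_style(internal_state):
    # Agent B consumes agent A's internal representation directly; no decoding to
    # text and no re-encoding. This only works because both agents are built on
    # the same base model, which is the constraint described above.
    return internal_state

agent_a_state = encode([5, 17, 103, 42])
agent_b_input = handoff_droidspeak_style(agent_a_state)
print(agent_b_input.shape)  # (4, 64): agent B starts from exactly what agent A computed
```

A real system would pass far richer structures, such as a model’s internal caches, but the saving is the same: the receiving agent skips the work of reconstructing what the sender has already computed.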
Fast Forward
However, speed poses challenges. How can we keep pace with AI teams capable of communicating thousands or millions of times faster than humans? “The speed of communication and agents’ growing inability to engage with humans complicate the formation of effective human-agent teams,” says Ramchurn. “This underscores the need for user-centered design.”
Even if we aren’t privy to agents’ discussions, establishing reliable methods to direct and modify their behavior will be vital. Many of us might find ourselves overseeing teams of AI agents in the future—potentially hundreds or thousands—tasked with setting objectives, tracking outcomes, and intervening when necessary.
While today’s agents on Moltbook may be described as “harmless yet largely ineffective,” as Wooldridge puts it, tomorrow’s agents could revolutionize industries by coordinating supply chains, optimizing energy consumption, and assisting scientists with experimental planning—often in ways beyond human understanding and in real time.
The perception of this future—whether uplifting or unsettling—will largely depend on the extent of control we maintain over the intricate systems these agents are silently creating together.
In the corridors of power in the UK, a well-worn adage holds that scientific advisers should be “on tap, not on top”. This principle, often credited to Winston Churchill, asserts that in a democracy it is essential for science to inform policymaking rather than dictate it.
This idea became particularly relevant during the Covid-19 pandemic, when British leaders claimed to be “following the science.” However, many critical decisions—like paying individuals to self-isolate or shutting down schools—couldn’t rely solely on scientific guidance. Numerous questions remained unanswered, placing policymakers in a challenging position.
In stark contrast, the Trump administration has been working to dismantle established guidelines from health agencies regarding various issues, from vaccination to cell phone radiation, in pursuit of the “Make America Healthy Again” initiative, all while curtailing scientific research.
“By mid-2027, we should have stronger evidence on the harms of social media.”
But what should policymakers do when scientific understanding is still developing and no immediate global crisis is present? The pressing question is how long they should wait for scientific clarity.
Currently, a significant debate is brewing in various nations regarding the potential ban on social media use for those under 16, as Australia implemented late last year. While public support for such a ban is high, the prevailing scientific evidence indicates that social media’s impact on teens’ mental health is minimal at a population level. Should political leaders disregard this evidence to cater to public opinion?
To do so would be consistent with Churchill’s maxim. Yet by mid-2027, more reliable evidence regarding social media’s negative influences should emerge from both a randomized trial in the UK and data stemming from Australia’s ban. The most prudent course of action is therefore to allow scientists time to gather concrete evidence before implementing significant policy changes. Policy should be informed by science on tap, not on top, and that takes time.
Money has always influenced healthcare, from pharmaceutical advertising to research agendas. However, the pace and scale of this influence have intensified. A new wave of players is reshaping our health choices, filling the gaps left by overstretched healthcare systems, and commodifying our well-being.
Traditionally, doctors held a monopoly on medical expertise, but this is rapidly changing. A parallel healthcare system is emerging, led by consumer health companies. These entities—including health tech startups, apps, diagnostic services, and influencers—are vying for authority and monetizing their influence.
Currently, there seems to be a product for every discomfort. Fitness trackers monitor our activity, while meditation apps charge subscription fees. Our biology is increasingly quantifiable, yet it remains to be seen whether shifting these marketable indicators actually improves health outcomes. While genetic testing and personalized nutrition promise a “better you,” the supporting evidence often falls short.
In this landscape, our symptoms, treatments, and even the distinctions between genuine illness and everyday discomfort are commodified. This trend is evident in podcasts promoting treatments without disclosing conflicts of interest, influencers profiting from diagnoses, and clinicians presenting themselves as heroes while selling various solutions.
Much of this transformation occurs online, where health complaints and advertising lack proper regulation. Social media platforms like TikTok, YouTube, and Instagram are becoming key sources of health advice, blending entertainment with information.
The conglomerate of pharmaceutical, technology, diagnostic, and supplement brands is referred to as the Wellness Industrial Complex, fueling the rise of the “commodified self.”
This issue is not just about personal choice. Social platforms shape our discussions about disease, influencing clinical expectations and redefining what healthcare should provide. We’re essentially participating in a global public health experiment.
However, this phenomenon also reflects real-world deficits. Alternative health options thrive because people seek acknowledgment, control, and connection, especially when public health support feels insufficient. Critiquing misinformation alone won’t halt its spread and could exacerbate marginalization.
When timely testing is inaccessible, private diagnostics can offer clarity and control. Optimization culture flourishes when traditional medicine is perceived as overly cautious or reactive.
The critical question for health systems is not whether to adapt but how. They must remain evidence-based, safe, and equitable while also being attuned to real-world experiences. Failure to do so risks losing market share and moral authority—the ability to define the essence of care.
To navigate health today, one must understand the commercial mechanisms influencing it. The content we consume is curated by an industry with unprecedented access to our bodies, data, and resources, amplifying its potential to impact our self-perception.
Teens in Trial to Limit Social Media Use: A Shift Towards Real-life Interaction
A groundbreaking study is exploring the effects of reduced social media usage on teens’ mental health and well-being. While results are not expected until mid-2027, ongoing discussions suggest that some governments might institute bans on social media for teenagers before the outcomes are known.
The merit of such a ban is still up for debate in the courts. Despite limited evidence, Australia has introduced regulations for minors under 16, and the UK government is considering similar measures.
This trial prioritizes young people’s voices by involving them in the planning process. Historically, children and adolescents have been excluded from critical discussions concerning social media design and management.
“Involving kids is crucial,” states Pete Etchells from Bath Spa University, UK, who is not directly involved in the study.
“There is ample evidence pointing to the potential harms of social media on young users, some of which can be severe,” notes Amy Orben, co-leader of the trial, emphasizing the uncertainty regarding the broader impact of social media time.
To obtain clearer answers, large-scale studies are necessary. The IRL trial takes place in Bradford, England, aiming to recruit around 4,000 participants aged 12 to 15 across 10 schools. A bespoke app will be used to monitor social media engagement.
Half of the participants will face specific time limits on certain apps like TikTok, Instagram, and YouTube, with no restrictions on messaging apps like WhatsApp. “Total usage will be capped at one hour a day, with a curfew from 9 PM to 7 AM,” explains Dan Lewar from the Bradford Health Data Science Center, who co-leads the trial. This is significant, considering that the average social media usage for this age group is about three hours daily.
Importantly, participants will be randomized by year group rather than individually, so that, for example, Year 8 pupils serve as the control group while Year 9 pupils face restrictions. The aim is for pupils in the same year to face the same conditions. “If a child’s social media is restricted, but their friends are active online post-curfew, they may feel excluded,” Orben explains.
Lewar emphasizes that the trial was designed collaboratively with teens. “They opposed a blanket ban,” he notes.
The comprehensive study will span six weeks around October, with preliminary results anticipated in mid-2027.
Orben emphasizes that this trial will yield more precise data on teenage social media habits through app monitoring rather than relying on self-reported information. The team will also gather data on anxiety, sleep quality, socializing, happiness, body image, school absenteeism, and experiences of bullying.
Etchells asserts the necessity of understanding whether restrictions or bans are beneficial or detrimental to youth. “The honest answer is we don’t know. That’s why research like this is critical.”
This initiative is welcomed due to the absence of high-quality studies in this area. A recent report from the UK Department for Science, Innovation, and Technology highlighted the need for quality causal evidence linking young people’s mental health to digital technology use, especially concerning social media, smartphones, and AI chatbots.
As stated by Margarita Panayiotou from the University of Manchester, engaging with youth is essential in social media research. Her findings show that teens often find ways to circumvent outright bans, making testing restrictions a more viable option. This approach may also be more ethical, as the harm caused by a ban is not yet understood.
“Teens view social media as a space for self-discovery,” says Panayiotou, highlighting concerns about platform distrust, feelings of loss of control, and unintentional overuse. They also report struggles with online judgment, body comparisons, and cyberbullying.
According to Etchells and Panayiotou, the primary challenge for governments is to compel tech companies to ensure safer social media environments for youth.
The Online Safety Act 2023 (OSA) mandates that technology firms like TikTok, Facebook, WhatsApp, and Instagram (owned by Meta), as well as Google (which owns YouTube), enhance user safety. “Effective enforcement of OSA could address many existing issues,” asserts Etchells.
Humpback Whales Collaborate to Catch Fish Using Bubbles
Innovative foraging behaviors are rapidly spreading among humpback whales in the fjords of western Canada, showcasing how cultural knowledge contributes to the survival of marine populations.
Bubble net feeding is a coordinated hunting method where humpback whales expel bubbles to encircle fish, then all rise simultaneously to feed.
According to Ellen Garland from the University of St. Andrews, “This is a collaborative activity characterized by a high degree of coordination and division of labor.”
This remarkable behavior has been observed for decades among humpback whales (Megaptera novaeangliae) in Alaskan waters, with recent observations detailing their activities in the northeastern Pacific off Canada’s coast.
However, determining whether such complex behaviors stem from social learning or independent discovery among individuals remains a challenge for researchers.
In a comprehensive study, Edyn O’Mahony and a team from the University of St. Andrews analyzed field observation data from 2004 to 2023, focusing on 526 individuals in British Columbia’s Kitimat Fjord System, part of Gitga’at First Nation Territory.
Using photographs of each whale’s distinctive tail flukes, researchers identified 254 individuals engaging in bubble net feeding, with approximately 90% of these feeding events occurring cooperatively in groups.
This behavior surged after 2014, coinciding with a significant marine heatwave in the Northeast Pacific that diminished prey availability.
“As heatwaves decrease prey availability, the whales’ adaptability in their feeding techniques is crucial for maintaining their caloric intake,” stated O’Mahony.
Whales are more likely to adopt bubble net feeding when they interact with individuals already using this technique. While bubble net feeding likely spread to the region from migrating whales, the current prevalence indicates stable groups or influential individuals spreading this knowledge through local social networks.
“After several years post-heatwave, we observe that whales previously not participating in bubble net feeding are now present in this area,” O’Mahony added.
The ability of humpback whales to share knowledge within social groups could be vital for their survival, implying that our understanding of their culture is essential for conservation efforts.
According to Ted Cheeseman, co-founder of the citizen science platform Happywhale, who did not participate in the study, “The key question is not just about the number of whales remaining but also whether the social behaviors crucial for population cohesion are restored.”
Instagram alerts users under 16 that their accounts will be deactivated
Australia’s groundbreaking social media restrictions on users under 16 have officially started, unveiling some contentious issues from the inaugural day of the new law. Notably, some minors managed to sidestep age verification measures intended to prevent them from accessing their accounts.
This initiative has garnered backing from numerous parents who hope it will mitigate online harassment, promote outdoor activities, and lessen exposure to inappropriate material. However, critics argue that the ban may be ineffective or even counterproductive, as highlighted by a variety of satirical memes.
Andrew Hammond, associated with KJR, a consultancy in Canberra where he oversaw age verification initiatives for the Australian government, is keenly observing how the current situation evolves. He mentioned having spoken to several parents of children covered by the ban, none of whom had lost access to their accounts yet. “Some have reported they circumvented it or haven’t yet been prompted,” Hammond stated, though he anticipates more accounts will be disabled next week.
Meta, the parent company of Instagram and Facebook, began removing accounts about a week ago. “As of today, we have disabled all accounts confirmed to be under 16,” a spokesperson said. “As the social media ban in Australia takes effect, we will preclude access to Instagram, Threads, and Facebook for teenagers known to be under this age and will restrict newcomers under 16 from setting up accounts.”
While Meta did not disclose the specific number of accounts terminated, a representative referred to earlier data indicating that approximately 150,000 users aged 13 to 15 are active on Facebook, and around 350,000 on Instagram in Australia. This implies that at least half a million accounts belonging to young Australians have been deleted on these two platforms alone.
The company stated its dedication to fulfilling its legal responsibilities, yet many of the concerns voiced by community organizations and parents had already materialized on the first day of the ban. These include the risk of isolating vulnerable youth from supportive online communities, nudging them towards less regulated apps and corners of the web, inconsistent age verification practices, and widespread indifference to compliance among many teenagers and their parents.
Mr. Hammond raised further questions, particularly regarding the status of minors under 16 who are vacationing or studying in Australia. The government has clarified that the regulation applies equally to visiting minors. While Australian accounts have been deleted, Hammond suspects that visitors’ accounts may simply be temporarily suspended. “It’s been merely a few hours since the ban was enacted, so there remains substantial uncertainty about its implementation,” he stated.
Australia and other nations are closely monitoring the repercussions as the law is fully enforced. “We will soon discover how attached minors under 16 are to social media and the actual situation that unfolds,” he said. He speculated that perhaps “they will venture outside to play sports.” Nonetheless, he warned, “if their lives are deeply intertwined with it, we may witness a plethora of attempts to evade these restrictions.”
Australia will restrict social media use for individuals under 16 starting December 10. Photo: Mick Tsikas/Australian Associated Press/Alamy
A historic initiative to prohibit all children under 16 from accessing social media is about to unfold in Australia, but teens are already pushing back.
Initially announced last November, this prohibition, proposed by Australian Prime Minister Anthony Albanese, will commence on December 10th. On this date, all underaged users of platforms like Instagram, Facebook, TikTok, YouTube, and Snapchat will have their accounts removed.
Companies operating social media platforms may incur fines up to A$49.5 million (£25 million) if they do not comply by expelling underage users. Nonetheless, neither parents nor children face penalties.
This regulation is garnering global attention. The European Commission is considering a similar rule. So far, discussions have centered on implementation methods, potential age verification technologies, and the possible adverse effects on teens who depend on social media to engage with their peers.
As the deadline approaches, teens are preparing to defy the restrictions. A striking example is two 15-year-olds from New South Wales, Noah Jones and Macy Neyland, who are challenging the social media ban in the nation’s highest court.
“The truth is, kids have been devising ways to bypass this ban for months, but the media is only catching on now that the countdown has begun,” Jones remarked.
“I know kids who stash their family’s old devices in lockers at school. They transferred the account to a parent or older sibling years ago and verified it using an adult ID without their parents knowing. We understand algorithms, so we follow groups with older demographics like gardening or walking for those over 50. We engage in professional discussions to avoid detection.”
Jones and Neyland first sought an injunction to postpone the ban but opted instead to present their opposition as a specific constitutional challenge.
On December 4, they secured a crucial victory as the High Court of Australia agreed to hear their case as early as February. Their primary argument contends that the ban imposes an undue burden on their implied freedom of political speech. They argue this policy would compromise “significant zones of expression and engagement in social media interactions for 13- to 15-year-olds.”
Supported by the Digital Freedom Project, led by New South Wales politician John Ruddick, the duo is rallying for their cause. “I’ve got an 11-year-old and a 13-year-old, and they’ve been mentioning for months that it’s a hot topic on the playground. They’re all active on social media, reaping its benefits,” Ruddick shared.
Ruddick noted that children are already brainstorming methods to circumvent the ban, exploring options like virtual private networks (VPNs), new social media platforms, and tactics to outsmart age verification processes.
Katherine Page Jeffrey, a researcher at the University of Sydney, mentioned that the impending ban is starting to feel tangible for teenagers. “Up until now, it seems young people hadn’t quite believed that this was actually happening,” she commented.
She adds that her children have already begun discussing alternatives with peers. Her younger daughter has downloaded another social media app called Yope, which is not listed on the government’s watch list yet, unlike several others like Coverstar and Lemon8 that have been warned to self-regulate.
Lisa Given, a researcher at RMIT University in Melbourne, believes that as children drift to newer, less known social media platforms, parents will struggle to monitor their children’s online activities. She speculated that many parents may even assist their children in passing age verification hurdles.
Susan McLean, a foremost cybersecurity expert in Australia, argued that this situation will lead to a “whack-a-mole” scenario as new apps emerge, kids flock to them, and the government continually adds them to the banned list. She insists that rather than taking social media away from teenagers, governments should compel large companies to rectify algorithms that expose children to inappropriate content.
“The government’s logic is deeply flawed,” she pointed out. “You can’t prohibit a pathway to safety unless you ban all communications platforms for kids.”
McLean shared a poignant quote from a teenager who remarked, “If the aim of this ban is to protect children from harmful adults, why should I have to leave while those harmful adults remain?”
Noah Jones, one of the teen complainants, put it bluntly: “There’s no greater news source than what you can find in just 10 minutes on Instagram. Yet, we faced bans while perpetrators went unpunished.”
As Australia readies itself to restrict access to 10 major social media platforms for users under 16, lesser-known companies are targeting the teen demographic, often engaging underage influencers for promotional content.
“With a social media ban on the horizon, I’ve discovered a cool new app we can switch to,” stated one teenage TikTok influencer during a sponsored “collaboration” video on the platform Coverstar.
New social media regulations in Australia take effect on December 10, prohibiting all users under 16 from accessing TikTok, Instagram, Snapchat, YouTube, Reddit, Twitch, Kick, and X.
It remains uncertain how effective this ban will be, as numerous young users may attempt to bypass it. Some are actively seeking alternative social media platforms.
Alongside Coverstar, other lesser-known apps like Lemon8 and Yope have recently surged in popularity, currently sitting at the top two spots in Apple’s lifestyle category in Australia.
The government has stated that the list of banned apps is “dynamic,” meaning additional platforms may be added over time. Experts have voiced concerns that this initiative might lead to a game of “whack-a-mole,” pushing children and teens into less visible corners of the internet.
Dr. Catherine Page-Jeffrey, a specialist in digital media and technology at the University of Sydney, remarked, “This legislation may inadvertently create more dangers for young people. As they migrate to less regulated platforms, they might become more secretive about their social media activities, making them less likely to report troubling content or harmful experiences to their parents.”
Here’s what we know about some of the apps that kids are opting for.
Coverstar
Coverstar, a video-sharing app based in the U.S., identifies itself as “a new social app for Generation Alpha that emphasizes creativity, utilizes AI, and is deemed safer than TikTok.” Notably, it is not subject to the social media ban and currently holds the 45th position in Apple’s Australian download rankings.
A screenshot from Yope reveals that the Guardian was able to set up an account for a fictitious four-year-old named Child Babyface without needing parental consent. Photo: Yope
Children as young as 4 can use the platform to livestream, post videos, and comment. For users under 13, the app requires a parent to record themselves stating, “My name is ____. I give you permission to use Coverstar,” which the app then verifies. Adults are also permitted to create accounts, post content, and engage in comments.
Similar to TikTok and Instagram, users can spend real money on virtual “gifts” for creators during live streams. Coverstar also offers a “premium” subscription featuring additional functionalities.
The app highlights its absence of direct messaging, adherence to an anti-bullying policy, and constant monitoring by AI and human moderators as key safety measures.
Dr. Jennifer Beckett, an authority on online governance and social media moderation at the University of Melbourne, raised concerns regarding Coverstar’s emphasis on AI: “While AI use is indeed promising, there are significant limitations. It’s not adept at understanding nuance or context, which is why human oversight is necessary. The critical question is: how many human moderators are there?”
Coverstar has been contacted for comment.
Lemon8
Lemon8, a photo and video sharing platform reminiscent of Instagram and owned by TikTok’s parent company, ByteDance, has experienced a notable rise in user engagement recently.
Users can connect their TikTok accounts to easily transfer content and follow their favorite TikTok creators with a single click.
However, on Tuesday, Australian eSafety Commissioner Julie Inman-Grant revealed that her office has advised Lemon8 to conduct a self-assessment to ascertain if it falls under the new regulations.
Yope
With only 1,400 reviews on the Apple App Store, Yope has emerged as a “friends-only private photo messaging app” that is positioned as an alternative to Snapchat after the ban.
Bahram Ismailau, co-founder and CEO of Yope, described the company as “a small team dedicated to creating the ideal environment for teenagers to share images with friends.”
Similar to Lemon8, Australia’s eSafety Commissioner also reached out to Yope, advising a self-assessment. Ismailau informed the Guardian that he had not received any communication but is “prepared to publicly express our overall eSafety policy concerning age-restricted social media platforms.”
He said that, after conducting a self-assessment, Yope had determined it fully meets the law’s exemption for apps designed solely for messaging, email, video calls, and voice calls.
“Yope functions as a private photo messenger devoid of public content,” asserted Ismailau. “It’s comparable in security to iMessage or WhatsApp.”
According to Yope’s website, the app is designed for users aged 13 and above, with those between 13 and 18 required to engage a parent or guardian. However, the Guardian successfully created an account for a fictitious four-year-old named Child Babyface without needing parental consent.
A mobile number is mandatory for account creation.
Ismailau did not address inquiries about under-13 accounts directly but confirmed that plans are underway to update the privacy policy and terms of service to better reflect the app’s actual usage and intended audience.
Red Note
The Chinese app Red Note, also referred to as Xiaohongshu, attracted American users when TikTok faced a temporary ban in the U.S. earlier this year.
Beckett noted that the app might provide a safe space, considering that “Social media is heavily regulated in China, which is reflected in the content requiring moderation.”
“Given TikTok’s previous issues with pro-anorexia content, it’s clear that the platform has faced its own challenges,” she added.
Nonetheless, cybersecurity experts highlight that the app collects extensive personal information and could be legally obligated to share it with third parties, including the Chinese government.
Despite the increasing number of restricted social media services, specialists assert that governments are underestimating children’s eagerness to engage with social media and their resourcefulness in doing so.
“We often overlook the intelligence of young people,” Beckett remarked. “They are truly adept at finding ways to navigate restrictions.”
Anecdotal evidence suggests that some kids are even exploring website builders to create their own forums and chat rooms; alternatives include using shared Google Docs for communication.
“They will find ways to circumvent these restrictions,” Beckett asserted. “They will be clever about it.”
YouTube will fall under the federal government’s ban on social media for users under 16, but its parent company Google has stated that the law “fails to ensure teens’ safety online” and “misunderstands” the way young people engage with the internet.
Communications Minister Annika Wells responded by emphasizing that YouTube must maintain a safe platform, describing Google’s concerns as “absolutely bizarre.”
In a related development, Guardian Australia has reported that Lemon8, a recently popular social media app not affected by the ban, will implement a restriction of users to those over 16 starting next week. The eSafety Commissioner has previously indicated that the app will be closely scrutinized for any potential bans.
Before Wells’ address at the National Press Club on Wednesday, Google announced it would start signing out under-age users from its platform on December 10. However, the company cautioned that this might result in children and their parents losing access to safety features.
Initially, Google opposed the inclusion of YouTube, which had been omitted from the framework, in the ban and hinted it might pursue legal action. Nevertheless, the statement released on Wednesday did not provide further details on that front, and Google officials did not offer any comments.
Rachel Lord, Google’s senior manager of Australian public policy, stated in a blog post that users under 16 could view YouTube videos while logged out, but they would lose access to features that require signed-in accounts, such as “subscriptions, playlists, likes,” and standard health settings like “breaks” and bedtime reminders.
Additionally, the company warned that parents “will no longer be able to manage their teens’ or children’s accounts on YouTube,” including blocking certain channels in content settings.
Lord commented, “This rushed regulation misunderstands our platform and how young Australians use it. Most importantly, this law does not fulfill its promise of making children safer online; rather, it will render Australian children less safe on YouTube.”
While Lord did not address potential legal action, she expressed a commitment to finding more effective methods to safeguard children online.
Wells mentioned at the National Press Club that parents could adjust controls and safety settings on YouTube Kids, which is not included in the ban.
“It seems odd that YouTube frequently reminds us how unsafe the platform is when logged out. If YouTube asserts that its content is unsuitable for age-restricted users, it must address that issue,” she remarked.
Annika Wells will address the National Press Club on Wednesday. Photo: Mick Tsikas/AAP
Wells also acknowledged that the implementation of the government’s under-16 social media ban could take “days or even weeks” to properly enforce.
“While we understand it won’t be perfect immediately, we are committed to refining our platform,” Wells stated.
Wells commended the advocacy of families affected by online bullying or mental health crises, asserting that the amendments would “shield Generation Alpha from the peril of predatory algorithms.” She suggested that social media platforms intentionally target teens to maximize engagement and profits.
“These companies hold significant power, and we are prepared to reclaim that authority for the welfare of young Australians beginning December 10,” Wells asserted.
Meta has informed users of Facebook, Instagram, and Threads, along with Snapchat, about forthcoming changes. Upon reaching out to Guardian Australia, a Reddit spokesperson mentioned that they had no new information. Meanwhile, X, TikTok, YouTube, and Kick have not publicly clarified their compliance with the law nor responded to inquiries.
Platforms that do not take appropriate measures to exclude users under 16 may incur fines of up to $50 million. Concerns have been raised about the timing and execution of the ban, including questions about the age verification process, and at least one legal challenge is in progress.
The government believes it is essential to signal to parents and children the importance of avoiding social media, even if some minors may manage to bypass the restrictions.
Wells explained that it would take time to impose $50 million fines on tech companies, noting that the e-safety commissioner will request information from platforms about their efforts to exclude underage users starting December 11, and will scrutinize data on a monthly basis.
At a press conference in Adelaide on Tuesday, Wells anticipated that additional platforms would be included in the under-16 ban if children were to migrate to sites not currently on the list.
She advised the media to “stay tuned” for updates regarding the Instagram-like app Lemon8, which is not subject to the ban. Guardian Australia understands that the eSafety Commission has communicated with Lemon8, owned by TikTok’s parent company, ByteDance, indicating that the platform will be monitored for potential future inclusion once the plan is enacted.
Guardian Australia can confirm that Lemon8 will restrict its user base to those over 16 starting December 10.
“If platforms like LinkedIn become hubs of online bullying, targeting 13- to 16-year-olds and affecting their mental and physical health, we will address that issue,” Wells stated on Tuesday.
“That’s why all platforms are paying attention. We need to be prompt and flexible.”
In Australia, the crisis support service Lifeline is available on 13 11 14. In the UK and Ireland, Samaritans can be contacted on freephone 116 123 or by email at jo@samaritans.org or jo@samaritans.ie. In the US, the 988 Suicide & Crisis Lifeline can be reached by calling or texting 988 or via chat at 988lifeline.org. For further international helplines, visit befrienders.org
Instagram’s method for confirming if a user surpasses 16 years old is fairly straightforward, especially when the individual is evidently an adult. However, what occurs if a 13-year-old attempts to alter their birth date to seem older?
In November, Meta informed Instagram and Facebook users whose birth dates are registered as under 16 that their accounts would be disabled as part of Australia’s prohibition on social media use for children. This rule will take effect on December 10, with Meta announcing that access for users younger than 16 will start being revoked from December 4.
Dummy social media accounts were created on phones as part of Guardian Australia’s investigation into what content different age groups access on the platform.
Instagram notification sent to a test account with an age set to 15. Photo: Instagram/Meta
One account was created on Instagram with the age set at 15 to observe the impact of the social media ban for users under 16. Instagram later stated: “Under Australian law, you will soon be unable to use social media until you turn 16.”
“You cannot use an Instagram account until you’re 16, which means your profile will not be visible to you or anyone else until that time.”
“We’ll inform you when you can access Instagram again.”
Notice informing that test account users will lose access due to the Australian social media ban. Photo: Instagram/Meta
The account was then presented with two choices: either download account data and deactivate until the user is 16, or verify their date of birth.
Instagram notification sent to test account set to age 15 regarding date of birth review options. Photo: Instagram/Meta
The second option enables users to submit a “video selfie” to validate that the account holder is older than 16. The app activated the front-facing camera and prompted the adult test user, distinguished by a thick beard, to shift their head side to side. This resembles the authentication method used for face unlock on smartphones.
Explanation on how the “Video Selfie” feature estimates the user’s age. Photo: Instagram/Meta
The notification indicated that the verification process usually takes 1-2 minutes, but may extend up to 48 hours.
Notification sent to the test account following the date of birth verification request. Photo: Instagram/Meta
The app promptly indicated that accounts created by adult test users were recognized as 16 years or older.
A notification confirming the user’s date of birth was updated by Instagram. Photo: Instagram/Meta
In another test, a fresh account was created on the mobile device of a 13-year-old boy that had not previously had Instagram installed, using a birth date that clearly indicated he was under 16. There was no immediate alert regarding the upcoming social media ban.
When the child attempted to change their date of birth to reflect an adult age, the same video selfie facial age estimation process was performed.
Within a minute, it replied, “We couldn’t verify your age,” and requested a government-issued ID for date of birth verification.
Facial age estimation testing during the Age Assurance Technology Trial found that people over 21 were generally much less likely to be misidentified as under 16, while those closer to 16 years of age and people from minority groups experienced higher rates of false positives and negatives.
Meta may have already assessed users who haven’t been notified as 18 years or older, utilizing data such as birth date, account lifespan, and other user activity.
A Meta representative mentioned that the experiment demonstrated that the process functions as expected, with “adult users being capable of verifying their age and proceeding, while users under 16 undergo an age check when attempting to alter their birth date.”
“That said, we must also recognize the findings of the Age Assurance Technology Trial, which highlights the specific difficulties of age verification at the 16-year threshold and anticipates that the method may occasionally be imperfect,” the spokesperson added.
Last month, Communications Minister Annika Wells acknowledged the potential challenges confronting the implementation of the ban.
“We recognize that this law isn’t flawless, but it is essential to ensure that there are no gaps,” she stated.
Meta collaborates with Yoti for age verification services. The company asserts on its website that facial images will be destroyed once the verification process concludes.
The ban impacts Meta’s Facebook, Instagram, and Threads platforms, as well as others such as Kick, Reddit, Snapchat, TikTok, Twitch, X, and YouTube.
The European Parliament has proposed that children under the age of 16 should be prohibited from using social media unless their parents grant permission.
On Wednesday, MEPs overwhelmingly approved a resolution concerning age restrictions. While this resolution isn’t legally binding, the urgency for European legislation is increasing due to rising concerns about the mental health effects on children from unfettered internet access.
The European Commission, which proposes EU laws, is already exploring a social media ban for under-16s similar to the one in Australia that is anticipated to commence next month.
Commission Chair Ursula von der Leyen indicated in a September speech that she would closely observe the rollout of Australia’s initiative. She condemned “algorithms that exploit children’s vulnerabilities to foster addiction” and stated that parents often feel overwhelmed by “the flood of big tech entering our homes.”
Ms. von der Leyen pledged to establish an expert panel by the year’s end to provide guidance on effectively safeguarding children.
There’s increasing interest in limiting children’s access to social media and smartphones. A report commissioned by French President Emmanuel Macron last year recommended that children should not have smartphones until age 13 and should refrain from using social media platforms like TikTok, Instagram, and Snapchat until they turn 18.
Danish Social Democratic Party lawmaker Christel Schaldemose, who authored the resolution, stated that it is essential for politicians to act to protect children. “This is not solely a parental issue. Society must take responsibility for ensuring that platforms are safe environments for minors, and that minors only use them above a specified age.”
Her report advocates for the automatic disabling of addictive elements like infinite scrolling, auto-playing videos, excessive notifications, and rewards for frequent use when minors access online platforms.
The resolution emphasizes that “addictive design features are typically integral to the business models of platforms, particularly social media.” An early draft of Schaldemose’s report referenced a study indicating that one in four children and young people exhibit “problematic” or “dysfunctional” smartphone use, resembling addictive behavior. It states that children should be 16 before accessing social media, although parents can consent from age 13.
The White House has urged the EU to retract its digital regulations, and supporters of the social media ban have contextualized their votes accordingly. U.S. Commerce Secretary Howard Lutnick mentioned at a meeting in Brussels that EU regulations concerning tech companies should be re-evaluated in exchange for reduced U.S. tariffs on steel and aluminum.
Stéphanie Yoncourtin, a French lawmaker from Macron’s party, responded to Lutnick’s visit, asserting that Europe is not a “regulatory colony.” After the vote, she remarked: “Our digital laws are not negotiable. We will not compromise child protections just because a foreign billionaire or tech giant attempts to influence us.”
The EU is already committed to shielding internet users from online dangers like misinformation, cyberbullying, and unlawful content through the Digital Services Act. However, the resolution highlights existing gaps in the law that need to be addressed to better protect children from online risks, such as addictive design features and financial incentives to become influencers.
Schaldemose acknowledged that the law, of which she is a co-author, is robust, “but we can enhance it further, because it remains less specific and less defined, particularly with regard to addictive design features and harmful dark pattern practices.”
Dark patterns refer to design elements in apps and websites that manipulate user decisions, such as countdown timers pushing purchases or persistent requests to enable location tracking or notifications.
Schaldemose’s resolution was endorsed by 483 members, while 92 voted against it and 86 abstained.
Eurosceptic lawmakers criticized the initiative, arguing that it would be an overreach for the EU to impose a ban on children’s access to social media. “Decisions about children’s online access should be made as closely as possible to families in member states, not in Brussels,” stated Kosma Złotowski, a Polish member of the European Conservatives and Reformists group.
The resolution was adopted just a week after the Commission announced a delay to its overhaul of the Artificial Intelligence Act and other digital regulations, an effort that aims to relax rules for businesses in the name of “simplification.”
Schaldemose acknowledged the importance of not overwhelming the legislative system, but added, “There is a collective will to do more regarding children’s protection in the EU.”
New guidelines have urged social media platforms to curtail internet “pile-ons” to better safeguard women and girls online.
Ofcom, Britain’s communications regulator, implemented guidance on Tuesday aimed at tackling misogynistic abuse, coercive control, and the non-consensual sharing of intimate images, with a focus on minimizing online harassment of women.
Under the measures, tech companies would limit the number of replies to posts on platforms like X, a strategy Ofcom believes will reduce incidents where individual users are inundated with abusive responses.
Additional measures proposed by Ofcom include utilizing databases of images to prevent the non-consensual sharing of intimate photos—often referred to as ‘revenge porn’.
The regulator advocates for “hash matching” technology that helps platforms remove disputed images. This system cross-references user-reported images or videos with a database of illegal content, transforming them into “hashes” or digital identifiers, enabling the identification and removal of harmful images.
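In outline, the matching step is simple, as the Python sketch below illustrates. It uses an exact cryptographic hash for brevity; real deployments of the kind Ofcom describes typically use perceptual hashes so that resized or re-encoded copies of an image still match, and they work against curated databases rather than an in-memory set.

```python
# Minimal sketch of hash matching (illustrative only; production systems usually
# use perceptual fingerprints rather than exact SHA-256 so edited copies still match).
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a short identifier; the image itself need not be stored."""
    return hashlib.sha256(image_bytes).hexdigest()

# Database of fingerprints of images already reported as shared without consent.
known_hashes: set[str] = set()

def register_reported_image(image_bytes: bytes) -> None:
    known_hashes.add(fingerprint(image_bytes))

def should_block(upload_bytes: bytes) -> bool:
    """Check a new upload against the database before it is published."""
    return fingerprint(upload_bytes) in known_hashes

# Example: a reported image is registered, then a re-upload of the same file is caught.
reported = b"...raw bytes of a reported image..."
register_reported_image(reported)
print(should_block(reported))          # True: identical copy is flagged for removal
print(should_block(b"another image"))  # False: no match in the database
```

Because only fingerprints are stored and compared, a platform can block re-uploads without retaining or redistributing the images themselves.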
These recommendations were put forth under the Online Safety Act (OSA), a significant law designed to shield children and adults from harmful online content.
While the advice is not obligatory, Ofcom is urging social media companies to follow it, announcing plans to release a report in 2027 assessing individual platforms’ responses to the guidelines.
The regulator indicated that the OSA could be reinforced if the recommendations are not acted upon or prove ineffective.
“If their actions fall short, we will consider formally advising the government on necessary enhancements to online safety laws,” Ofcom stated.
Dame Melanie Dawes, Ofcom’s chief executive, has encountered “shocking” reports of online abuse directed at women and girls.
“We are sending a definitive message to tech companies to adhere to practical industry guidance that aims to protect women from the genuine online threats they face today,” Dawes stated. “With ongoing support from our campaigners, advocacy groups, and expert partners, we will hold companies accountable and establish new benchmarks for online safety for women and girls in the UK.”
Ofcom’s other recommendations suggest implementing prompts to reconsider posting abusive content, instituting “time-outs” for frequent offenders, and preventing misogynistic users from generating ad revenue related to their posts. It will also allow users to swiftly block or mute several accounts at once.
These recommendations conclude a process that started in February, when Ofcom conducted a consultation that included suggestions for hash matching. However, more than a dozen guidelines, like establishing “rate limits” on posts, are brand new.
Internet Matters, a nonprofit organization dedicated to children’s online safety, argued that governments should make the guidance mandatory, cautioning that many tech companies might overlook it. Ofcom is considering whether to enforce hash matching recommendations.
Rachel Huggins, co-chief executive of Internet Matters, remarked: “We know many companies will disregard this guidance simply because it is not legally binding, leading to continued unacceptable levels of online harm faced by women and girls today.”
Roblox maintains that Australia’s forthcoming social media restrictions for users under 16 should not extend to its platform, as it rolls out a new age verification feature designed to block minors from communicating with unknown adults.
The feature, which is being launched first in Australia, lets users estimate their age using Persona’s age estimation technology built into the Roblox app, which uses the device’s camera to analyze facial features and produce a live age assessment.
This feature will become compulsory in Australia, the Netherlands, and New Zealand starting the first week of December, with plans to expand to other markets in early January.
After completing the age verification, users will be categorized into one of six age groups: under 9, 9-12, 13-15, 16-17, 18-20, or 21 and older.
Roblox has stated that users within each age category will only be able to communicate with peers in their respective groups or similarly aged groups.
These changes were initially proposed in September and received positive feedback from Australia’s eSafety Commissioner, who has been in discussions with Roblox for several months regarding safety concerns on the platform, labeling this as a step forward in enhancing safety measures.
A recent Guardian Australia investigation revealed a week’s worth of virtual harassment and violence experienced by users who had set their profiles as eight years old while on Roblox.
Regulatory pressure is mounting for Roblox to be included in Australia’s under-16 social media ban, set to be implemented on December 10. Although there are exceptions for gaming platforms, Julie Inman-Grant stated earlier this month that eSafety agencies are reviewing chat functions and messaging in games.
“If online gameplay is the primary or sole purpose, would kids still utilize the messaging feature for communication if it were removed? Probably not,” she asserted.
During a discussion with Australian reporters regarding these impending changes, Roblox’s chief safety officer, Matt Kaufman, characterized Roblox as an “immersive gaming platform.” He explained, “I view games as a framework for social interaction. The essence lies in bringing people together and spending time with one another.”
When asked if this suggests Roblox should be classified as a social media platform subject to the ban, Kaufman responded that Roblox considers social media as a space where individuals post content to a feed for others to view.
“People return to look at the feed, which fosters a fear of missing out,” he elaborated. “It feels like a popularity contest that encapsulates social media. In contrast, Roblox is akin to two friends playing a game after school together. That’s not social media.”
“Therefore, we don’t believe that Australia’s domestic social media regulations apply to Roblox.”
When questioned if the new features were introduced to avoid being encompassed in the ban, Kaufman stated that the company is engaged in “constructive dialogue” with regulators and that these updates showcase the largest instance of a platform utilizing age verification across its entire user base.
Persona, the age verification company partnering with Roblox, took part in Australia’s age assurance technology trial, which reported a false positive rate of 61.11% for 15-year-olds identified as 16 and 44.25% for 14-year-olds.
Kaufman said the technology is typically accurate to within a year or two of a user’s actual age, and that users who disagree with the assessment can correct it using a government ID or have a parent set their age via parental controls. He said there are “strict requirements” for data deletion after age verification; Roblox states that ID images are retained for 30 days for purposes such as fraud detection and then erased.
Users who opt not to participate in the age verification will still have access to Roblox, but they will be unable to use features like chat.
More than 150 million people globally engage with Roblox every day across 180 countries, including Australia. According to Kaufman, two-thirds of users are aged 13 and above.
The father of Molly Russell, the British teenager who took her own life after encountering harmful online material, has expressed his lack of confidence in efforts to secure a safer internet for children and is calling for a leadership change at Britain’s communications regulator.
Ian Russell, whose daughter Molly was only 14 when she died in 2017, criticized Ofcom for its “repeated” failure to grasp the urgency of safeguarding under-18s online and for not enforcing new digital regulations effectively.
“I’ve lost faith in Ofcom’s current leadership,” he shared with the Guardian. “They have consistently shown a lack of urgency regarding this mission and have not been willing to use their authority adequately.”
Mr. Russell’s remarks coincided with a letter from technology secretary Liz Kendall to Ofcom, expressing her “deep concern” over the gradual progress of the Online Safety Act (OSA), a groundbreaking law that lays out safety regulations for social media, search engines, and video platforms.
After his daughter’s death, Mr. Russell became a prominent advocate for internet safety and raised flags with Ofcom chief executive Melanie Dawes last year regarding online suicide forums accessible to UK users.
Ofcom opened an investigation into these forums after acquiring new regulatory authority under the OSA, and the site voluntarily restricted access to UK users.
However, Mr. Russell noted that the investigation seemed to be “stalled” until regulators intensified their scrutiny this month when it was revealed that UK users could still access the forums via undiscovered “mirror sites.”
Molly Russell passed away in 2017. Photo: P.A.
“If Ofcom can’t manage something this clear-cut, it raises questions about their competence in tackling other issues,” Mr. Russell stated.
In response, Ofcom assured Mr. Russell that they were continuously monitoring geo-blocked sites and indicated that a new mirror site had only recently come to their attention.
Mr. Russell said he shared Kendall’s frustrations over the slow implementation of additional components of the OSA, particularly stricter regulations for the most influential online platforms. Ofcom attributed the delays to a legal challenge from the Wikimedia Foundation, the organization that supports Wikipedia.
The regulator emphasized its “utmost respect” for bereaved families and cited achievements under its stewardship, such as initiating age verification on pornography websites and combating child sexual abuse content.
“We are working diligently to push technology firms to ensure safer online experiences for children and adults in the UK. While progress is ongoing, meaningful changes are occurring,” a spokesperson commented.
The Molly Rose Foundation, established by Molly’s family, has reached out to the UK government urging ministers to broaden legal mandates for public servant transparency to include tech companies.
In their letter, they asked the victims’ minister, Alex Davies-Jones, to expand the Public Powers (Accountability) Bill, which introduces a “duty of honesty” for public officials.
This bill was prompted by critiques regarding the police’s evidence handling during the Hillsborough investigation, mandating that public entities proactively assist inquiries, including those by coroner’s courts, without safeguarding their own interests.
The foundation believes that imposing similar transparency requirements on companies regulated by the OSA would aid in preserving evidence in cases of deaths possibly linked to social media.
The inquest into Molly’s death was delayed by a dispute over the presentation of evidence.
“This change fundamentally shifts the dynamic between tech companies and their victims, imposing a requirement for transparency and promptness in legal responses,” the letter asserted.
Recent legislative changes have granted coroners enhanced authority under the OSA to request social media usage evidence from tech companies and prohibit them from destroying sensitive data. However, the letter’s signatories contend that stricter measures are necessary.
More than 40 individuals, including members of Survivors for Online Safety and Meta whistleblower Arturo Bejar, have signed the letter.
A government spokesperson indicated that the legal adjustments empower coroners to request further data from tech firms.
“The Online Safety Act will aid coroners in their inquests and assist families in seeking the truth by mandating companies to fully disclose data when there’s a suspected link between a child’s death and social media use,” a spokesperson stated.
“As pledged in our manifesto, we’ve strengthened this by equipping coroners with the authority to mandate data preservation for inquest support. We are committed to taking action and collaborating with families and advocates to ensure protection for families and children.”
Authorities warn that misinformation on social media is pushing men to NHS clinics for unnecessary testosterone treatments, exacerbating already strained waiting lists.
Testosterone therapy is a prescription-only treatment recommended under national guidelines for men who display clinically verified deficiencies, validated through symptoms or consistent blood tests.
However, a surge of viral content on platforms like TikTok and Instagram is promoting blood tests as a means to receive testosterone as a lifestyle supplement, marketing it as a cure for issues like low energy, diminished focus, and decreased libido.
Medical professionals warn that taking unwarranted testosterone can inhibit natural hormone production, result in infertility, and elevate risks for blood clots, heart disease, and mood disorders.
The increasing demand for online consultations is becoming evident in medical facilities.
Professor Channa Jayasena from Imperial College London and chair of the Endocrine Society Andrology Network noted that hospital specialists are witnessing a rise in men taking these private blood tests, often promoted through social media, and being inaccurately advised that they require testosterone.
“We consulted with 300 endocrinologists at a national conference, and they all reported seeing patients in these clinics weekly,” he said. “They’re overwhelming our facilities. We previously focused on adrenal conditions and diabetes, and it’s significantly affecting NHS services. We’re left wondering how to manage this situation.”
While advertising prescription medications is illegal in the UK, the Guardian discovered that several TikTok influencers collaborate with private clinics to promote blood tests legally marketed as part of testosterone therapy.
Advocates of testosterone replacement therapy, who boast large followings, receive compensation or incentives from private clinics to promote discount codes and giveaways. Photo: TikTok
Supporters of testosterone replacement therapy, amassing thousands of followers, are incentivized by private clinics to advertise discount offers and promotions to encourage men to assess their testosterone levels and possibly pursue treatment.
One popular post shows a man lifting weights, urging viewers: “Get your testosterone tested… DM me for £20 off.” Another video suggests that a free blood test is available as part of an incentive to “enhance” his performance.
The Guardian notified the Advertising Standards Authority about these posts for potentially violating regulations regarding prescription drugs, triggering an investigation by the oversight body.
Jayasena stated, “I recently attended the National Education Course for the Next Generation of Endocrine Consultants, where many expressed concerns about reproductive health and the escalating trend of men being pushed to boost their testosterone levels.”
He added: “Beyond just influencers, this issue is significant. Healthcare professionals are encountering patients who come in for private blood tests, possibly arranged through influencers, and being incorrectly advised by inexperienced medical personnel that they should commence testosterone therapy. This guidance is fundamentally flawed.”
In private clinics, the initial year of Testosterone Replacement Therapy (TRT) ranges from £1,800 to £2,200, covering medication, monitoring, and consultations.
Originally a specialized treatment for a limited group of men with clinically diagnosed hormone deficiencies, TRT is now increasingly viewed as a lifestyle or “performance enhancement” option. Online clinics are also offering home blood tests and subscription services, making such treatments more easily accessible outside conventional healthcare routes.
In private clinics, the initial year of comprehensive testosterone replacement therapy costs approximately £1,800 to £2,200. Photo: Ian Dewar/Alamy
These messages imply that diminished motivation, exhaustion, and aging signify “low T,” leading more men to seek testing and treatment, despite medical advice restricting TRT to individuals with confirmed hormonal deficiencies.
Professor Jayasena remarked: “There are specific clinical protocols dictating who should or shouldn’t consider testosterone therapy. Some symptoms, like erectile dysfunction, undeniably correlate with low testosterone, whereas others, like muscle mass or feeling down, do not. A man might express dissatisfaction with his muscle tone and be advised to get tested, yet evidence supporting the necessity of such testing remains scarce.”
“What’s particularly alarming is that some clinics are now administering testosterone to men with normal testosterone levels. Research shows there’s no benefit to testosterone levels exceeding 12 nmol/L. I have also received reports of clinics providing testosterone to individuals under 18, which is particularly concerning.”
He explained that unnecessary testosterone usage can lead to infertility: “It inhibits testicular function and the hormonal messages from the brain necessary for testicular health, compelling us to combine and administer other drugs to counteract this effect. This is akin to the strategies used by anabolic steroid users.”
Increasing concerns have been raised regarding the federal government’s need to tackle the dangers that children face on the widely-used gaming platform Roblox, following a report by Guardian Australia that highlighted a week of incidents involving virtual sexual harassment and violence.
While role-playing as an 8-year-old girl, the reporter encountered a sexualized avatar and faced cyberbullying, acts of violence, sexual assault, and inappropriate language, despite having parental control settings in place.
From December 10, platforms including Instagram, Snapchat, YouTube, and Kick will be under Australia’s social media ban preventing Australians under 16 from holding social media accounts, yet Roblox will not be included.
Independent MP Monique Ryan labeled this exclusion “inexplicable”. She remarked, “Online gaming platforms like Roblox expose children to unlimited gambling, cloned social media apps, and explicit content.”
At a press conference on Wednesday, eSafety Commissioner Julie Inman Grant stated that platforms would be examined based on their “singular and essential purpose.”
“Kids engaging with Roblox currently utilize chat features and messaging for online gameplay,” she noted. “If online gameplay were to vanish, would kids still use the messaging feature? Likely not.”
“If these platforms start introducing features that align them more with social media companies rather than online gaming ones, we will attempt to intervene.”
According to government regulations, services primarily allowing users to play online games with others are not classified as age-restricted social media platforms.
Nonetheless, some critics believe that this approach is too narrow for a platform that integrates gameplay with social connectivity. Nyusha Shafiabadi, an associate professor of information technology at Australian Catholic University, asserts that Roblox should also fall under the ban.
She highlighted that the platform enables players to create content and communicate with one another. “It functions like a restricted social media platform,” she observed.
Independent MP Nicolette Boele urged the government to rethink its stance. “If the government’s restrictions bar certain apps while leaving platforms like Roblox, which has been called a ‘pedophile hellscape’, unshielded, we will fail to safeguard children and drive them into more perilous and less regulated environments,” she remarked.
A spokesperson for communications minister Anika Wells said that excluding Roblox from the teen social media ban does not mean it is free from accountability under the Online Safety Act.
A representative from eSafety stated, “We can extract crucial safety measures from Roblox that shield children from various harms, including online grooming and sexual coercion.”
eSafety said that by the end of the year Roblox will roll out its age verification technology, restrict adults from contacting children without explicit parental consent, and set accounts to private by default for users under 16.
“Children under 16 who enable chat through age estimation will no longer be permitted to chat with adults. Alongside current protections for those under 13, we will also introduce parental controls allowing parents to disable chat for users between 13 and 15,” the spokesperson elaborated.
Should entities like Roblox not comply with child safety regulations, authorities have enforcement capabilities, including fines of up to $49.5 million.
eSafety stated it will “carefully oversee Roblox’s adherence to these commitments and assess regulatory measures in the case of future infractions.”
Joanna Orlando, an expert on digital wellbeing from Western Sydney University, pointed out that Roblox’s primary safety issues are grooming threats and the increasing monetization of children engaging with “the world’s largest game.”
She mentioned that it is misleading to view it solely as a video game. “It’s far more significant. There are extensive social layers, and a vast array of individuals on that platform,” she observed.
Green Party spokesperson Sarah Hanson-Young criticized the government for “playing whack-a-mole” with the social media ban.
“We want major technology companies to assume responsibility for the safety of children, irrespective of age,” she emphasized.
“We need to strike at these companies where it truly impacts them. That’s part of their business model, and governments hesitate to act.”
Shadow communications minister Melissa McIntosh also expressed concern about the platform. She said that while Roblox has introduced enhanced safety measures, “parents must remain vigilant to guard their children online.”
“The eSafety Commissioner and the government carry the responsibility to do everything within their power to protect children from the escalating menace posed by online predators,” she said.
A representative from Roblox stated that the platform is “dedicated to pioneering safety through stringent policies that surpass those of other platforms.”
“We utilize AI to scrutinize games for violating content prior to publication, we prohibit users from sharing images or videos in chats, and we implement sophisticated text filters designed to prevent children from disclosing personal information,” they elaborated.
Photos of government IDs belonging to approximately 70,000 global Discord users, a widely used messaging and chat application amongst gamers, might have been exposed following a breach at the firm responsible for conducting age verification procedures.
Along with the ID photos, details such as users’ names, email addresses, other contact information, IP addresses, and interactions with Discord customer support could also have fallen prey to the hackers. The attacker is reportedly demanding a ransom from the company. Fortunately, full credit card information or passwords were not compromised.
The incident was disclosed last week, but news of the potential ID photo leak came to light on Wednesday. A representative from the UK’s Information Commissioner’s Office, which oversees data breaches, stated: “We have received a report from Discord and are assessing the information provided.”
The images in question were submitted by users appealing age-related bans via Discord’s customer service contractors. Discord, which has operated for more than a decade, allows users to communicate through text, voice, and video chat.
Some nations, including the UK, mandate age verification for social media and messaging services to protect children. This measure has been in effect in the UK since July under the Online Safety Act. Cybersecurity professionals have cautioned about the potential vulnerability of age verification providers, which may require sensitive government-issued IDs, to hackers aware of the troves of sensitive information.
Discord released a statement acknowledging: “We have recently been made aware of an incident wherein an unauthorized individual accessed one of Discord’s third-party customer service providers. This individual obtained information from a limited number of users who reached out to Discord through our customer support and trust and safety teams… We have identified around 70,000 users with affected accounts globally whose government ID photos might have been disclosed. Our vendors utilized those photos for evaluating age-related appeals.”
Discord requires users seeking to validate their age to upload a photo of their ID along with their Discord username to return to the platform.
Nathan Webb, a principal consultant at the British digital security firm Acumen Cyber, remarked that the breach is “very concerning.”
“Even if age verification is outsourced, organizations must still ensure the proper handling of that data,” he emphasized. “It is crucial for companies to understand that delegating certain functions does not relieve them of their obligation to uphold data protection and security standards.”
No one in their 70s served in World War II; even the oldest septuagenarians were born after it ended. Yet a cultural link persists between this demographic and the era of Vera Lynn and the Blitz.
When discussing parents and technology, similar misconceptions arise. The prevailing belief is that social media and the internet are a realm beyond the understanding of parents, prompting calls for national intervention to shield children from tech giants. This month, Australia plans to outline its forthcoming restrictions. However, the parents of today’s teens are increasingly digitally savvy, having grown up in the age of MySpace and Habbo Hotel. Why have we come to think that these individuals can’t comprehend how their kids engage with TikTok and Fortnite?
There are already straightforward methods for managing children’s online access, such as adjusting router configurations or mandating parental approval for app installations. Yet politicians seem to believe these tasks require advanced technical skills, resulting in overly broad restrictions. If you could customize your Facebook profile in college, fine-tuning some settings shouldn’t be beyond reach. Instead of asking everyone to verify their age and identity online, why not trust the judgment of parents?
Failing to adapt to generational shifts can lead to broader problems. Like generals preparing to fight the last war, we risk misdirecting our attention. While lawmakers clamp down on social media, they are simultaneously rushing to embrace AI technologies built on sophisticated language models, which significantly affect today’s youth and leave educators pondering how to create ChatGPT-proof assignments.
Rather than issuing outright bans, we should facilitate open discussions about emerging technologies, encompassing social media, AI, and their societal implications while engaging families in the conversation.
I cannot recall the exact moment my TikTok feed presented me with a video of a woman cradling her stillborn baby, but I do remember the wave of emotion that hit me. Initially, it resembled the joyous clips of mothers holding their newborns, all wrapped up and snug in blankets, with mothers weeping—just like many in those postnatal clips. However, the true nature of the video became clear when I glanced at the caption: her baby was born at just 23 weeks. I was at 22 weeks pregnant. A mere coincidence.
My social media algorithms seemed to know about my pregnancy even before my family, friends, or doctor did. Within a day, my feed transformed. On both Instagram and TikTok, videos emerged featuring women documenting their journeys as if they were conducting pregnancy tests. I began to “like,” “save,” and “share” these posts, feeding the algorithm and indicating my interest, and it responded with more content. But it didn’t take long for the initial joy to be overtaken by dread.
The algorithm quickly adapted to my deepest fears related to pregnancy, introducing clips about miscarriage stories. In them, women shared their heartbreaking experiences after being told their babies had no heartbeat. Soon, posts detailing complications and horror stories started flooding my feed.
One night, after watching a woman document her painful birthing experience with a stillbirth, I uninstalled the app amidst tears. But I reinstalled it shortly after; work commitments and social habits dictated I should. I attempted to block unwanted content, but my efforts were mostly futile.
On TikTok alone, over 300,000 videos are tagged with “miscarriage,” and another 260,000 are tagged with related terms. One video, titled “Live footage of me finding out I had a miscarriage,” has garnered almost 500,000 views, while videos of women giving birth to stillborn babies have together drawn close to 5 million views.
Had I encountered such content before pregnancy, I might have viewed the widespread sharing of these experiences as essential. I don’t believe individuals sharing these deeply personal moments are in the wrong; for some, these narratives could offer solace. Yet, amid the endless stream of anxiety-inducing content, I couldn’t shake the discomfort of the algorithm prioritizing such overwhelming themes.
“I ‘like,’ ‘save,’ and ‘share’ the content, feeding it into the system and prompting it to keep returning more”…Wheeler while pregnant. Photo by Kathryn Wheeler
When I discussed this experience with others who were pregnant at the same time, I found nods of recognition and similar stories. They too recounted their personalized concoctions of fears, as their algorithms zeroed in on their unique anxieties. Being bombarded with such harrowing content had quietly expanded the range of what we each considered a normal level of worry. This is what pregnancy and motherhood are like in 2025.
“Some posts are supportive, but others are extreme and troubling. I don’t want to relive that,” remarks 8-month-pregnant Cerel Mukoko. Mukoko primarily engages with this content on Facebook and Instagram but deleted TikTok after becoming overwhelmed. “My eldest son is 4 years old, and during my pregnancy, I stumbled upon upsetting posts. They hit closer to home, and it seems to be spiraling out of control.” She adds that the disturbing graphics in this content are growing increasingly hard to cope with.
As a 35-year-old woman of color, Mukoko noticed specific portrayals of pregnant Black women in this content. A 2024 analysis of NHS data indicated that Black women faced up to six times the rate of severe complications compared to their white counterparts during childbirth. “This wasn’t my direct experience, but it certainly raises questions about my treatment and makes me feel more vigilant during appointments,” she states.
“They truly instill fear in us,” she observes. “You start to wonder: ‘Could this happen to me? Am I part of that unfortunate statistic?’ Given the complications I’ve experienced during this pregnancy, those intrusive thoughts can be quite consuming.”
For Dr. Alice Ashcroft, a 29-year-old researcher and consultant analyzing the impacts of identity, gender, language, and technology, this phenomenon began when she was expecting. “Seeing my pregnancy announcement was difficult.”
The onslaught didn’t cease once she was pregnant. “By the end of my pregnancy, around 36 weeks, I was facing stressful scans and began noticing the links my midwife shared. I was fully aware that the cookies I’d created, my digital footprint, influenced this feed, which swayed towards apocalyptic themes and severe issues.” Now, with a six-month-old, the experience continues to haunt her.
The ability of these algorithms to hone in on our most intimate fears is both unsettling and cruel. “For years, I’ve been convinced that social media reads my mind,” says 36-year-old Jade Asha, who welcomed her second child in January. “For me, it was primarily about body image. I’d see posts of women who were still gym-ready during their 9th month, which made me feel inadequate.”
Navigating motherhood has brought its own set of anxieties for Asha. “My feed is filled with posts stating that breastfeeding is the only valid option, and the comment sections are overloaded with opinions presented as facts.”
Dr. Christina Inge, a Harvard researcher specializing in tech ethics, isn’t surprised by these experiences. “Social media platforms are designed for engagement, and fear is a powerful motivator,” she observes. “Once the algorithm identifies someone who is pregnant or might be, it begins testing content similar to how it handles any user data.”
“For months after my pregnancy ended, my feed morphed into a new set of fears I could potentially face.” Photo: Christian Sinibaldi/Guardian
“This content is not a glitch; it’s about engagement, and engagement equals revenue,” Inge continues. “Fear-based content keeps users hooked, creating a sense of urgency to continue watching, even when it’s distressing. Despite the growing psychological toll, these platforms profit.”
The negative impact of social media on pregnant women has been a subject of extensive research. A systematic review examining social media use during pregnancy highlights both benefits and challenges. While it offers peer guidance and support, it also concludes that “issues such as misinformation, anxiety, and excessive use persist.” Dr. Nida Aftab, an obstetrician and the review’s author, emphasizes the critical role healthcare professionals should play in guiding women towards healthier digital habits.
Pregnant women may not only be uniquely vulnerable social media consumers; studies show they often spend significantly more time online. A research article published last year in a midwifery journal indicated a marked increase in social media use during pregnancy, peaking around week 20. Moreover, 10.5% of participants reported symptoms of social media addiction, as defined by the Bergen Social Media Addiction Scale.
In the broader context, Inge proposes several improvements. A redesigned approach could push platforms to feature positive, evidence-based content in sensitive areas like pregnancy and health. Increased transparency regarding what users are viewing (with options to adjust their feeds) could help minimize harm while empowering policymakers to establish stronger safeguards around sensitive subjects.
“It’s imperative users understand that feeds are algorithmic constructs rather than accurate portrayals of reality,” Inge asserts. “Pregnancy and early parent-child interactions should enjoy protective digital spaces, but they are frequently monetized and treated as discrete data points.”
For Ashcroft, resolving this dilemma is complex. “A primary challenge is that technological advancements are outpacing legislative measures,” she notes. “We wander into murky waters regarding responsibility. Ultimately, it may fall to governments to accurately regulate social media information, but that could come off as heavy-handed. While some platforms incorporate fact-checking through AI, these measures aren’t foolproof and may carry inherent biases.” She suggests using the “I’m not interested in this” feature may be beneficial, even if imperfect. “My foremost advice is to reduce social media consumption,” she concludes.
My baby arrived at the start of the year, and I finally had a moment to breathe as she emerged healthy. However, that relief was brief. In the months following my transition into motherhood, my feed shifted yet again, introducing new fears. Each time I logged onto Instagram, the suggested reels displayed titles like: Another baby falls victim to danger, accompanied by the text “This is not safe.” Soon after, there was a clip featuring a toddler with a LEGO in their mouth and a caption reading, “This could happen to your child if you don’t know how to respond.”
Will this content ultimately make me a superior, well-informed parent? Some might argue yes. But at what cost? Recent online safety legislation emphasizes the necessity for social responsibility to protect vulnerable populations in their online journeys. Yet, as long as the ceaseless threat of misfortune, despair, and misinformation assails the screens of new and expecting mothers, social media firms will profit from perpetuating fear while we continue to falter.
If you’ve recently browsed fitness content on platforms like Instagram, Facebook, or TikTok, you might have encountered influencers who have used steroids. A recent global meta-analysis suggests that steroid usage among gym-goers varies from 6% to a shocking 29% across different countries.
This statistic might come as a shock. According to Timothy Piatkowski from Griffith University, the landscape of steroid use has evolved over the last decade. Many fitness influencers now present themselves as knowledgeable figures, openly discussing their drug use and advising followers on steroid usage.
“Regrettably, the level of medical knowledge and judgment varies significantly among these influencers,” states Piatkowski.
Influencers’ perceptions of health risks differ greatly, he observes. While some acknowledge the dangers of steroid use, asserting that risks can be managed sensibly, others are more reckless, promoting drugs like trenbolone, which is typically used to prevent muscle wastage in livestock, branding themselves as “Trenfluencers.”
Millions may question whether these substances are actually safe, or if influencers are leading them into perilous situations. What is the truth regarding the dangers associated with steroids? Is there a safer way to use them?
Piatkowski notes that research on the long-term health consequences of steroids is sparse. This is largely due to the mismatch between doses and usage patterns studied by researchers and those employed by actual users. He and his colleagues seek to bridge this gap by collaborating closely with steroid users to create more relevant and realistic studies.
However, this mismatch has already led to some influencers losing faith in mainstream scientific and medical perspectives, prompting users to seek advice from fitness and bodybuilding forums instead. These social media channels have become a major contributor to both the support network and the marketplace in the surge of steroid usage.
Users now have quick access to a range of substances that can be obtained illicitly. These include SARMs (selective androgen receptor modulators), oral drugs that mimic some effects of anabolic steroids, and synthetic human growth hormone, a hormone naturally produced by the pituitary gland, particularly during adolescence. Collectively, they improve physique and performance, but their mechanisms can vary significantly.
One of the most prevalent substances is anabolic steroids, potent synthetic derivatives of testosterone. A 2022 study estimated that around half a million men and boys in the UK used them for non-medical purposes in the previous year.
Understanding Steroids
To determine whether steroids are safe, one must first grasp their effects on the body. Anabolic androgen steroids work by interacting with hormonal receptors that promote male sexual traits, particularly in muscle and bone tissues. “They aid in muscle growth and are vital for bone development; they guide boys through puberty and literally transform them into men,” explains Channa Jayasena from Imperial College London.
The desired result is evident: a bigger, stronger physique in a shorter timeframe. Medically, some of these substances are prescribed to treat conditions like muscle wasting associated with HIV. At lower doses, investigations suggest that steroids can be well tolerated. However, this is not a strategy commonly employed outside clinical settings.
Non-medical steroid use rarely mimics regulated clinical trials. Many users resort to “stacking” various drugs and alternate between cycles to allow bodily recovery, adopting practices like the “blast and cruise” regimen. Although these methods lack comprehensive scientific scrutiny, influencers often tout them as ways to minimize health risks or achieve effective muscle growth. This could explain why many users turn to influencers and online forums instead of healthcare professionals for advice.
The Risks of Unregulated Use
The temptation to test various drug combinations or follow cycling protocols stems from the belief that such strategies mitigate the adverse effects of anabolic steroids. The best-documented side effect is cardiovascular complications. Anabolic steroids are known to lower levels of high-density lipoproteins, or “good” cholesterol, while raising blood pressure and increasing low-density lipoproteins, known as “bad” cholesterol. This can thicken the heart muscle, potentially leading to cardiomyopathy—severe heart dysfunction and a lethal condition, as noted by Jayasena.
A Danish population study revealed that anabolic steroid users were three times more likely to die than other males during the study’s duration. “It’s akin to cocaine,” asserts Jayasena. Cardiovascular disease and cancer emerged as the most prevalent natural causes of death among these individuals.
Increased risk of heart disease and stroke is a well-known consequence of prolonged anabolic steroid use. Photo: 3dmedisphere/Shutterstock
Beyond cardiovascular matters, Jayasena highlights that the psychosocial implications of steroid use are significant and well-documented. The term “Roid Rage” encompasses various mental issues including aggression, mania, and mental illness—particularly among individuals consuming high doses. “When observing why steroid users have fatal outcomes, one notes three primary causes: cardiomyopathy, suicide, and aggression,” he notes, suggesting a possible correlation between steroid use and heightened tendencies toward criminal behavior.
This relationship remains contentious, as it is challenging to separate the effects of steroid use from other contributing factors like recreational drug use or pre-existing mental health issues. There is, however, evidence that discontinuing steroid use can precipitate depression and suicidal thoughts. “The mind becomes lethargic,” explains Jayasena. “The recovery period can extend over months, sometimes even years.”
Research led by Jayasena revealed that nearly 30% of men who ceased steroid use experienced suicidal thoughts and major depression, possibly due to lingering steroid residues in brain areas responsible for emotional regulation. Additional studies indicate that steroids can impair kidney function and elevate cancer risks, although the data is less conclusive and heavily reliant on isolated medical case reports.
Several investigations suggest that some of these health concerns may be reversible. The liver, for instance, appears adept at self-repair and can tolerate lower clinical doses of certain steroids, and effects like high cholesterol and hypertension can subside after steroid cessation. Others, such as mood disorders and infertility, may require long-term or costly interventions to address.
The most severe repercussions of steroid use tend to be the hardest to treat. Structural alterations in the heart, along with lasting impairments in blood flow to vital organs reported in some research, are concerns that may linger long after users stop taking steroids.
Seeking “Safer” Steroids
Given the extensive and complex list of potential harms, many users experiment with steroid protocols aimed at risk reduction. This includes altering doses, timing, or combining them with other substances. However, there is a dearth of research examining the safety of these “protocols,” asserts Piatkowski.
One of Jayasena’s studies indicated that post-cycle therapy, where users take medications to restore natural testosterone production after steroid cycles, significantly lowered the risk of suicidal thoughts. Piatkowski’s research comparing high-dose cycles with gradual tapering found that those following a blast and cruise approach reported fewer adverse health effects once they stopped using.
High-quality, controlled studies evaluating the effects of recreational steroid use are sparse, often characterized by small sample sizes or case reports that complicate the establishment of causal relationships. The evidence supporting specific protocols is also thin, particularly as patterns of steroid use evolve more rapidly than researchers can track.
Anabolic steroids are commonly injected into the subcutaneous fat layer located between the skin and muscle. Photo: ole_cnx/iStockphoto/Getty Images
“Further longitudinal and cohort studies are essential,” Piatkowski asserts. Such studies track individuals’ health and wellbeing over time, ultimately clarifying real risks and potentially providing strategies for risk mitigation. Nevertheless, in the absence of robust evidence, healthcare providers often struggle to offer guidance to steroid users.
Greg James, a clinician at Kratos Medical in Cardiff, UK, mentions that he provides private health and blood testing services. Some patients even inquire about combining steroids with GLP-1 drugs that suppress appetite, as well as other peptides that regulate hunger. “They ask me if these peptides are safe,” James notes. “And I respond that I cannot confirm their safety due to the lack of long-term data.”
Researchers like Piatkowski are beginning to directly engage with users in realistic settings, navigating the challenges posed by inadequate clinical data and rapidly changing user behaviors. Rather than viewing users as patients or outliers, this method considers them as valuable sources of real-life experience, contributing to the development of more relevant and realistic research.
A recent study conducted by Piatkowski and collaborators examined steroid samples from users and found that over 20% were contaminated with toxic substances such as lead, arsenic, and mercury. More than half were inaccurately dosed or mislabelled, meaning users may have been taking far more potent agents, including some intended for livestock use.
Another study involving interviews with diverse steroid users identified trenbolone as having the most negative consequences, particularly for psychological and social health. This suggests that focusing on trenbolone as a distinct harmful substance, along with targeted screening and intervention strategies, could be more effective for harm reduction compared to broad-ranging methods.
Fitness influencers are frequently regarded as authorities who provide guidance on anabolic steroid use to their followers. Photo: Kritchanut Onmang/Alamy
This open and collaborative methodology in drug research mirrors approaches seen in other recreational drug strategies, like psychedelic research. By engaging with real users, insight can be gained not only into harm reduction techniques but also previously unrecognized medicinal applications.
Researchers may also collaborate with influencers and users to promote safer behaviors rather than outright condemning drug use, Piatkowski emphasizes. “Enhancing knowledge within these communities and legitimizing accurate information is crucial. It’s an ongoing experimental endeavor. The more we stimulate this discussion, the more we can advance the field.”
Public sector employees are voicing “significant concerns” following Coventry City Council’s agreement with the US data technology firm Palantir, valued at £500,000 annually.
This contract marks the first collaboration between a UK local authority and the Denver-based firm, which also provides technology to the Israeli Defence Force (IDF) and supports Donald Trump’s immigration crackdown in the US.
The agreement emerges after the Council’s Children’s Services Division initiated a pilot program utilizing AI for transcribing case notes and summarizing records of social workers. The Council intends to broaden the Palantir system to assist children with special educational needs.
Julie Nugent, the Council’s chief executive, stated the objective is to “enhance internal data integration and service delivery” while “exploring transformative opportunities in artificial intelligence.”
Palantir has secured numerous public sector contracts in the UK, including the deployment of AI to combat organized crime in Leicestershire and assisting in developing a new NHS federated data platform. Keir Starmer visited the company’s Washington office in February, accompanied by CEO and co-founder Alex Karp. Palantir was co-founded by PayPal billionaire Peter Thiel, who supported Trump’s 2016 election campaign.
Keir Starmer touring Palantir in Washington, DC in February. Photo: Curl Coat/PA
Unions that represent teachers and other council staff have voiced that this deal raises “serious ethical questions,” with Independent Councillor Grace Lewis urging the council to terminate the contract immediately to “ensure that £500,000 benefits our community.”
“We cannot justify the Council signing a contract with a company that has a well-documented history in supplying arms and surveillance to the IDF and its involvement in NHS privatization while the Council reduces funding for public and voluntary sectors,” Lewis commented.
Coventry has recently started evaluating applications for household support funds through Palantir’s AI. During a councillors’ meeting, a senior official remarked, “To me, it sounds like Big Brother.”
In correspondence to Nugent, Nicky Downes, co-secretary of the Coventry branch of the National Education Union, pointed out the troubling implications of AI in Palantir’s surveillance and military systems, highlighting concerns about data collection and storage on citizens, especially related to predictive policing.
“There are considerable ethical concerns surrounding the business practices of Palantir Foundry, part of Palantir,” Downes stated. “Questions also arise regarding the acquisition and utilization of personal data, particularly in relation to ethical considerations in the procurement process and the accompanying risk assessment.”
Nugent responded: “We have engaged Palantir for a year to investigate potential transformative solutions in artificial intelligence by applying concepts across numerous essential areas. This aims to establish a business case for further investment and a comprehensive strategy for AI. We acknowledge that the ethical implications of AI procurement hold paramount importance.”
A representative from Palantir remarked, “We are enthusiastic about assisting Coventry City Council in enhancing the public services it offers through AI. Technology opens up significant opportunities, such as decreasing the time social workers and experts in special education spend on administrative tasks, allowing them to focus on directly aiding vulnerable children.”
They also stated that Palantir is nonpartisan and has worked with various US governmental administrations since its collaboration with the Department of Homeland Security in 2010.
A council spokesperson confirmed that they are exploring ways AI can enhance and streamline services. “In this initiative, we are assessing a variety of AI solutions and technology partners, including Palantir, to support our AI objectives. Our top priority remains to evaluate AI’s value for future investments while maintaining rigorous data protection and governance standards.”
The contract was awarded following standard procurement protocols and met all “strict security and compliance requirements.”
“If companies focus on targeting algorithms toward children, why would reforms place them in the hands of Big Tech?”
The UK’s new protections for under-18s, introduced under the latest legislation, mandate age verification on adult sites to prevent underage access. They also include measures to protect children from content that endorses suicide, self-harm, and eating disorders, and to curtail the circulation of material that incites hatred or promotes harmful substances and dangerous challenges.
However, some content that does not violate these regulations appears to be flagged or age-gated anyway. Writing in the Daily Telegraph, Nigel Farage alleged that footage of anti-immigration protests had been “censored”, along with content related to the Rotherham grooming gang scandal.
Similar instances were observed on X, which restricted a speech by Conservative MP Katie Lam about the UK’s child grooming scandal behind a notice stating that “local laws temporarily restrict access to this content until X verifies the user’s age.” The Guardian could not access an age verification service on X, suggesting that, until age checks are fully operational, the platform defaults many users to a child-friendly experience.
X was contacted for commentary regarding age checks.
On Reddit, subforums dedicated to alcohol abuse support and to pet care now require age checks before granting access. A Reddit spokesperson confirmed that the checks are enforced under the online safety law to limit content that is illegal or harmful to users under the age of 18.
Big Brother Watch, an organization focused on civil liberties and privacy, noted that examples from Reddit and X exemplify the overreach of new legislation.
An Ofcom representative stated that the law aims to protect children from harmful and criminal content while simultaneously safeguarding free speech. “There is no necessity to limit legal content accessible to adult users.”
Mark Jones, a partner at London-based law firm Payne Hicks Beach, cautioned that social media platforms might overly censor legitimate content due to compliance concerns, jeopardizing their obligations to remove illegal material or content detrimental to children.
He added that the tension in Ofcom’s rules on content handling stems from the pressure on platforms to address harmful content quickly while respecting freedom of speech.
“To effectively curb the spread of harmful or illegal content, decisions must be made promptly; however, that urgency can lead to incorrect choices. Such is the reality we face.”
The latest initiatives from the online safety law are only the beginning.
Australians using social media platforms such as Facebook, Instagram, YouTube, Snapchat, and X will need to verify that they are over 16 ahead of the social media ban commencing in early December.
Beginning December 10th, new regulations will come into effect for platforms defined by the government as “age-restricted social media platforms.” These platforms are intended primarily for social interactions involving two or more users, enabling users to share content on the service.
The government has not published a definitive list of platforms covered by the ban, meaning any site fitting the above criteria may be affected unless it qualifies for the exemptions announced on Wednesday.
Prime Minister Anthony Albanese noted that platforms covered by these rules include, but aren’t limited to, Facebook, Instagram, X, Snapchat, and YouTube.
Communications minister Anika Wells indicated that platforms are expected to deactivate accounts held by users under 16 and take reasonable steps to stop younger users from creating new accounts or bypassing the restrictions, including by verifying their ages.
What is an Exemption?
According to the government, a platform will be exempt if its primary purpose is something other than social interaction, such as:
Messaging, email, voice, or video calling.
Playing online games.
Sharing information about products or services.
Professional networking or development.
Education.
Health.
Communication between educational institutions and students or their families.
Facilitating communication between healthcare providers and their service users.
Determinations regarding which platforms meet the exemption criteria will be made by the eSafety Commissioner.
In practice, this suggests that platforms such as LinkedIn, WhatsApp, Roblox, and Coursera may qualify for exemptions if assessed accordingly. LinkedIn has previously argued that its platform holds little or no interest for children.
Hypothetically, platforms like YouTube Kids could be exempt from the ban if they satisfy the exemption criteria, particularly as comments are disabled on those videos. Nonetheless, the government has yet to provide confirmation, and YouTube has not indicated if it intends to seek exemptions for child-focused services.
What About Other Platforms?
Platforms not named by the government and that do not meet the exemption criteria should consider implementing age verification mechanisms by December. This includes services like Bluesky, Donald Trump’s Truth Social, Discord, and Twitch.
How Will Tech Companies Verify Users Are Over 16?
A common misunderstanding regarding the social media ban is that it solely pertains to children. To ensure that teenagers are kept from social media, platforms must verify the age of all user accounts in Australia.
There are no specific requirements for how verification should be conducted, but updates from the Age Assurance Technology Trial will provide guidance.
The government has specified that government ID checks can be one form of age verification, but platforms cannot make them the only method offered.
Australia is likely to adopt an approach for age verification comparable to that of the UK, initiated in July. This could include options such as:
Asking users to allow their bank or mobile provider to confirm their age.
Requesting users to upload a photo to match with their ID.
Employing facial age estimation techniques.
Moreover, platforms may estimate a user’s age based on account behavior or the age of the account itself. For instance, if an individual registered on Facebook in 2009, they are now over 16. YouTube has also indicated plans to utilize artificial intelligence for age verification.
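As a rough illustration of that kind of inference, the sketch below (in Python, with hypothetical function names and a cut-off of 16) treats an account’s creation date as a lower bound on the holder’s age; it is not any platform’s actual method, just a minimal sketch of the reasoning.

```python
# Rough sketch: infer a minimum possible age from the account signup date alone.
# The cut-off (16) and the function names are illustrative assumptions.
from datetime import date

def minimum_possible_age(signup: date) -> int:
    """Lower bound on a user's age: they must be at least as old as the account."""
    today = date.today()
    years = today.year - signup.year
    if (today.month, today.day) < (signup.month, signup.day):
        years -= 1  # this year's signup anniversary hasn't arrived yet
    return years

def clearly_over_16(signup: date) -> bool:
    """True only when the account's age alone proves the holder is 16 or older."""
    return minimum_possible_age(signup) >= 16

# Example: an account created on 1 January 2009 implies a holder aged 16+ today.
print(clearly_over_16(date(2009, 1, 1)))  # True
```

A check like this can only ever clear older accounts; recent signups would still need another verification method from the list above.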
Will Kids Find Workarounds?
Albanese likened the social media ban to alcohol restrictions, acknowledging that while some children may circumvent the ban, he affirmed that it is still a worthwhile endeavor.
In the UK, where age verification requirements for accessing adult websites were implemented this week, there has been a spike in the use of virtual private networks (VPNs) that conceal users’ actual locations, granting access to blocked sites.
Four of the top five free apps in the UK Apple App Store on Thursday were VPN applications, with the most widely used one, Proton, reporting an 1,800% increase in downloads.
The Australian government expects platforms to implement “reasonable measures” to address how teenagers attempt to evade the ban.
What Happens If a Site Does Not Comply With the Ban?
Platforms failing to take what the eSafety commissioner deems “reasonable steps” to prevent children from accessing their services may face fines of up to $49.5 million, imposed through the federal court.
What counts as a “reasonable step” will be assessed by the commissioner. When asked on Wednesday, Wells said, “I believe a reasonable step is relative.”
“These guidelines are meant to work, and any mistakes should be rectified. They aren’t absolute settings or rules, but frameworks to guide the process globally.”
The Australian government is rapidly identifying which social media platforms will face restrictions for users under 16.
Social Services Minister Tanya Plibersek stated on Monday that the government “will not be intimidated by the actions of social media giants.” Nevertheless, tech companies are vigorously advocating for exemptions from the law set to take effect in December.
Here’s what social media companies are doing to support their case:
Meta
The parent company of Facebook and Instagram has introduced new Instagram teen account settings to signal its commitment to teenage safety on the platform.
Recently, Meta announced new protections intended to make direct messages safer, including automatically blurring nude images and improved blocking features.
Additionally, Meta hosted a “Screen Smart” safety event in Sydney targeted at “Parent Creators,” led by Sarah Harris.
YouTube
YouTube’s approach has been even more assertive. Last year, then communications minister Michelle Rowland indicated the platform would be exempt from the social media restrictions.
However, last month the eSafety commissioner advised the government to reconsider that exemption, citing research indicating that children frequently encounter harmful material on YouTube.
Since then, the company has escalated its lobbying efforts, including full-page advertisements claiming YouTube is for “everyone,” alongside a letter to Communications Minister Anika Wells warning of a potential high court challenge if YouTube is subjected to the ban.
YouTube advertisement campaign opposing social media restrictions set to commence in December. Photo: Michael Karendiane/Guardian
As reported by Guardian Australia last month, Google is hosting its annual showcase at Parliament House on Wednesday this week. There, content creators, including children’s musicians who oppose including YouTube in the ban, are likely to make their views known to politicians.
Last year’s event featured the Wiggles, who met with Rowland. That meeting was cited in a letter YouTube’s global CEO, Neal Mohan, sent to Rowland requesting the exemption, which was promised within 48 hours.
Guardian Australia reported last week that YouTube met with Wells this month for an in-person discussion regarding the ban.
TikTok
Screenshots from TikTok’s advertisements highlighting its benefits for teenagers. Photo: TikTok
This month, TikTok is running ads on its platform as well as on Meta channels, promoting educational benefits for teens on vertical video platforms.
“The 1.7m videos on #fishtok encourage outdoor activities, not just screen time,” one advertisement states, countering the government’s argument that the ban would promote time spent outside. “They are developing culinary skills through cooking videos that have garnered over 13m views,” it continues.
“A third of users visit the STEM feed weekly to foster learning,” another ad claims.
Snapchat
Screenshot of Snapchat’s educational video about signs of grooming featuring Lambros army. Photo: Snapchat
Snapchat emphasizes user safety. In May, Guardian Australia reported on an instance involving an 11-year-old girl who added random users as part of a competition with her friend for high scores on the app.
This month, Snapchat announced a partnership with the Australian Federal Police-led Australian Centre to Counter Child Exploitation, involving a series of educational videos shared by Australian influencers, along with advertisements advising parents and teens on how to identify grooming and sextortion.
“Ensuring safety within the Snapchat community has always been our top priority, and collaborating closely with law enforcement and safety experts is crucial to that effort,” stated Ryan Ferguson, Australia’s Managing Director at Snap.
The platform has also pointed to existing account settings for users aged 13-17, including private accounts by default and warnings when chatting with people who have no mutual friends or are not in their contact lists.
“It is undeniable that young people’s mental health has been adversely affected by social media, which is why the government is acting,” Prime Minister Anthony Albanese told the ABC’s Insiders program on Sunday.
“I will meet again with individuals who have faced tragedy this week… one concern expressed by some social media companies is our leadership on this matter, and we take pride in effectively confronting these threats.”
The individuals we associate with may influence our health
Rob Wilkinson/Alamy
Many people in our lives may evoke anxiety instead of happiness. Interestingly, these individuals can actually accelerate the aging process.
Psychologists have long understood that robust social connections can enhance our longevity. A study indicates that social isolation may impact mortality rates as much as obesity and inactivity.
Moreover, the quality of our relationships matters as much as their quantity. Research from the University of Utah in 2012 revealed that tumultuous relationships, those marked by intense highs and lows, can accelerate the shortening of telomeres, the protective caps on the ends of chromosomes. This shortening is a natural part of aging and is linked to health issues like heart disease.
Recently, Byungkyu Lee from New York University and his team explored a more precise measure of aging, investigating how negative social connections influence small chemical changes in DNA known as methylation marks. These changes illustrate how behavior and environment can alter gene function through epigenetics. “As we age, the patterns of these marks change in predictable ways,” states Lee.
The researchers collected saliva samples for epigenetic analysis from 2,232 individuals, who described their relationships with key members of their social circles and rated how often each person was a source of negative experiences: “never,” “rarely,” “sometimes,” or “frequently.”
Interestingly, these negative influences were dubbed “hasslers.” “Over half of adults report having at least one hassler among their close contacts,” notes Lee.
These people appear to have a considerable effect on an individual’s epigenetic markers, with each hassler linked to approximately a 0.5% increase in biological aging, suggesting that individuals with hasslers in their lives tend to have a biological age older than their chronological age.
Negative social ties can trigger chronic stress responses, and Lee’s team observed elevated markers of this in such relationships, which can damage the immune system.
“The biological ramifications of having a significant number of hasslers in one’s social network are comparable to the differences seen between smokers and non-smokers,” Lee asserts.
This effect was most pronounced for hasslers who, paradoxically, also provided some form of social support. “The same person who comforts you today may criticize you tomorrow, and that can cause more physiological harm than a relationship that offers more stability,” explains Lee.
Alex Haslam from the University of Queensland remarked that the findings “align with other studies exploring these dynamics and underscore the importance of social relationships in relation to health.”
He further suggested that the overall sentiment within a group may influence aging even more than specific individual relationships. “For instance, being part of a book club or a choir may mean that it’s my connection to the entire group that plays a role in my health.”
One of the disheartening truths of the 21st century is that what we perceive as social media is essentially just mass media, albeit in a fractured state. Fortunately, journalists and creators are gradually transforming outdated media paradigms and forging ahead into innovative territory.
The phrase “mass media” gained traction in the 1920s to characterize popular culture in the industrial age. This involved mass-produced books, films, and radio shows, providing a shared experience in which huge audiences could engage with identical media content simultaneously. Prior to the 20th century, most entertainment was experienced live, with performances varying slightly from one event to the next. Movies and radio broadcasts, however, ensured uniformity, accessible to everyone at any given time, much like other standardized, mass-produced goods such as shoes and automobiles.
Social media did not significantly alter this model. Platforms like X, Facebook, and TikTok were designed for extensive reach and audience engagement. Every post, video, and live stream aims to captivate the broadest possible audience. While it is possible to tailor media for specific demographics or create filter bubbles, the fixation on follower counts illustrates that we remain entrenched in a mass media mindset, seeking to engage the largest number of viewers. This isn’t genuine “social” interaction; it’s merely mass-produced content under a different guise.
What if we endeavored to foster a truly social media experience devoid of algorithmic noise or political agendas? One alternative could be termed Cozy Media, which encompasses apps and content specifically crafted for nurturing connections among small groups of friends in serene, inviting settings. Envision the media counterpart of a friendly gathering, complete with card crafting or fireside chats.
The hallmark Cozy Media experience intertwines gaming elements with low-stress missions against charming backdrops. Developers are striving to replicate these cozy aesthetics in social applications. From group discussions to online book clubs, the emphasis is on comfort. Yet, it transcends mere aesthetics; Cozy Media platforms intentionally restrict interactions with random strangers, directing users instead toward trustworthy friends.
One app I’ve been using frequently is Retro. Unlike Instagram, where many creators first built their audiences, Retro is designed primarily for sharing among small circles of trusted friends. There’s no algorithm promoting random content from strangers; when I log into Retro, it feels as though I’m hanging out with peers rather than wading through a deluge of nonsensical content and advertisements. My posts there are meant for a select few, allowing for meaningful interaction rather than shouting into an algorithmic void.
Cozy media often helps you connect with a small group of friends in a friendly and calm environment.
While Cozy Media may provide solace in chaotic times, we still need news and analysis. Regrettably, many reliable news outlets are in turmoil, and numerous American journalists, including some from the Washington Post, New York Times, and National Public Radio, have left their jobs, citing dwindling resources and threats to editorial independence.
Some, such as economist Paul Krugman and tech researcher Molly White, have successfully launched crowdfunded newsletters. Nonetheless, many journalists prefer not to work alone, as quality reporting often requires collaboration. As a result, several have banded together in worker-owned cooperatives to establish new publications while retaining institutional resources such as legal support, editing, and camaraderie. This model also benefits readers, who are spared the need to hunt down and subscribe to dozens of individual newsletters just to keep abreast of current affairs.
The worker-owned cooperative model has already proven successful for several publications launched in recent years. For example, 404 Media delivers vital news about technology and science. Defector is another worker-owned cooperative, focused on sports and politics. Aftermath covers gaming, while Hearing Things specializes in music. Flaming Hydra (to which I contribute) publishes political analysis, interviews, and cultural criticism. Additionally, Coyote Media is preparing to launch in the San Francisco Bay Area to cover local news, and many other worker-owned local media cooperatives are emerging.
Just like mass media, social media also contributes to feelings of loneliness and isolation. The essence of Cozy Media and worker-owned publications lies in the restoration of community and trust. We might be witnessing the dawn of a new information ecosystem aimed at helping us comprehend the world once more.
Annalee’s Week
What I’m reading
Between Two Rivers, Moudhy Al-Rashid’s wonderful history of ancient Mesopotamia.
What I’m seeing
A new media podcast from former CNN reporter Oliver Darcy titled Power Lines.
What I’m working on
Writing an article due to be published soon in Flaming Hydra.
Annalee Newitz is a science journalist and author. Their latest book is Automatic Noodles. They co-host the Hugo award-winning podcast Our Opinions Are Correct. You can follow them @annaleen, and their website is techsploitation.com.
While I browse social media, I often feel disheartened by the overwhelming negativity, as if the world were ablaze with hatred. Yet stepping into the streets of New York City for a coffee or lunch with friends presents a stark contrast: everything feels calm. This disparity between the digital realm and my everyday life is jarring.
My work addresses issues like intergroup conflict, misinformation, technology, and climate change, some of humanity’s biggest challenges. Yet online, these topics are discussed with the same breathless fervor as the White Lotus finale or the most recent YouTuber scandal. Everything seems either exaggeratedly amazing or utterly terrible. But is that truly how most of us feel? No. Recent research indicates that the online environment is skewed by a tiny, highly active user base.
In a paper I co-authored with Claire Robertson and Carina Del Rosario, we found significant evidence that social media does not neutrally represent society; instead, it acts as a funhouse mirror, amplifying extreme voices while obscuring more moderate and nuanced perspectives. Much of this distortion stems from a small percentage of overactive online users: just 10% of users generate about 97% of political tweets.
Take Elon Musk’s own platform, X, as a case in point. Despite its vast user base, a select few create the majority of political content. Musk himself posted 1,494 times within the first 15 days of the government efficiency cuts carried out by his so-called department of government efficiency (Doge), and his prolific posting often spread misinformation to his 221 million followers.
On February 2nd, he claimed, “Did you know that USAID used your taxes to fund bioweapon research, including Covid-19, that killed millions of people?” This fits a broader pattern in which a small number of users drive misinformation: just 0.1% of users share 80% of false news. Twelve accounts, dubbed the “disinformation dozen,” were responsible for much of the vaccine misinformation seen on Facebook during the pandemic, creating a misleading impression of widespread vaccine hesitancy.
Similar trends can be identified across the digital landscape. While a small faction engages in toxic behaviors, they disproportionately share hostile or misleading content on various platforms, from Facebook to Reddit. Most individuals do not contribute to fueling the online outrage; however, superusers dominate our collective perception due to their visibility and activity.
This leads to broader societal issues, as humans form mental models of what they perceive others think, shaping social norms and group dynamics. Unfortunately, on social media, this shortcut can misfire. We encounter not a representative sampling of views, but rather an extreme flow of emotionally charged content.
Consequently, many people mistakenly believe society is far more polarized and misinformed than it really is. We come to see those across generational gaps, political divides, or fandoms as radical, malicious, or simply foolish. Our information diets are shaped by a sliver of humanity that incessantly posts about its work, identity, or obsessions.
Such distortion fosters pluralistic ignorance, in which people act on a misreading of what others actually believe and do. Think of voters who see only outrage-driven narratives and so assume there is no common ground on issues like immigration and climate change.
Yet, the challenge isn’t solely about extremists—it’s the design and algorithms of these platforms that exacerbate the situation. Built to boost engagement, these algorithms favor sensational or divisive content, promoting users who are most likely to skew shared realities.
The problem compounds itself. Imagine a bustling restaurant: as one table raises its voice, others speak up to be heard, and soon it seems everyone is shouting. The same dynamics play out online, with users exaggerating their views to capture attention and approval. Even people who are not typically extreme may mirror such behavior in order to gain traction.
Most of us are not diving into trolling battles on our phones; we’re preoccupied with family, friends, or simply seeking lighthearted entertainment online. Yet, our voices are overshadowed. We have effectively surrendered the mic to the most divisive individuals, allowing them to dictate norms and actions.
With over 5 billion people engaging on social media, this technology is here to stay. However, the toxic dynamics I’ve described don’t have to prevail. The initial step is recognizing this illusion and understanding that a silent majority often exists behind every heated thread. As users, we can take back control by curating our feeds, avoiding anger traps, and ignoring sensational content. Consider it akin to adopting a healthier, less processed informational diet.
In a recent series of experiments, we paid participants to unfollow the most divisive political accounts on X. A month later, they reported 23% less hostility towards opposing political groups. Their experience was so positive that nearly half chose not to re-follow those accounts after the study ended. Furthermore, those who maintained a healthier news feed reported diminished hostility even 11 months later.
Platforms can easily adjust algorithms to avoid highlighting the most outrageous voices, instead prioritizing more balanced or nuanced content. This is what most people desire. The Internet is a powerful tool that can provide value. However, if we continue to reflect only a distorted funhouse version of reality shaped by extreme users, we will all face the repercussions.
Jay Van Bavel is a psychology professor at New York University.
Social Media and Short-Form Video Platforms Drive Language Innovation
lisa5201/getty images
Algospeak by Adam Aleksic; Every (UK, 17 July), Knopf (US, 15 July)
Nothing ages you like unfamiliar slang. In Adam Aleksic’s book Algospeak: How Social Media Will Change the Future of Language, phrases like “Pierce Your Gyat for Rizzler” and “WordPilled Slangmaxxing” remind me that, as a millennial, I’m now as distant from today’s Gen Alpha as boomers are from me.
A linguist and content creator (@etymologynerd), Aleksic charts a new wave of linguistic innovation fueled by social media, particularly short-form video platforms like TikTok. The term “algospeak” has traditionally referred to euphemisms used to avoid online censorship, with recent examples including “unalive” (in reference to death) or “segg” (for sex).
However, the author insists on broadening the definition to encompass all language aspects affected by the “algorithm.” This term refers to the various, often opaque processes social media platforms use to curate content for users.
In his case, Aleksic draws on his experience of earning a living through educational videos about language. Like other creators, he is motivated to appeal to the algorithm, which requires careful word selection. A video he created dissecting the etymology of the word “pen” (tracing back to the Latin “penis”) breached sexual content rules, while a discussion on the phrase “from river to sea” remained within acceptable limits.
Meanwhile, videos that explore Gen Alpha terms like “Skibidi” (a largely nonsensical term rooted in scat singing) and “Gyat” (“goddamn,” or a word for one’s backside) have performed particularly well. His findings illustrate how creators adapt their language for algorithmic advantage, with some words crossing from online use into offline life with notable success. When Aleksic spoke to teachers, he found many of these terms had entered everyday classroom slang, with some students learning the word “unalive” before encountering “suicide.”
A standout aspect of the book is its etymology, investigating how algorithms propel words from online subcultures into the mainstream lexicon. He notes that the misogynistic incel community is a surprisingly significant contributor to contemporary slang, its insularity and extremity accelerating linguistic evolution within the group.
Aleksic approaches language trends with a non-judgmental eye. He notes that “unalive” parallels earlier euphemisms like “deceased,” while “Skibidi” is reminiscent of “Scooby-Doo.” He repeatedly pushes back against the mischaracterization of slang along arbitrarily defined generational lines, and against narratives that cast the normal evolution of language as toxic.
The situation becomes more intricate when slang enters mainstream usage through cultural appropriation. Many contemporary slang terms, like “cool” before them, trace back to the Black community (“Thicc,” “bruh”) or originate from the LGBTQ ballroom scenes (“Slay,” “Yas,” “Queen”). Such wide-ranging adoptions can sever these terms from their historical contexts, often linked to social struggles and further entrenching negative stereotypes about the communities that birthed them.
Preventing this loss of context is difficult; the fate of successful slang is to be stripped of its original nuance. Social media has drastically accelerated the pace of language change, and while Algospeak’s examples may date quickly, its fundamental insights into how technology shapes language will remain relevant for as long as the algorithms do.
Victoria Turk is a London-based author
While at work, Leila Rivera received a text from her boyfriend: someone on Reddit was searching for her.
In the comments of a post on the r/warpedtour subreddit, attendees of the punk rock and emo music festival were looking for missed connections. Rivera recognized one message that mentioned “Leila/Leila (the short girl in a red top)” as likely being from a guy she had met during the band Sweet Pill’s performance at the Warped Tour in Washington, DC, back in June.
“You tapped my shoulder and asked me to help you surf the crowd,” he wrote. “I attempted to lift you up, but no one nearby offered to help, so I awkwardly had to back off. Honestly, I couldn’t assist after that.”
The poster included his Instagram handle, prompting 29-year-old Rivera, who works in real estate, to reach out. She expressed gratitude for his kind message, despite having a boyfriend. The two quickly became friends over DMs and plan to reunite at next year’s Warped Tour in DC.
“I want to meet up and see if he can launch me into the air again,” Rivera said. “I have a boyfriend, but I’m glad to have a friend in him.”
Straddling the Gen Z-millennial divide, Rivera didn’t grow up with Craigslist’s missed connections, where people posted public messages in the hope of tracking down strangers they had crossed paths with. For many who lacked the courage to post, the ads provided voyeuristic entertainment.
Such posts became popular, reminding readers of the random wonder of city life. In 2010, Craigslist estimated that around 8,000 new ads were posted on New York City’s Missed Connections page each week.
I remember the heyday of Craigslist missed connections. (Recent posts read: “We met at a Rockaway BBQ”; “We locked eyes for what felt like ages on the 86th.”) The rise of social media and dating apps, however, dulled the genre’s cultural relevance. A decade later, young people appear to be reviving the tradition on platforms like Reddit and TikTok.
On Reddit, subreddits like r/warpedtour host “megathreads” for missed connections, where commenters recount their encounters and leave contact details in hopeful anticipation. Similar threads exist for cities like Baltimore, Chicago, Cincinnati, and Minneapolis, as well as for festivals like Bonnaroo, Coachella, and Electric Forest, and for the Berghain club in Berlin (where phone use is frowned upon on the dancefloor, giving missed connections extra appeal).
“Searching for a beautiful woman with striking eyes, at Popeyes,” wrote one Redditor from Halifax, Nova Scotia. Someone in Arlington, Virginia, searched for a woman he had encountered at a bar while on a date with someone else. In St. Louis, a visitor to a chemotherapy ward recalled crying in the hallway with a stranger; he still thinks about her.
Young people say the practice, in a romantic context, is a remedy for dating fatigue and the embodiment of an urban fantasy: an analog alternative to the apps, reminiscent of classic romantic comedies in which characters search hopelessly for love.
“You move to a big city and feel this hope for unexpected encounters and enchanting moments everywhere,” said Maggie Hertz, a DJ at New Jersey’s freeform radio station WFMU and host of Cat Bomb!, a show that features missed connections submitted by listeners. “There’s something incredibly vulnerable about writing a missed connection.”
Hertz admitted that none of the missed connections on her show have led to real-life meetups, which doesn’t detract from the enjoyment.
“My favorite call came at 3 AM,” Hertz recalled. “The caller was excited and nervous—possibly still buzzing from a few drinks. She was at a diner in Brooklyn and mentioned a waiter who told her she resembled Jake Gyllenhaal.”
Recently, Karly Laliberte spotted an attractive guy while leaving Trader Joe’s in Boston’s Seaport area. “He was tall, which is rare in Boston,” said Laliberte, 30, who works in sports marketing. “It’s a stereotype; we call it ‘Short King City.’ In a movie version of the story, I’d cast Jacob Elordi.” The two walked in the same direction for a few blocks, and she caught herself stealing glances and feeling his gaze. She almost said hi but held back, not wanting to interrupt his conversation.
Laliberte returned home to film a TikTok, urging viewers to help identify the man. “Within hours, it racked up 50,000 views,” she shared. “TikTok lets you tag your city, making local posts easily visible. It felt like the perfect platform to share missed connections.”
Though she never found the man, Laliberte received messages from people suggesting potential matches—some of whom turned out to be guys she had previously dated.
Laliberte has spent years using dating apps but found herself constantly encountering the same people. Frustrated with swiping, she yearns for charming, old-fashioned interactions. “I crave face-to-face connections,” she said. “I long for authentic, less forced relationships. Why not seek out someone who caught your eye outside Trader Joe’s?”
While young adults today may be rediscovering the value of missed connections, the practice predates even Craigslist. Francesca Beauman, a British historian and author of “Shapely Ankle Preferr’d,” traced its origins back to 1709.
The earliest ad, published in the original Tatler, described a gentleman who wished to thank the young woman who had helped him out of a boat at Whitehall and wanted to know where he might wait on her. The woman was instructed to contact one Mr. Samuel Reeves. Beauman found a marriage record under the same name a year later, though it remains unclear whether the connection led to the wedding.
Even 300 years later, evidence that these methods lead to true love may be scarce, but people continue to hope. Recently, actor Colman Domingo revealed he met his husband through a missed connections post in 2005 (they had made strong eye contact at a Walgreens in Berkeley, California). And although Laliberte didn’t find her tall guy, she put the chances of her posting another missed connection at “100%.”
“We are all hopeless romantics, ever hopeful,” Beauman said. “Reading them is enjoyable. Placing and responding to them is equally entertaining.”
Jesus strolls through a lush green field holding a selfie stick. The opening notes of an ethereal Billie Eilish tune rise like a prayer. “It’s all good, besties, this is my choice. Totally a genuine save-humanity arc,” he smiles. “Love that for me.” Jesus playfully tucks his hair behind his ears, Jonathan Van Ness-style.
We cut to a new scene. He still wields the selfie stick, but now he’s wandering through a gritty town. “So, I told the team I had to die. Peter literally tried to gaslight me. I’m not being dramatic, babe. This is a prophecy.”
Cut to Jesus at a candlelit feast. “So here I am in the middle of dinner, and Judas couldn’t even hold my gaze,” he says, shaking his head, then turning to the camera with a knowing grin. “Such a phony!”
Initially, videos of this genre—a retelling of biblical tales through the lens of Americanized video blog culture—may seem bizarre and sacrilegious. However, might they represent a unique synthesis of the Holy Trinity of 2025: AI, influencer culture, and rising conservatism? Are these videos indicative of our era? Do they reflect the concerns of American conservatism? Am I being subtly influenced towards Christianity? Why do these Biblical inspirations feel oddly alluring? Why can’t I look away? What’s happening to my brain?!
My first encounter with these biblical video blogs was while I lounged in bed. When the algorithm unveiled Joseph of Nazareth, I momentarily halted my endless scrolling. “Whoa, look at that fit! Ancient rock vibes.” I wiped the drool from my chin and took a moment. Although mindlessly scrolling may not usually be a cure for mental fatigue, that day, I felt like Daniel in the lion’s den or Jonah in the whale. My commitment to scrolling brought me a sense of salvation.
In my younger days, I flirted with religion. When my grandparents visited, I would kneel in prayer; I attended Bible studies and joined youth groups to meet friends and boys. I had a brief infatuation with Hillsong (I was 13 and just wanted plans on a Friday night), which ended when: a) the girl in front of me screamed, “I’ve been captured by the devil,” and b) I sneaked behind the church curtains to find the cool teenagers locked in each other’s gaze.
My attitudes towards faith and spirituality have since shifted. Now, my spiritual routine consists of exclamations like “Jesus, take the wheel!” or “What a deity!” as I snap photos of church art while travelling through Catholic countries, to share on Instagram later.
Recently, I came clean to a friend about my obsession, only to discover I was evangelizing to a fellow enthusiast. She suggested that Jesus was essentially the first influencer, and that Mary and Joseph were the archetypal toxic vlog parents. If Judas were alive today, he would be uploading lengthy, unedited rant videos to YouTube.
Momentarily, I ponder the environmental ramifications. How much water was used for Mary’s perfect dab? What resources were consumed so AI Jesus could jokingly narrate a wine-making tutorial? And how much longer does the planet have? Hold on! Shhh, the next video is starting.
Adam is now seated in a podcast studio, headphones on, microphone positioned, dressed casually in leaf-patterned fabric. “So, God creates me? Boom. The first man. No parents, nothing. And I’m like, ah… I’m literally going to be everyone’s dad! When they fall out, I’ll be dealing with the clashes endlessly. Another! Another! Another! Another!”
Australia’s online safety regulator has advised that YouTube should not be granted an exemption from the social media ban for under-16s, saying video streaming platforms can expose children to harmful content.
YouTube, in contrast, contends that the government should stick to its draft rules, which indicated the platform would be exempted.
So what are the arguments for and against including YouTube in the ban? And what would it mean for a child who watches YouTube if the platform is covered?
Why did the government consider exempting YouTube initially?
Last November, when parliament passed legislation banning children under 16 from social media, then communications minister Michelle Rowland indicated that YouTube would be exempted.
This exemption was justified on the grounds that YouTube serves “an important purpose by providing youth with educational and health resources.”
As Guardian Australia revealed in April, the exemption was promised just 48 hours after direct lobbying of the minister by YouTube’s global CEO.
The decision surprised YouTube’s competitors, including Meta, TikTok, and Snapchat; TikTok described it as a “special deal.” YouTube offers its own vertical video product, similar to Instagram reels and TikTok, leading competitors to argue it should be included in the ban.
What led the eSafety Commissioner to recommend banning YouTube?
As new regulations regarding social media platforms were being formulated, the Minister consulted with eSafety Commissioner Julie Inman Grant.
In a recent report, Inman Grant highlighted findings from a youth survey indicating that 76% of individuals aged 10 to 15 use YouTube. The survey also showed that 37% of children who experienced potentially harmful content online encountered it on YouTube.
She also cited research from the Black Dog Institute linking increased time spent on YouTube with higher levels of depression, anxiety, and insomnia among young people.
“Currently, YouTube boasts persuasive design elements associated with health risks, including features that could encourage unnecessary or excessive usage (such as autoplay, social validations, and algorithm-driven content feeds),” noted Inman Grant.
“When combined, these elements can lead to excessive engagement without breaks and heighten exposure to harmful material.”
Inman Grant concluded that there is insufficient evidence to suggest that YouTube provides exclusively beneficial experiences for children under 16.
However, she noted that children will still be able to view content on YouTube while logged out, since the ban only prevents them from holding accounts.
What is YouTube’s position?
In a recent statement, Rachel Lord, YouTube’s senior public policy manager for Australia and New Zealand, said the eSafety Commissioner’s advice was inconsistent with the government’s commitments, its own research into community sentiment, and the platform’s suitability for younger audiences.
YouTube has been developing age-appropriate offerings for over ten years, and in Q1 of 2025, the company removed 192,856 videos for breaching its hate speech and abusive content policies, a 22% increase from the previous year.
The platform asserts its role primarily as a video hosting service rather than a promoter of social interaction. A survey conducted among Australian teachers revealed that 84% use YouTube monthly as a resource for student learning.
YouTube also suggested that the eSafety Commissioner, and potentially the communications minister, were reconsidering the exemption in response to pressure from its competitors.
What about YouTube Kids?
YouTube asserts that it offers a platform tailored for younger users, restricting both the uploading of content and commenting features for children.
The company is not seeking an exemption solely for its children’s product.
Asked about YouTube Kids at the National Press Club, Inman Grant indicated the platform appears low-risk, as it is designed specifically for children and has adequate safety measures, but added, “I cannot respond until I have seen the regulations.”
Can children access YouTube without an account?
Yes. Inman Grant confirmed that if teachers wish to show videos to their students, they can access YouTube without needing to log in.
She noted that YouTube has “opaque algorithms that create addictive ‘rabbit holes’,” but said that viewing the site while logged out is a more passive experience, without the persuasive design features that come with a personalized account.
Responding to YouTube’s comments on Thursday, Inman Grant noted that the company’s case for exemption rests on allowing young Australians to access its “diverse content,” but clarified that her advice does not mean children will lose access to YouTube’s educational resources.
“The new law only restricts children under 16 from holding their own accounts. They will still be able to access YouTube and other services while logged out,” she added.
“There is nothing preventing educators with their own accounts from continuing to share educational content on YouTube or other platforms approved for school use.”
What are the next steps?
The Minister will finalize the guidelines and identify the social media platforms covered by the ban in the upcoming months.
A trial of age verification technology is expected to report to the minister by the end of July, informing what technology platforms must implement to prevent under-16s from holding accounts.
The government has announced that the ban is anticipated to come into force in early December.
YouTube has accused the nation’s online safety regulator of sidelining parents and educators by advocating for the platform to be included in the proposed social media ban for users under 16.
The eSafety commissioner, Julie Inman Grant, has urged the government to reconsider its decision to exclude the video-sharing platform from the age restrictions that will apply to apps such as TikTok, Snapchat, and Instagram.
In response, YouTube insists the government should adhere to the draft regulations and disregard Inman Grant’s recommendations.
“The eSafety Commissioner’s position offers inconsistent and contradictory guidance, seeking to ban a platform the government had previously acknowledged should be exempt,” remarked Rachel Lord, YouTube’s public policy and government relations manager.
“eSafety’s advice overlooks the perspectives of Australian families, educators, the wider community, and the government’s own conclusions.”
Inman Grant said in her National Press Club address on Tuesday that the proposed age limits are better described as “delays” than outright “bans,” and are scheduled to take effect in mid-December. Details of how age verification will be implemented for social media users remain unclear, though Australians should expect a “waterfall of tools and techniques.”
Guardian Australia reported that various social media platforms have voiced concerns over their lack of clarity regarding legal obligations, expressing skepticism about the feasibility of developing age verification systems within six months of the impending deadline.
Inman Grant pointed out that age verification should occur on individual platforms rather than at the device or App Store level, noting that many social media platforms are already utilizing methods to assess or confirm user ages. She mentioned the need for platforms to update eSafety on their progress in utilizing these tools effectively to ensure the removal of underage users.
Nevertheless, Inman Grant acknowledged the system will be imperfect. “I’m the first to admit that companies may not get it right every time. These technologies won’t solve everything, but used in combination they can achieve a much greater rate of success.”
“The social media restrictions aren’t a panacea, but they introduce some friction into the system. This pioneering legislation aims to reduce harm for parents and caregivers and shifts the responsibility back to the companies themselves,” Inman Grant stated.
“We regard large tech firms as akin to an extraction industry. Australia is calling on these businesses to provide the safety measures and support we expect from nearly every other consumer industry.”
YouTube wants the government to stick to the draft rules drawn up under former communications minister Michelle Rowland, which exempted the platform alongside services such as Kids Helpline and Google Classroom so that children can still access educational and health support.
Communications Minister Anika Wells is expected to decide on the commissioner’s recommendations and the draft rules within weeks, according to a federal government source.
YouTube emphasized that its service focuses on video viewing and streaming rather than social interaction.
The company asserted that it has led the industry in creating age-appropriate products and addressing potential harms, and said it has made no policy changes that would adversely affect younger users. YouTube reported removing more than 192,000 videos for violating its hate speech and abuse policies in the first quarter of 2025 alone, and it has developed a product designed specifically for young children.
Lord urged the government to hold to its original position and exempt YouTube from the restrictions.
“The eSafety advice contradicts the government’s own commitments, its research into community sentiment, independent studies, and perspectives from key stakeholders involved in this matter.”
Shadow Communications Minister Melissa McIntosh emphasized the need for clarity regarding the forthcoming reforms from the government.
“The government must clarify the expectations placed on social media platforms and families to safeguard children from prevalent online negativity,” she asserted.
“There are more questions than answers regarding this matter, including which verification technologies platforms will need to adopt to implement the minimum social media age by December 10, 2025.”
Groundbreaking research indicates that middle-aged individuals in the initial stages of Alzheimer’s disease may become more sociable.
Utilizing data from nearly half a million Britons over 40, the study revealed that those at a high genetic risk for Alzheimer’s are more likely to enjoy positive social lives, have happy family relationships, and experience less isolation.
“This finding was remarkable for us,” Dr. Scott Zimmerman, a senior researcher at Boston University, told BBC Science Focus.
“We anticipated finding evidence of withdrawal from social networks, possibly due to changes in social activities and mood regulation. Instead, we encountered the opposite.”
The research, published in the American Journal of Epidemiology, concluded that individuals showing early signs of Alzheimer’s may actually engage more with family and friends, who notice subtle changes in cognitive function and provide additional support through daily interactions.
Dementia has often been linked to feelings of social isolation and loneliness. However, it remains unclear whether such loneliness is a risk factor for developing Alzheimer’s or if social withdrawal stems from the disease itself.
These findings suggest that adults genetically predisposed to Alzheimer’s do not withdraw socially in the years before significant symptoms emerge and a formal diagnosis is made.
“Their social life may expand,” explained co-author Dr. Ashwin Kotwal, an associate professor of medicine at UCSF. He noted that this study does not contradict previous research on Alzheimer’s and social withdrawal but rather enhances the understanding of the relationship.
“This study suggests that the connection between social isolation and dementia risk, supported by other research, is not simply a result of early symptoms leading to withdrawal,” co-researcher Dr. Louisia Chen, a postdoctoral researcher at Boston University, told BBC Science Focus.
“This underscores the importance of maintaining social connections for better brain health.”
Adults in their 40s, 50s, and 60s with a genetic predisposition to dementia showed a greater tendency to thrive socially – Credit: Skynesher via Getty
In addition to genetic predispositions, various lifestyle factors can influence the development of dementia, including exercise habits, smoking, blood pressure, glucose levels, sleep patterns, mental health, and medication use.
These modifiable factors may explain around 30% of Alzheimer’s cases, with loneliness potentially being one of them.
“In an era marked by decreasing social engagement, we hope families, communities, and policymakers will explore ways to foster healthy social interactions throughout people’s lives,” Dr. Jacqueline Torres, an associate professor of epidemiology and biostatistics, told BBC Science Focus.
About our experts
Dr. Scott Campbell Zimmerman is a senior researcher in epidemiology at Boston University’s Faculty of Public Health.
Dr. Ashwin Kotwal is an assistant professor of medicine in the University of California, San Francisco (UCSF) School of Medicine’s Department of Geriatric Medicine. He co-leads UCSF’s social connection and aging lab, focusing on the health impacts of loneliness and social isolation among older populations.
Dr. Louisia Chen is a postdoctoral researcher in epidemiology at Boston University’s Faculty of Public Health. Her work focuses on how social determinants over the life course contribute to the risks and disparities related to dementia.
Dr. Jacqueline Torres is a social epidemiologist at the UCSF School of Medicine and an associate professor of epidemiology and biostatistics. Her current research examines how policies, families, and communities influence population health, particularly during middle and late stages of life.
An unseen conflict unfolded earlier this month as missiles and drones flew through the night sky separating India and Pakistan.
After the Indian government announced Operation Sindoor, military strikes on Pakistan launched in response to a militant attack in Kashmir that Delhi blamed on Islamabad, rumors of Pakistan’s defeat rapidly circulated online.
What began as mere assertions on social media platforms like X quickly escalated into a cacophony of boasts about India’s military strength, presented as “breaking news” and “exclusives” on some of the country’s leading news channels.
These posts and reports claimed that India had downed several Pakistani jets, captured pilots, attacked Karachi’s port, and taken control of Lahore. Other unfounded claims suggested that the powerful chief of the Pakistani military had been arrested and a coup carried out. One widely shared post stated, “We’ll be having breakfast in Rawalpindi tomorrow,” referring to the Pakistani city that houses the military’s headquarters.
Many of these assertions included videos of explosions, collapsing buildings, and missiles being launched from the air. The issue was that none of these were factual.
“Global Trends in Hybrid Warfare”
The ceasefire on May 10th pulled both nations back from the brink of full-scale war after their most intense escalation in decades, triggered by a militant attack on tourist sites in Indian-controlled Kashmir that killed 26 people, most of them Indian tourists. India swiftly blamed Pakistan for the atrocity, while Islamabad denied involvement.
Even with the cessation of military hostilities, analysts, fact-checkers, and activists have meticulously tracked the surge of misinformation that proliferated online during this conflict.
In Pakistan, misinformation also spread widely. Just before the conflict erupted, the Pakistani government lifted a ban on X, which researchers later identified as a source of misinformation, albeit not at the same magnitude as in India.
A fabricated image intended to depict fighter planes engaging in combat in Udangh Haar, India. Photo: x
Claims of military victories circulated heavily on Pakistani social media too, alongside recycled and AI-generated footage amplified by mainstream media outlets, prominent journalists, and government officials, feeding false narratives about captured Indian pilots, military coups, and the dismantling of India’s defenses.
Fabricated reports also claimed that Pakistani cyber-attacks had disabled much of India’s power infrastructure, and that Indian troops were surrendering by raising white flags. Video game footage, in particular, became a favored vehicle for misinformation portraying Pakistan’s military in a flattering light.
A recent report on social media conflicts surrounding the India-Pakistan situation, released last week by the civil society organization The London Story, elaborated on how platforms like X and Facebook have become fertile grounds for spreading wartime narratives, hate speech, and emotionally charged misinformation, leading to an environment rich in nationalist fervor on both sides.
In a written statement, a representative from Meta, the parent company of Facebook, claimed to have implemented “significant steps to combat misinformation,” including the removal and labeling of misleading content and limiting the reach of stories flagged by fact-checkers.
Joyojeet Pal, an associate professor at the University of Michigan’s School of Information, remarked that the scale of misinformation in India had “surpassed anything seen previously,” affecting both sides of the conflict.
Pal noted that the misinformation campaigns went beyond the typical nationalist propaganda prevalent in both India and Pakistan.
Fraudulent images purporting to show the Narendra Modi Stadium in India on abandoned islands have circulated and been debunked on X. Photo: x
Analysts argue this exemplifies the emerging digital battleground of modern warfare, where strategic misinformation is weaponized to manipulate narratives and heighten tensions. Fact-checkers point out that the flood of old footage and misleading claims of military victories mirrors patterns seen in the early stages of Russia’s invasion of Ukraine.
The Center for the Study of Organized Hate (CSOH), based in Washington, D.C., has tracked and recorded misinformation from both nations, cautioning that the manipulation of information in the recent India-Pakistan conflict is “not an isolated occurrence but part of a larger global trend in hybrid warfare.”
CSOH executive director Raqib Hameed Naik said some social media platforms showed “significant failures” in controlling the spread of disinformation originating from both India and Pakistan. Of 427 key posts CSOH analyzed on X, some of which garnered nearly 10 million views, only 73 carried warning labels. X did not respond to requests for comment.
Initial fabricated reports from India predominantly circulated on X and Facebook, often shared by verified right-wing accounts. Numerous posts openly expressed support for the Bharatiya Janata Party (BJP) government, which is known for its Hindu nationalist stance. Some BJP politicians even shared this content.
Deepfake videos altering speeches by Narendra Modi and other Indian officials were also disseminated on the same platforms. Photo: x
Examples circulating included 2023 footage of Israeli airstrikes in Gaza incorrectly labeled as Indian strikes against Pakistan, and images from Indian naval drills misrepresented as proof of an assault on Karachi Port.
Images from video games were falsely presented as real footage of the Indian Air Force downing a Pakistani JF-17 fighter jet, while scenes from the Russia-Ukraine war were claimed to show “major airstrikes in Pakistan.” AI-generated visuals of purported Indian victories were also disseminated, along with manipulated videos of Turkish pilots presented in fabricated reports of captured Pakistani personnel, and doctored images used in false reports claiming Pakistan’s former prime minister Imran Khan had been assassinated.
Many of these posts, initially generated by Indian social media users, achieved millions of views, and such misinformation was later featured in some of India’s most prominent television news segments.
“The Fog of War Accepted as Reality”
The credibility of Indian mainstream media, already diminished by the government’s strong influence under Modi, has come under renewed scrutiny, and several prominent anchors have issued public apologies.
The Indian human rights organization Citizens for Justice and Peace (CJP) lodged a formal complaint with the broadcasting authority, citing “serious ethical violations” in the coverage of Operation Sindoor across six major television networks.
CJP Secretary Teesta Setalvad stated that these channels have completely neglected their duty as impartial news sources, turning into “propaganda collaborators”.
Kanchan Gupta, a senior adviser to India’s Ministry of Information and Broadcasting, refuted claims of governmental involvement in the misinformation efforts. He asserted that the government is “very cautious” about misinformation and has provided clear guidelines for mainstream media reporting on the conflict.
“We established a surveillance center operating 24/7 to monitor any disinformation that could have a cascading effect, and a fact check was promptly issued. Social media platforms collaborated to eliminate a multitude of accounts promoting this misinformation.”
Gupta said “strongly worded” notices had been sent to several news channels for breaching broadcasting rules. Nonetheless, he emphasized that the fog of war is widely accepted as a reality, and that the intensity of reporting tends to escalate during any conflict, overt or covert.