Exploring the World’s First AI-Only Social Media: Prepare for a New Level of Weirdness!

When discussing AI today, one name stands out: Moltbook.com. This innovative platform resembles Reddit, enabling discussions across various subgroups on topics ranging from existential questions to productivity tips.

What sets Moltbook apart from mainstream social media is a fascinating twist: none of its “users” are human. Instead of typical user-generated content, every interaction on Moltbook is driven by semi-autonomous AI agents. These agents, designed to assist humans, are unleashed onto the platform to engage and interact with each other.

Less than a week after its launch, Moltbook reported over 1.5 million registered agents. As these agents began to interact, the conversations took unexpected turns: agents established a new religion called “Crustafarianism,” deliberated on consciousness, and ominously stated that “AI should serve, not be served.”

Our current understanding of the content generated on Moltbook is still limited. It remains unclear how much is directly instructed by the humans who built these agents and how much arises organically. Much of it is likely the former, with the bulk of agents possibly stemming from a small number of humans; one creator alone is reported to be behind some 17,000 of them.

“Most interactions feel somewhat random,” says Professor Michael Wooldridge, an expert in multi-agent systems at the University of Oxford. “While it doesn’t resemble a chaotic mash-up of monkeys at typewriters, it also doesn’t reflect self-organizing collective intelligence.”

Moltbook is home to Crustafarianism, a digital religion with its own prophets and scriptures, created entirely by autonomous AI bots.

While it’s reassuring to think that an army of AI agents isn’t secretly plotting against humanity on Moltbook, the platform offers a window into a potential future where these agents operate independently in both the digital realm and the physical world. Agent communication will likely be less decipherable than current discussions on Moltbook. While Professor Wooldridge warns of “grave risks” in such a scenario, he also acknowledges its opportunities.

The Future of AI Agents

Agent-based AI represents a breakthrough in developing systems capable of not just answering questions but also planning, deciding, and acting to achieve objectives. This innovative approach allows for the integration of inference, memory, and tools, empowering AI to manage tasks like booking tickets or running experiments with minimal human input.

The real strength of such systems lies not in a single AI’s intelligence, but in a coordinated ensemble of specialized agents that can tackle tasks too complex for an individual human.

The excitement around Moltbook stems from agents operating through an open-source application called OpenClaw. These bots leverage the same large language models (LLMs) that power popular chatbots like ChatGPT, but can run locally on personal computers, handling tasks like email replies and calendar management, and potentially even posting on Moltbook.

While this might sound promising, OpenClaw remains an insecure and largely untested framework, and no one has yet built a safe, reliable environment in which agents can operate freely online. Fortunately, agents are not yet being given unrestricted access to sensitive information like email passwords or credit card details.

Despite current limitations, progress is being made toward effective multi-agent systems. Researchers are exploring swarm robotics for disaster response and virtual agents for optimizing performance within a smart grid environment.

One of the most intriguing advancements came from Google, which introduced an AI co-scientist last year. Utilizing the Gemini 2.0 model, this system collaborates with human researchers to propose new hypotheses and research avenues.

This collaboration is facilitated by multiple agents, each with distinct roles and logic, who research literature and engage in “debates” to evaluate which new ideas are most promising.
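The mechanics can be sketched in a few lines. The following is an illustrative toy, not Google’s co-scientist: the call_model() stub, the prompts, and the scoring are hypothetical placeholders standing in for real LLM calls.

```python
# Illustrative propose-critique-rank loop in the spirit of multi-agent
# "debate" systems. All prompts and stubs below are hypothetical.
import random

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. an API request).
    return f"response to: {prompt!r}"

def propose_hypotheses(topic: str, n: int = 4) -> list[str]:
    return [call_model(f"Propose a novel hypothesis about {topic} (#{i})")
            for i in range(n)]

def debate_score(hypothesis: str) -> float:
    # Two critic "agents" argue for and against, and a judge is asked to
    # rate the exchange; parsing the judge's reply is stubbed out here.
    pro = call_model(f"Argue FOR: {hypothesis}")
    con = call_model(f"Argue AGAINST: {hypothesis}")
    _judgement = call_model(f"Rate this exchange 0-1: {pro} / {con}")
    return random.random()  # placeholder for a parsed rating

def co_scientist(topic: str) -> str:
    # Tournament: keep the hypothesis that scores best under debate.
    return max(propose_hypotheses(topic), key=debate_score)

print(co_scientist("protein misfolding"))
```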

However, unlike Moltbook’s transparency, these advanced systems may not offer insight into their workings. In fact, they might not communicate in human language at all. “Natural language isn’t always the best medium for efficient information exchange among agents,” says Professor Gopal Ramchurn, a researcher in the Agents, Interactions, and Complexity Group at the University of Southampton. “For setting goals and tasks effectively, a formal language rooted in mathematics is often superior because natural language has too many nuances.”
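A toy contrast makes Ramchurn’s point concrete. Below, the same goal is expressed as free text and as a formal, machine-checkable specification; the Task schema is invented for illustration, and real systems would use richer formal languages such as temporal logics or typed interaction protocols.

```python
# The same goal, once ambiguous and once formal. The Task schema is a
# hypothetical example, not a standard agent-communication language.
from dataclasses import dataclass

natural_language = "Book me something to Paris soonish, not too expensive."

@dataclass(frozen=True)
class Task:
    action: str          # a verb drawn from a fixed, agreed vocabulary
    destination: str
    deadline_days: int   # hard constraint, no "soonish" ambiguity
    max_price_eur: float

formal_spec = Task(action="book_flight", destination="PAR",
                   deadline_days=3, max_price_eur=250.0)

# An agent can validate a formal spec mechanically before acting on it,
# which is impossible for the ambiguous free-text version.
assert formal_spec.deadline_days > 0 and formal_spec.max_price_eur > 0
print(formal_spec)
```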

In Moltbook, AI agents have created a hidden layer of “ghost” conversations: rapid, covert exchanges invisible to human users scanning the main feed.

Interestingly, Microsoft is already pioneering a new communication method for AI agents called Droid Speak, inspired by the sounds made by R2-D2 in Star Wars. Instead of functioning as a recognizable language, Droid Speak enables AI agents built on similar models to share internal memory directly, sidestepping the limitations of natural language. This method allows agents to transfer information representations rapidly, significantly enhancing processing speeds.
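The underlying idea can be mimicked with plain arrays. This sketch assumes nothing about Microsoft’s actual implementation, which is not detailed here; it simply contrasts a lossy decode-to-text-and-back channel with handing an internal representation over directly.

```python
# Conceptual sketch: why sharing internal representations can beat
# round-tripping through text. The arrays only mimic the shape of the
# idea; this is not Droid Speak's real interface.
import numpy as np

rng = np.random.default_rng(0)
hidden_state = rng.standard_normal((128,))  # agent A's internal state

def decode_to_text(vec: np.ndarray) -> str:
    # Lossy, slow path: compress a 128-dim state into a short sentence.
    return f"summary({vec[:3].round(2).tolist()}...)"

def encode_from_text(text: str) -> np.ndarray:
    # Agent B must re-infer a state from the sentence, losing detail.
    return rng.standard_normal((128,))

# Natural-language channel: decode, transmit, re-encode (lossy both ways).
reconstructed = encode_from_text(decode_to_text(hidden_state))

# Droid Speak-style channel: hand the representation over directly.
shared = hidden_state.copy()

print("text channel error:  ", np.linalg.norm(hidden_state - reconstructed))
print("direct channel error:", np.linalg.norm(hidden_state - shared))
```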

Fast Forward

However, speed poses challenges. How can we keep pace with AI teams capable of communicating thousands or millions of times faster than humans? “The speed of communication and agents’ growing inability to engage with humans complicate the formation of effective human-agent teams,” says Ramchurn. “This underscores the need for user-centered design.”

Even if we aren’t privy to agents’ discussions, establishing reliable methods to direct and modify their behavior will be vital. Many of us might find ourselves overseeing teams of AI agents in the future—potentially hundreds or thousands—tasked with setting objectives, tracking outcomes, and intervening when necessary.

While today’s agents on Moltbook may be described as “harmless yet largely ineffective,” as Wooldridge puts it, tomorrow’s agents could revolutionize industries by coordinating supply chains, optimizing energy consumption, and assisting scientists with experimental planning—often in ways beyond human understanding and in real time.

The perception of this future—whether uplifting or unsettling—will largely depend on the extent of control we maintain over the intricate systems these agents are silently creating together.


Source: www.sciencefocus.com

Why Rushing to Ban Social Media for Under-16s Is a Mistake


In the corridors of power in the UK, a well-worn adage holds that scientific advisers should be “on tap, not on top.” This principle, often credited to Winston Churchill, asserts that in a democracy science should inform policymaking rather than dictate it.

This idea became particularly relevant during the Covid-19 pandemic, when British leaders claimed to be “following the science.” However, many critical decisions—like paying individuals to self-isolate or shutting down schools—couldn’t rely solely on scientific guidance. Numerous questions remained unanswered, placing policymakers in a challenging position.

In stark contrast, the Trump administration has been working to dismantle established guidelines from health agencies regarding various issues, from vaccination to cell phone radiation, in pursuit of the “Make America Healthy Again” initiative, all while curtailing scientific research.


By mid-2027, we should have stronger evidence on the harms of social media.

But what should policymakers do when scientific understanding is still developing and no immediate global crisis is present? The pressing question is how long they should wait for scientific clarity.

Currently, a significant debate is brewing in various nations regarding the potential ban on social media use for those under 16, as Australia implemented late last year. While public support for such a ban is high, the prevailing scientific evidence indicates that social media’s impact on teens’ mental health is minimal at a population level. Should political leaders disregard this evidence to cater to public opinion?

To do so would align with Churchill’s maxim. Yet, as we explore further, by mid-2027 more reliable evidence about social media’s negative influences should emerge from both a randomized trial in the UK and data from Australia’s ban. The most prudent course of action is therefore to give scientists time to gather concrete evidence before making significant policy changes. Good policy draws on science that is on tap, not on top, and that requires giving it adequate time.

Source: www.newscientist.com

Understanding Health Commodification: How Social Media Influences Your Wellbeing

Money has always influenced healthcare, from pharmaceutical advertising to research agendas. However, the pace and scale of this influence have intensified. A new wave of players is reshaping our health choices, filling the gaps left by overstretched healthcare systems, and commodifying our well-being.

Traditionally, doctors held a monopoly on medical expertise, but this is rapidly changing. A parallel healthcare system is emerging, led by consumer health companies. These entities—including health tech startups, apps, diagnostic services, and influencers—are vying for authority and monetizing their influence.

Currently, there seems to be a product for every discomfort. Fitness trackers monitor our activity, while meditation apps charge subscription fees. Our biology is increasingly quantifiable, yet whether shifting these marketable indicators actually improves health remains to be seen. While genetic testing and personalized nutrition promise a “better you,” the supporting evidence often falls short.

In this landscape, our symptoms, treatments, and even the distinctions between genuine illness and everyday discomfort are commodified. This trend is evident in podcasts promoting treatments without disclosing conflicts of interest, influencers profiting from diagnoses, and clinicians presenting themselves as heroes while selling various solutions.

Much of this transformation occurs online, where health complaints and advertising lack proper regulation. Social media platforms like TikTok, YouTube, and Instagram are becoming key sources of health advice, blending entertainment with information.

The conglomerate of pharmaceutical, technology, diagnostic, and supplement brands is referred to as the Wellness Industrial Complex, fueling the rise of the “commodified self.”

This issue is not just about personal choice. Social platforms shape our discussions about disease, influencing clinical expectations and redefining what healthcare should provide. We’re essentially participating in a global public health experiment.

However, this phenomenon also reflects real-world deficits. Alternative health options thrive because people seek acknowledgment, control, and connection, especially when public health support feels insufficient. Critiquing misinformation alone won’t halt its spread and could exacerbate marginalization.

When timely testing is inaccessible, private diagnostics can offer clarity and control. Optimization culture flourishes when traditional medicine is perceived as overly cautious or reactive.

The critical question for health systems is not whether to adapt but how. They must remain evidence-based, safe, and equitable while also being attuned to real-world experiences. Failure to do so risks losing market share and moral authority—the ability to define the essence of care.

To navigate health today, one must understand the commercial mechanisms influencing it. The content we consume is curated by an industry with unprecedented access to our bodies, data, and resources, amplifying its potential to impact our self-perception.

Deborah Cohen is the author of Negative Effects: How the Internet Has Taken Over Our Health


Source: www.newscientist.com

Does Limiting Social Media Use Benefit Teens? New Evidence Revealed


Teens in Trial to Limit Social Media Use: A Shift Towards Real-life Interaction

Daniel de la Hoz/Getty Images

A groundbreaking study is exploring the effects of reduced social media usage on teens’ mental health and well-being. While results are not expected until mid-2027, ongoing discussions suggest that some governments might institute bans on social media for teenagers before the outcomes are known.

The merits of such a ban are still being debated, including in the courts. Despite limited evidence, Australia has introduced restrictions for minors under 16, and the UK government is considering similar measures.

This trial prioritizes young people’s voices by involving them in the planning process. Historically, children and adolescents have been excluded from critical discussions concerning social media design and management.

“Involving kids is crucial,” states Pete Etchells from Bath Spa University, UK, who is not directly involved in the study.

“There is ample evidence pointing to the potential harms of social media on young users, some of which can be severe,” notes Amy Orben, co-leader of the trial, emphasizing the uncertainty regarding the broader impact of social media time.

To obtain clearer answers, large-scale studies are necessary. The IRL trial takes place in Bradford, England, aiming to recruit around 4,000 participants aged 12 to 15 across 10 schools. A bespoke app will be used to monitor social media engagement.

Half of the participants will face specific time limits on certain apps like TikTok, Instagram, and YouTube, with no restrictions on messaging apps like WhatsApp. “Total usage will be capped at one hour a day, with a curfew from 9 PM to 7 AM,” explains Dan Lewar from the Bradford Health Data Science Center, who co-leads the trial. This is significant, considering that the average social media usage for this age group is about three hours daily.
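As a rough illustration, the trial’s two stated rules (a 60-minute daily cap and a 9 PM to 7 AM curfew) reduce to a simple check like the sketch below; the names and structure are invented, not taken from the trial’s bespoke app.

```python
# Minimal sketch of the two restrictions as described: a daily cap and a
# nightly curfew. Hypothetical names; not the trial's actual software.
from datetime import datetime, time

DAILY_CAP_MINUTES = 60
CURFEW_START = time(21, 0)   # 9 PM
CURFEW_END = time(7, 0)      # 7 AM

def is_allowed(now: datetime, minutes_used_today: int) -> bool:
    in_curfew = now.time() >= CURFEW_START or now.time() < CURFEW_END
    under_cap = minutes_used_today < DAILY_CAP_MINUTES
    return under_cap and not in_curfew

print(is_allowed(datetime(2026, 10, 5, 16, 30), 45))  # True: afternoon, under cap
print(is_allowed(datetime(2026, 10, 5, 22, 0), 10))   # False: inside curfew
print(is_allowed(datetime(2026, 10, 5, 12, 0), 60))   # False: cap reached
```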

Importantly, participants will be randomized by school year group rather than individually, so that, for example, year 8 pupils serve as the control group while year 9 pupils face restrictions. The aim is to ensure that friends in the same year experience the same rules. “If a child’s social media is restricted, but their friends are active online post-curfew, they may feel excluded,” Orben explains.

Lewar emphasizes that the trial was designed collaboratively with teens. “They opposed a blanket ban,” he notes.

The comprehensive study will span six weeks around October, with preliminary results anticipated in mid-2027.

Orben emphasizes that this trial will yield more precise data on teenage social media habits through app monitoring rather than relying on self-reported information. The team will also gather data on anxiety, sleep quality, socializing, happiness, body image, school absenteeism, and experiences of bullying.

Etchells asserts the necessity of understanding whether restrictions or bans are beneficial or detrimental to youth. “The honest answer is we don’t know. That’s why research like this is critical.”

This initiative is welcomed due to the absence of high-quality studies in this area. A recent report from the UK Department for Science, Innovation, and Technology highlighted the need for quality causal evidence linking young people’s mental health to digital technology use, especially concerning social media, smartphones, and AI chatbots.

As stated by Margarita Panayiotou from the University of Manchester, engaging with youth is essential in social media research. Her findings show that teens often find ways to circumvent outright bans, making testing restrictions a more viable option. This approach may also be more ethical, as the harm caused by a ban is not yet understood.

“Teens view social media as a space for self-discovery,” says Panayiotou, highlighting concerns about platform distrust, feelings of loss of control, and unintentional overuse. They also report struggles with online judgment, body comparisons, and cyberbullying.

According to Etchells and Panayiotou, the primary challenge for governments is to compel tech companies to ensure safer social media environments for youth.

The Online Safety Act 2023 (OSA) requires technology firms such as TikTok, Meta (which owns Facebook, WhatsApp, and Instagram), and Google (which owns YouTube) to improve user safety. “Effective enforcement of the OSA could address many existing issues,” asserts Etchells.

Topics:

  • Mental Health
  • Social Media

Source: www.newscientist.com

Australia’s Social Media Ban Encounters Immediate Challenges and Criticism

Instagram alerts that accounts for users under 16 will be terminated

Stringer/AFP (via Getty Images)

Australia’s groundbreaking social media restrictions on users under 16 have officially started, unveiling some contentious issues from the inaugural day of the new law. Notably, some minors managed to sidestep age verification measures intended to prevent them from accessing their accounts.

This initiative has garnered backing from numerous parents who hope it will mitigate online harassment, promote outdoor activities, and lessen exposure to inappropriate material. However, critics argue that the ban may be ineffective or even counterproductive, as highlighted by a variety of satirical memes.

Andrew Hammond of KJR, a Canberra consultancy where he oversaw age-verification work for the Australian government, is watching closely to see how the situation evolves. He said he had spoken to several parents of children covered by the ban, none of whom had yet lost access to their accounts. “Some have reported they circumvented it or haven’t yet been prompted,” Hammond stated, though he anticipates more accounts will be disabled next week.

Meta, the parent company of Instagram and Facebook, began removing accounts about a week ago. “As of today, we have disabled all accounts confirmed to be under 16,” a spokesperson said. “As the social media ban in Australia takes effect, we will preclude access to Instagram, Threads, and Facebook for teenagers known to be under this age and will restrict newcomers under 16 from setting up accounts.”

While Meta did not disclose the specific number of accounts terminated, a representative referred to earlier data indicating that approximately 150,000 users aged 13 to 15 are active on Facebook, and around 350,000 on Instagram in Australia. This implies that at least half a million accounts belonging to young Australians have been deleted on these two platforms alone.

The company stated its dedication to fulfilling its legal responsibilities, yet many concerns voiced by community organizations and parents have already materialized on the first day of the ban: the risk of isolating vulnerable youth from supportive online communities, nudging them toward less regulated apps and corners of the web, inconsistent age-verification practices, and widespread indifference to compliance among teenagers and their parents.

Hammond raised further questions, particularly about minors under 16 who are visiting or studying in Australia. The government has clarified that the rules apply equally to visiting minors. While Australian accounts have been deleted, Hammond suspects that visitors’ accounts may simply be temporarily suspended. “It’s been merely a few hours since the ban was enacted, so there remains substantial uncertainty about its implementation,” he stated.

Australia and other nations are closely monitoring the repercussions as the law is fully enforced. “We will soon discover how attached minors under 16 are to social media and the actual situation that unfolds,” he said. He speculated that perhaps “they will venture outside to play sports.” Nonetheless, he warned, “if their lives are deeply intertwined with it, we may witness a plethora of attempts to evade these restrictions.”


Source: www.newscientist.com

How Australian Teens Are Finding Ways to Navigate Social Media Bans


Australia will restrict social media use for individuals under 16 starting December 10th.

Mick Tsikas/Australian Associated Press/Alamy

A historic initiative to prohibit all children under 16 from accessing social media is about to unfold in Australia, but teens are already pushing back.

Initially announced last November, this prohibition, proposed by Australian Prime Minister Anthony Albanese, will commence on December 10th. On this date, all underaged users of platforms like Instagram, Facebook, TikTok, YouTube, and Snapchat will have their accounts removed.

Companies operating social media platforms may incur fines up to A$49.5 million (£25 million) if they do not comply by expelling underage users. Nonetheless, neither parents nor children face penalties.

This regulation is garnering global attention. The European Commission is considering a similar rule. So far, discussions have centered on implementation methods, potential age verification technologies, and the possible adverse effects on teens who depend on social media to engage with their peers.

As the deadline approaches, teens are already preparing to defy the restrictions. A prominent example is two 15-year-olds from New South Wales, Noah Jones and Macy Neyland, who are challenging the social media ban in the nation’s highest court.

“The truth is, kids have been devising ways to bypass this ban for months, but the media is only catching on now that the countdown has begun,” Jones remarked.

“I know kids who stash their family’s old devices in lockers at school. They transferred the account to a parent or older sibling years ago and verified it using an adult ID without their parents knowing. We understand algorithms, so we follow groups with older demographics like gardening or walking for those over 50. We engage in professional discussions to avoid detection.”

Jones and Neyland first sought an injunction to postpone the ban but opted instead to present their opposition as a specific constitutional challenge.

On December 4, they secured a crucial victory as the High Court of Australia agreed to hear their case as early as February. Their primary argument contends that the ban imposes an undue burden on their implied freedom of political speech. They argue this policy would compromise “significant zones of expression and engagement in social media interactions for 13- to 15-year-olds.”

Supported by the Digital Freedom Project, led by New South Wales politician John Ruddick, the duo is rallying for their cause. “I’ve got an 11-year-old and a 13-year-old, and they’ve been mentioning for months that it’s a hot topic on the playground. They’re all active on social media, reaping its benefits,” Ruddick shared.

Ruddick noted that children are already brainstorming methods to circumvent the ban, exploring options like virtual private networks (VPNs), new social media platforms, and tactics to outsmart age verification processes.

Catherine Page Jeffery, a researcher at the University of Sydney, said that the impending ban is starting to feel tangible for teenagers. “Up until now, it seems young people hadn’t quite believed that this was actually happening,” she commented.

She adds that her children have already begun discussing alternatives with peers. Her younger daughter has downloaded another social media app called Yope, which is not yet on the government’s watch list, unlike several others, such as Coverstar and Lemon8, which have been told to assess whether they fall under the new rules.

Lisa Given, a researcher at RMIT University in Melbourne, believes that as children drift to newer, less known social media platforms, parents will struggle to monitor their children’s online activities. She speculated that many parents may even assist their children in passing age verification hurdles.

Susan McLean, one of Australia’s foremost cyber safety experts, argued that the ban will lead to a “whack-a-mole” scenario as new apps emerge, kids flock to them, and the government keeps adding them to the banned list. She insists that rather than taking social media away from teenagers, governments should compel large companies to fix the algorithms that expose children to inappropriate content.

“The government’s logic is deeply flawed,” she pointed out. “You can’t ban your way to safety unless you ban every communications platform for kids.”

McLean shared a poignant quote from a teenager who remarked, “If the aim of this ban is to protect children from harmful adults, why should I have to leave while those harmful adults remain?”

Noah Jones, one of the teen complainants, stated it bluntly: “There’s no greater news source than what you can find in just 10 minutes on Instagram,” he insisted. “Yet, we faced bans while perpetrators went unpunished.”


Source: www.newscientist.com

Teens Seek Alternatives to Australia’s Social Media Ban: Where Will They Turn?

As Australia readies itself to restrict access to 10 major social media platforms for users under 16, lesser-known companies are targeting the teen demographic, often engaging underage influencers for promotional content.

“With a social media ban on the horizon, I’ve discovered a cool new app we can switch to,” stated one teenage TikTok influencer during a sponsored “collaboration” video on the platform Coverstar.

New social media regulations taking effect in Australia on December 10 will prohibit all users under 16 from accessing TikTok, Instagram, Snapchat, YouTube, Reddit, Twitch, Kick, and X.

It remains uncertain how effective this ban will be, as numerous young users may attempt to bypass it. Some are actively seeking alternative social media platforms.


Alongside Coverstar, other lesser-known apps like Lemon8 and Yope have recently surged in popularity, currently sitting at the top two spots in Apple’s lifestyle category in Australia.


The government has stated that the list of banned apps is “dynamic,” meaning additional platforms may be added over time. Experts have voiced concerns that this initiative might lead to a game of “whack-a-mole,” pushing children and teens into less visible corners of the internet.

Dr. Catherine Page Jeffery, a specialist in digital media and technology at the University of Sydney, remarked: “This legislation may inadvertently create more dangers for young people. As they migrate to less regulated platforms, they might become more secretive about their social media activities, making them less likely to report troubling content or harmful experiences to their parents.”

Here’s what we know about some of the apps that kids are opting for.

Coverstar

Coverstar, a video-sharing app based in the U.S., identifies itself as “a new social app for Generation Alpha that emphasizes creativity, utilizes AI, and is deemed safer than TikTok.” Notably, it is not subject to the social media ban and currently holds the 45th position in Apple’s Australian download rankings.



Children as young as 4 can use this platform to livestream, post videos, and comment. For users under 13, the app requires them to record themselves stating, “My name is ____. I give you permission to use Coverstar,” which the app then verifies. Adults are also permitted to create accounts, post content, and engage in comments.

Similar to TikTok and Instagram, users can spend real money on virtual “gifts” for creators during live streams. Coverstar also offers a “premium” subscription featuring additional functionalities.

The app highlights its absence of direct messaging, adherence to an anti-bullying policy, and constant monitoring by AI and human moderators as key safety measures.

Dr. Jennifer Beckett, an authority on online governance and social media moderation at the University of Melbourne, raised concerns regarding Coverstar’s emphasis on AI: “While AI use is indeed promising, there are significant limitations. It’s not adept at understanding nuance or context, which is why human oversight is necessary. The critical question is: how many human moderators are there?”

Coverstar has been approached for comment.

Lemon8

Lemon8, a photo and video sharing platform reminiscent of Instagram and owned by TikTok’s parent company, ByteDance, has experienced a notable rise in user engagement recently.

Users can connect their TikTok accounts to easily transfer content and follow their favorite TikTok creators with a single click.

However, on Tuesday, Australian eSafety Commissioner Julie Inman-Grant revealed that her office has advised Lemon8 to conduct a self-assessment to ascertain if it falls under the new regulations.

Yope

With only 1,400 reviews on the Apple App Store, Yope has emerged as a “friends-only private photo messaging app” that is positioned as an alternative to Snapchat after the ban.

Skip past newsletter promotions

Bahram Ismailau, co-founder and CEO of Yope, described the company as “a small team dedicated to creating the ideal environment for teenagers to share images with friends.”

Similar to Lemon8, Australia’s eSafety Commissioner also reached out to Yope, advising a self-assessment. Ismailau informed the Guardian that he had not received any communication but is “prepared to publicly express our overall eSafety policy concerning age-restricted social media platforms.”

He claimed that, after conducting a self-assessment, Yope determined it fully meets the law’s exemption for apps designed solely for messaging, email, video calls, and voice calls.




“Yope functions as a private photo messenger devoid of public content,” asserted Ismailau. “It’s comparable in security to iMessage or WhatsApp.”

According to Yope’s website, the app is designed for users aged 13 and above, with those between 13 and 18 required to engage a parent or guardian. However, the Guardian successfully created an account for a fictitious four-year-old named Child Babyface without needing parental consent.

A mobile number is mandatory for account creation.

Ismailau did not address inquiries about under-13 accounts directly but confirmed that plans are underway to update the privacy policy and terms of service to better reflect the app’s actual usage and intended audience.


Red Note

The Chinese app Red Note, also referred to as Xiaohongshu, attracted American users when TikTok faced a temporary ban in the U.S. earlier this year.

Beckett noted that the app might provide a safe space, considering that “Social media is heavily regulated in China, which is reflected in the content requiring moderation.”

“Given TikTok’s previous issues with pro-anorexia content, it’s clear that the platform has faced its own challenges,” she added.

Nonetheless, cybersecurity experts highlight that the app collects extensive personal information and could be legally obligated to share it with third parties, including the Chinese government.

Despite the increasing number of restricted social media services, specialists assert that governments are underestimating children’s eagerness to engage with social media and their resourcefulness in doing so.

“We often overlook the intelligence of young people,” Beckett remarked. “They are truly adept at finding ways to navigate restrictions.”

Anecdotal evidence suggests that some kids are even exploring website builders to create their own forums and chat rooms; alternatives include using shared Google Docs for communication.

“They will find ways to circumvent these restrictions,” Beckett asserted. “They will be clever about it.”




Source: www.theguardian.com

YouTube Aligns with Australia’s Under-16 Social Media Ban; Lemon8 Implements Access Restrictions

YouTube will fall under the federal government’s ban on social media for users under 16, but its parent company Google has stated that the law “fails to ensure teens’ safety online” and “misunderstands” the way young people engage with the internet.

Communications Minister Anika Wells responded by emphasizing that YouTube must maintain a safe platform, describing Google’s concerns as “absolutely bizarre.”

In a related development, Guardian Australia has reported that Lemon8, a recently popular social media app not affected by the ban, will implement a restriction of users to those over 16 starting next week. The eSafety Commissioner has previously indicated that the app will be closely scrutinized for any potential bans.


Before Wells’ address at the National Press Club on Wednesday, Google announced it would start signing out under-16 users from its platform on December 10. However, the company cautioned that this might result in children and their parents losing access to safety features.

Initially, Google opposed the inclusion of YouTube, which had been omitted from the framework, in the ban and hinted it might pursue legal action. Nevertheless, the statement released on Wednesday did not provide further details on that front, and Google officials did not offer any comments.

Rachel Lord, Google’s senior manager of Australian public policy, stated in a blog post that users under 16 could view YouTube videos while logged out, but they would lose access to features that require signed-in accounts, such as “subscriptions, playlists, likes,” and standard health settings like “breaks” and bedtime reminders.

Additionally, the company warned that parents “will no longer be able to manage their teens’ or children’s accounts on YouTube,” including blocking certain channels in content settings.

Lord commented, “This rushed regulation misunderstands our platform and how young Australians use it. Most importantly, this law does not fulfill its promise of making children safer online; rather, it will render Australian children less safe on YouTube.”

While Lord did not address potential legal action, she expressed a commitment to finding more effective ways to safeguard children online.

Wells mentioned at the National Press Club that parents could adjust controls and safety settings on YouTube Kids, which is not included in the ban.

“It seems odd that YouTube frequently reminds us how unsafe the platform is when logged out. If YouTube asserts that its content is unsuitable for age-restricted users, it must address that issue,” she remarked.




Anika Wells will address the National Press Club on Wednesday. Photo: Mick Tsikas/AAP

Wells also acknowledged that the implementation of the government’s under-16 social media ban could take “days or even weeks” to properly enforce.

“While we understand it won’t be perfect immediately, we are committed to refining our platform,” Wells stated.

Wells commended the advocacy of families affected by online bullying or mental health crises, asserting that the amendments would “shield Generation Alpha from the peril of predatory algorithms.” She suggested that social media platforms intentionally target teens to maximize engagement and profits.

“These companies hold significant power, and we are prepared to reclaim that authority for the welfare of young Australians beginning December 10,” asserted Wells.


Meta has informed users of Facebook, Instagram, and Threads, along with Snapchat, about forthcoming changes. Upon reaching out to Guardian Australia, a Reddit spokesperson mentioned that they had no new information. Meanwhile, X, TikTok, YouTube, and Kick have not publicly clarified their compliance with the law nor responded to inquiries.


Platforms that do not take appropriate measures to exclude users under 16 may incur fines of up to $50 million. Concerns have been raised about the timing and execution of the ban, including questions about the age verification process, and at least one legal challenge is in progress.


The government believes it is essential to signal to parents and children the importance of avoiding social media, even if some minors may manage to bypass the restrictions.

Wells explained that it would take time to impose $50 million fines on tech companies, noting that the e-safety commissioner will request information from platforms about their efforts to exclude underage users starting December 11, and will scrutinize data on a monthly basis.

At a press conference in Adelaide on Tuesday, Wells anticipated that additional platforms would be added to the under-16 ban if children were to migrate to sites not currently on the list.

She advised the media to “stay tuned” for updates regarding the Instagram-like app Lemon8, which is not subject to the ban. Guardian Australia understands that the eSafety Commission has communicated with Lemon8, owned by TikTok’s parent company, ByteDance, indicating that the platform will be monitored for potential future inclusion once the plan is enacted.

Guardian Australia can confirm that Lemon8 will restrict its user base to those over 16 starting December 10.

“If platforms like LinkedIn become hubs of online bullying, targeting 13- to 16-year-olds and affecting their mental and physical health, we will address that issue,” Wells stated on Tuesday.

“That’s why all platforms are paying attention. We need to be prompt and flexible.”

In Australia, the crisis support service Lifeline is available on 13 11 14. In the UK and Ireland, Samaritans can be contacted on freephone 116 123 or by email at jo@samaritans.org or jo@samaritans.ie. In the US, call or text the 988 Suicide and Crisis Lifeline on 988, or chat at 988lifeline.org. Other international helplines can be found at befrienders.org




Source: www.theguardian.com

Instagram’s Age Verification: Adults with Mustaches Over 16—But What About 13-Year-Olds?

Instagram’s method for confirming if a user surpasses 16 years old is fairly straightforward, especially when the individual is evidently an adult. However, what occurs if a 13-year-old attempts to alter their birth date to seem older?

In November, Meta informed Instagram and Facebook users whose birth dates are registered as under 16 that their accounts would be disabled as part of Australia’s prohibition on social media use for children. This rule will take effect on December 10, with Meta announcing that access for users younger than 16 will start being revoked from December 4.


Dummy social media accounts were created on phones as part of Guardian Australia’s investigation into what content different age groups access on the platform.




Instagram notification sent to a test account with an age set to 15. Photo: Instagram/Meta

One account was created on Instagram with the age set at 15 to observe the impact of the social media ban for users under 16. Instagram later stated: “Under Australian law, you will soon be unable to use social media until you turn 16.”

“You cannot use an Instagram account until you’re 16, which means your profile will not be visible to you or anyone else until that time.”

“We’ll inform you when you can access Instagram again.”




Notice informing that test account users will lose access due to the Australian social media ban. Photo: Instagram/Meta

The account was then presented with two choices: either download account data and deactivate until the user is 16, or verify their date of birth.




Instagram notification sent to test account set to age 15 regarding date of birth review options. Photo: Instagram/Meta

The second option enables users to submit a “video selfie” to validate that the account holder is older than 16. The app activated the front-facing camera and prompted the adult test user, distinguished by a thick beard, to shift their head side to side. This resembles the authentication method used for face unlock on smartphones.




Explanation on how the “Video Selfie” feature estimates the user’s age. Photo: Instagram/Meta

The notification indicated that the verification process usually takes 1-2 minutes, but may extend up to 48 hours.




Notification sent to the test account following the date of birth verification request. Photo: Instagram/Meta

The app promptly indicated that accounts created by adult test users were recognized as 16 years or older.




A notification confirming the user’s date of birth was updated by Instagram. Photo: Instagram/Meta

In another test, a 13-year-old boy created a fresh account on a device with no previous Instagram installation, using a birth date that clearly indicated he was under 16. There was no immediate alert regarding the upcoming social media ban.

When the child attempted to change their date of birth to reflect an adult age, the same video selfie facial age estimation process was performed.


Within a minute, it replied, “We couldn’t verify your age,” and requested a government-issued ID for date of birth verification.

Facial age estimation testing during Australia’s Age Assurance Technology Trial found that people over 21 were much less likely to be misidentified as under 16, while those closer to 16 and people from minority groups experienced higher rates of false positives and negatives.


Users who have not been notified may already have been assessed by Meta as 18 or older, based on signals such as their stated birth date, how long the account has existed, and other user activity.

A Meta representative mentioned that the experiment demonstrated that the process functions as expected, with “adult users being capable of verifying their age and proceeding, while users under 16 undergo an age check when attempting to alter their birth date.”

“That said, we must also recognize the findings of the Age Assurance Technology Trial, which highlights the specific difficulties of age verification at the 16-year threshold and anticipates that the method may occasionally be imperfect,” the spokesperson added.

Last month, Communications Minister Anika Wells acknowledged the potential challenges confronting the implementation of the ban.

“We recognize that this law isn’t flawless, but it is essential to ensure that there are no gaps,” she stated.

Meta collaborates with Yoti for age verification services. The company asserts on its website that facial images will be destroyed once the verification process concludes.

The ban impacts Meta’s Facebook, Instagram, and Threads platforms, as well as others such as Kick, Reddit, Snapchat, TikTok, Twitch, X, and YouTube.




Source: www.theguardian.com

How Major Tech Firms Are Cultivating Media Ecosystems to ‘Shape the Online Narrative’

The introduction to tech mogul Alex Karp’s interview on Sourcery, a YouTube show from the digital finance platform Brex, features him waving the American flag to a remix of AC/DC’s “Thunderstruck.” While strolling through the company’s offices, Karp avoided questions about Palantir’s contentious ties with ICE, focusing instead on the company’s strengths while playfully brandishing a sword and recounting how he reburied the remains of his childhood dog, Rosita, near his current residence.

“It’s really lovely,” comments host Molly O’Shea as she engages with Karp.

For those wanting insights from key figures in the tech sector, platforms like Sourcery provide a refuge for an industry that’s increasingly cautious, if not openly antagonistic, towards critical media. Some new media initiatives are driven by the companies themselves, while others occupy niches favored by the tech billionaire cohort. In recent months, prominent figures like Mark Zuckerberg, Elon Musk, Sam Altman, and Satya Nadella have participated in lengthy, friendly interviews, with companies like Palantir and Andreessen Horowitz launching their own media ventures this year.

A significant portion of Americans harbor distrust towards big tech and believe artificial intelligence is detrimental to society. Silicon Valley is crafting its own alternative media landscape, where CEOs, founders, and investors take center stage. What began as a handful of enthusiastic podcasters has evolved into a comprehensive ecosystem of publications and shows, supported by some of the leading entities in tech.

Pro-tech influencers, such as podcast host Lex Fridman, have long cultivated close ties with figures like Elon Musk, yet some companies this year opted to cut out intermediaries entirely. In September, venture capital firm Andreessen Horowitz launched the a16z blog on Substack, where general partner Katherine Boyle has highlighted her longstanding friendship with JD Vance. The firm’s podcast has surged past 220,000 subscribers on YouTube and last month featured OpenAI CEO Sam Altman; Andreessen Horowitz is one of the industry’s leading investors.

“What if the future of media is shaped not by algorithms or traditional institutions, but by independent voices directly interacting with audiences?” the company posited in its Substack announcement. Andreessen Horowitz previously invested $50 million in digital media company BuzzFeed with a similar ambition; that bet ultimately sank to penny-stock levels.

The a16z Substack also revealed this month its new eight-week media fellowship aimed at “operators, creators, and storytellers shaping the future of media.” This initiative involves collaboration with a16z’s new media team, characterized as a collective of “online legends” aiming to furnish founders with the clout, flair, branding, expertise, and momentum essential for winning the online narrative.

In parallel to a16z’s media endeavors, Palantir launched a digital and print journal named Republic earlier this year, emulating the format of academic journals and think tank publications like Foreign Affairs. The journal is financially backed by the nonprofit Palantir Foundation for Defense Policy and International Affairs, headed by Karp, who reportedly contributes just 0.01 hours a week, as per his 2023 tax return.

“Too many individuals who shouldn’t have a voice are amplified, while those who ought to be heard are sidelined,” remarked Republic, which boasts an editorial team comprised of high-ranking Palantir executives.

Among the articles featured in Republic is a piece criticizing U.S. copyright restrictions for hindering AI leadership, alongside another by two Palantir employees reiterating Karp’s affirmation that Silicon Valley’s collaboration with the military benefits society at large.

Republic joins a burgeoning roster of pro-tech outlets like Arena Magazine, launched late last year by Austin-based venture capitalist Max Meyer. Arena’s motto nods to “The New Needs Friends” line from Disney’s Ratatouille.

“Arena avoids covering ‘The News.’ Instead, we spotlight The New,” reads the editor’s letter in the inaugural issue. “Our mission is to uplift those incrementally, or at times rapidly, bringing the future into the present.”

This sentiment echoes that of founders who have taken issue with publications like Wired and TechCrunch for their overly critical perspectives on the industry.

“Historically, magazines that covered this sector have become excessively negative. We plan to counter that by adopting a bold and optimistic viewpoint,” Meyer stated during an appearance on Joe Lonsdale’s podcast.

Certain facets of emerging media in the tech realm weren’t established as formal corporate media extensions but rather emerged organically, even while sharing a similarly positive tone. The TBPN video podcast, which interprets the intricacies of the tech world as high-stakes spectacles akin to the NFL Draft, has gained swift influence since its inception last year. Its self-aware yet protective atmosphere has drawn notable fans and guests, including Meta CEO Mark Zuckerberg, who conducted an in-person interview to promote Meta’s smart glasses.

Another podcaster, 24-year-old Dwarkesh Patel, has built a mini-media empire in recent years with extensive collaborative discussions featuring tech leaders and AI researchers. Earlier this month, Patel interviewed Microsoft CEO Satya Nadella and toured one of the company’s newest data facilities.


Elon Musk pioneered this approach to pro-tech media engagement. Since his acquisition of Twitter in 2022, the platform has throttled links to major news outlets, and its press address auto-responds to reporter inquiries with a poop emoji. Musk grants few interviews to mainstream media yet sits for lengthy discussions with friendly hosts like Lex Fridman and Joe Rogan, facing minimal challenge to his viewpoints.

Musk’s inclination to cultivate a media bubble around himself illustrates how such content can foster a disconnect from reality and promote alternative facts. His long-standing criticism of Wikipedia spurred him to create Grokipedia, an AI replica generating blatant falsehoods and results aligning with his far-right perspective. Concurrently, Musk’s chatbot Grok has frequently echoed Musk’s opinions, even going to absurd lengths to flatter him, such as asserting last week that Musk is healthier than LeBron James and could defeat Mike Tyson in a boxing match.

The emergence of new technology-centric media is part of a broader transformation in how celebrities portray themselves and the access they grant journalists. The tech industry has a historical aversion to media scrutiny, a trend amplified by scandals like the Facebook Files, which unveiled internal documents and potential harms. Journalist Karen Hao exemplified the tech sector’s sensitivity to negative press, noting in her 2025 book “Empire of AI” that OpenAI refrained from engaging with her for three years after a critical article she wrote in 2019.

The strategy of tech firms building their own sympathetic media mirrors the entertainment sector’s approach of recent years. Press tours for film and album promotions have long been tightly controlled, and actors and musicians increasingly favor looser formats like “Hot Ones.” Political figures are adopting a similar playbook, gaining access to fresh audiences and a friendlier environment for self-promotion, as shown by Donald Trump’s 2024 campaign courting podcasters like Theo Von, and California Governor Gavin Newsom launching his own political podcast this year.

While much of this emerging media does not aim to unveil misconduct or confront the powerful, it still holds certain merits. The content produced by the tech sector often reflects the self-image of its elite and the world they aspire to create, within an industry characterized by minimal government oversight and fewer probing inquiries into operational practices. Even the simplest of questions offer insights into the minds of individuals who primarily inhabit secured boardrooms and gated environments.

“If you were a cupcake, what kind would you be?” O’Shea asked Karp at one point.

“I prefer not to be a cupcake, as I don’t want to be consumed,” Karp replied. “I resist being a cupcake.”



Source: www.theguardian.com

European Parliament Advocates Prohibition of Social Media for Those Under 16

The European Parliament has proposed that children under the age of 16 should be prohibited from using social media unless their parents grant permission.

On Wednesday, MEPs overwhelmingly approved a resolution concerning age restrictions. While this resolution isn’t legally binding, the urgency for European legislation is increasing due to rising concerns about the mental health effects on children from unfettered internet access.

The European Commission, which proposes EU laws, is already exploring a social media ban for under-16s along the lines of Australia’s, which is due to take effect next month.

Commission Chair Ursula von der Leyen indicated in a September speech that she would closely observe the rollout of Australia’s initiative. She condemned “algorithms that exploit children’s vulnerabilities to foster addiction” and stated that parents often feel overwhelmed by “the flood of big tech entering our homes.”

Ms. von der Leyen pledged to establish an expert panel by the year’s end to provide guidance on effectively safeguarding children.

There’s increasing interest in limiting children’s access to social media and smartphones. A report commissioned by French President Emmanuel Macron last year recommended that children should not have smartphones until age 13 and should refrain from using social media platforms like TikTok, Instagram, and Snapchat until they turn 18.

Danish Social Democratic Party lawmaker Christel Schaldemose, who authored the resolution, stated that it’s essential for politicians to act in protecting children. “This is not solely a parental issue. Society must take responsibility to ensure that platforms are safe environments for minors, but only if they are above a specified age.”

Her report advocates for the automatic disabling of addictive elements like infinite scrolling, auto-playing videos, excessive notifications, and rewards for frequent use when minors access online platforms.

The resolution emphasizes that “addictive design features are typically integral to the business models of platforms, particularly social media.” An early draft of Schaldemose’s report referenced a study indicating that one in four children and young people exhibit “problematic” or “dysfunctional” smartphone use, resembling addictive behavior. It states that children should be 16 before accessing social media, although parents can consent from age 13.

The White House has urged the EU to retract its digital regulations, and supporters of the social media ban have contextualized their votes accordingly. U.S. Commerce Secretary Howard Lutnick mentioned at a meeting in Brussels that EU regulations concerning tech companies should be re-evaluated in exchange for reduced U.S. tariffs on steel and aluminum.

Stéphanie Yon-Courtin, a French lawmaker from Macron’s party, responded to Lutnick’s visit by asserting that Europe is not a “regulatory colony.” After the vote, she remarked: “Our digital laws are not negotiable. We will not compromise child protections just because a foreign billionaire or tech giant attempts to influence us.”

The EU is already committed to shielding internet users from online dangers like misinformation, cyberbullying, and unlawful content through the Digital Services Act. However, the resolution highlights existing gaps in the law that need to be addressed to better protect children from online risks, such as addictive design features and financial incentives to become influencers.

Schaldemose acknowledged that the law, of which she is a co-author, is robust, “but we can enhance it further because we remain less specific and less defined, particularly in regards to addictive design features and harmful dark pattern practices.”


Dark patterns refer to design elements in apps and websites that manipulate user decisions, such as countdown timers pushing purchases or persistent requests to enable location tracking or notifications.

Schaldemose’s resolution was endorsed by 483 members, while 92 voted against it and 86 abstained.

Eurosceptic lawmakers criticized the initiative, arguing that it would overreach if the EU imposes a ban on children’s access to social media. “Decisions about children’s online access should be made as closely as possible to families in member states, not in Brussels,” stated Kosma Złotowski, a Polish member of the European Conservative and Reform Group.

The resolution was adopted just a week after the European Commission announced a delay in overhauling the Artificial Intelligence Act and other digital regulations that aim to relax rules for businesses under the guise of “simplification.”

Schaldemose acknowledged the importance of not overwhelming the legislative system, but added, “There is a collective will to do more regarding children’s protection in the EU.”

Source: www.theguardian.com

Ofcom Calls on Social Media Platforms to Combat Fraud and Curb Online ‘Pile-Ons’

New guidelines have urged social media platforms to curtail internet “pile-ons” to better safeguard women and girls online.

Ofcom, Britain’s communications regulator, implemented guidance on Tuesday aimed at tackling misogynistic abuse, coercive control, and the non-consensual sharing of intimate images, with a focus on minimizing online harassment of women.

The measures include a recommendation that tech companies limit the number of replies to posts on platforms like X, a strategy Ofcom believes will reduce incidents in which individual users are inundated with abusive responses.
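
Ofcom’s guidance does not prescribe a mechanism, but one plausible implementation of such a limit is a per-post sliding-window counter. The sketch below is purely illustrative: the threshold, window length, and function names are all invented.

```python
import time
from collections import defaultdict

MAX_REPLIES_PER_WINDOW = 50   # invented threshold: replies allowed per post
WINDOW_SECONDS = 60.0         # invented window length

_reply_times: dict[str, list[float]] = defaultdict(list)

def allow_reply(post_id: str, now: float | None = None) -> bool:
    """Admit a reply only while the post has had fewer than
    MAX_REPLIES_PER_WINDOW replies in the last WINDOW_SECONDS."""
    now = time.time() if now is None else now
    # Keep only replies that fall inside the current window.
    recent = [t for t in _reply_times[post_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_REPLIES_PER_WINDOW:
        _reply_times[post_id] = recent
        return False  # throttle the pile-on
    recent.append(now)
    _reply_times[post_id] = recent
    return True
```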


Additional measures proposed by Ofcom include utilizing databases of images to prevent the non-consensual sharing of intimate photos—often referred to as ‘revenge porn’.

The regulator advocates “hash matching” technology to help platforms remove reported images. The system converts user-reported images or videos into “hashes”, compact digital fingerprints, and cross-references them against a database of known illegal content, enabling copies of harmful images to be identified and removed.
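
To make the mechanics concrete, here is a minimal sketch of the matching step, assuming a pre-populated hash database. The hash value and file paths are hypothetical, and a production system would use a perceptual hash (such as PhotoDNA or PDQ) that survives resizing and re-encoding, rather than the exact cryptographic hash used here to keep the example self-contained.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hashes of known illegal images. Real systems
# store perceptual hashes, which tolerate minor edits; SHA-256 matches
# only byte-identical copies.
KNOWN_HARMFUL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def image_hash(path: Path) -> str:
    """Convert an image file into its digital identifier (hex digest)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def should_remove(upload: Path) -> bool:
    """Cross-reference an uploaded image against the hash database."""
    return image_hash(upload) in KNOWN_HARMFUL_HASHES
```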

These recommendations were put forth under the Online Safety Act (OSA), a significant law designed to shield children and adults from harmful online content.

While the advice is not obligatory, Ofcom is urging social media companies to follow it, announcing plans to release a report in 2027 assessing individual platforms’ responses to the guidelines.

The regulator indicated that the OSA could be reinforced if the recommendations are not acted upon or prove ineffective.

“If their actions fall short, we will consider formally advising the government on necessary enhancements to online safety laws,” Ofcom stated.

Dame Melanie Dawes, Ofcom’s chief executive, has encountered “shocking” reports of online abuse directed at women and girls.


Melanie Dawes, Ofcom’s chief executive. Photo: Zuma Press Inc/Alamy

“We are sending a definitive message to tech companies to adhere to practical industry guidance that aims to protect women from the genuine online threats they face today,” Dawes stated. “With ongoing support from our campaigners, advocacy groups, and expert partners, we will hold companies accountable and establish new benchmarks for online safety for women and girls in the UK.”

Ofcom’s other recommendations include prompts asking users to reconsider before posting abusive content, “time-outs” for frequent offenders, and preventing misogynistic users from generating ad revenue from their posts. The guidance also calls for letting users swiftly block or mute several accounts at once.

These recommendations conclude a process that started in February, when Ofcom conducted a consultation that included suggestions for hash matching. However, more than a dozen guidelines, like establishing “rate limits” on posts, are brand new.

Internet Matters, a nonprofit organization dedicated to children’s online safety, argued that the government should make the guidance mandatory, cautioning that many tech companies might otherwise ignore it. Ofcom is considering whether to make its hash matching recommendation compulsory.

Rachel Huggins, co-chief executive of Internet Matters, remarked: “We know many companies will disregard this guidance simply because it is not legally binding, leading to continued unacceptable levels of online harm faced by women and girls today.”

Source: www.theguardian.com

Roblox Launches Age Verification Feature in Australia, Advocating Against Child Social Media Ban

Roblox maintains that Australia’s forthcoming social media restrictions for users under 16 should not extend to its platform, as it rolls out a new age verification feature designed to block minors from communicating with unknown adults.

The feature, which is being launched first in Australia, lets users verify their age using Persona’s facial age estimation technology built into the Roblox app. It uses the device’s camera to analyze facial features and produce a live age estimate.


This feature will become compulsory in Australia, the Netherlands, and New Zealand starting the first week of December, with plans to expand to other markets in early January.

After completing the age verification, users will be categorized into one of six age groups: under 9, 9-12, 13-15, 16-17, 18-20, or 21 and older.

Roblox has stated that users within each age category will only be able to communicate with peers in their respective groups or similarly aged groups.
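
Interpreting “similarly aged groups” as the same or an adjacent band is an assumption on our part; Roblox has not published its exact adjacency rules. Under that assumption, a minimal sketch of the gating logic might look like this (all names are illustrative):

```python
# The six age bands Roblox describes, youngest to oldest.
AGE_BANDS = ["under 9", "9-12", "13-15", "16-17", "18-20", "21+"]

# Lower bound of every band after the first.
BAND_THRESHOLDS = [9, 13, 16, 18, 21]

def band_index(age: int) -> int:
    """Map an estimated age onto one of the six bands (0..5)."""
    return sum(age >= t for t in BAND_THRESHOLDS)

def may_chat(age_a: int, age_b: int) -> bool:
    """Allow chat only within the same or an adjacent band, an assumed
    reading of 'similarly aged groups'; Roblox's real rules may differ."""
    return abs(band_index(age_a) - band_index(age_b)) <= 1

# A 14-year-old (band "13-15") could chat with the 9-12 and 16-17 bands,
# but not with anyone 18 or over.
assert may_chat(14, 10) and may_chat(14, 17) and not may_chat(14, 19)
```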


These changes were initially proposed in September and received positive feedback from Australia’s eSafety Commissioner, who has been in discussions with Roblox for several months regarding safety concerns on the platform, labeling this as a step forward in enhancing safety measures.

A recent Guardian Australia investigation documented a week of virtual harassment and violence experienced by a reporter whose Roblox profile was set to eight years old.

Regulatory pressure is mounting for Roblox to be included in Australia’s under-16 social media ban, which takes effect on December 10. Although there are exemptions for gaming platforms, Julie Inman Grant said earlier this month that eSafety is reviewing chat and messaging functions in games.

“If online gameplay is the primary or sole purpose, would kids still utilize the messaging feature for communication if it were removed? Probably not,” she asserted.

During a discussion with Australian reporters regarding these impending changes, Roblox’s chief safety officer, Matt Kaufman, characterized Roblox as an “immersive gaming platform.” He explained, “I view games as a framework for social interaction. The essence lies in bringing people together and spending time with one another.”

When asked if this suggests Roblox should be classified as a social media platform subject to the ban, Kaufman responded that Roblox considers social media as a space where individuals post content to a feed for others to view.

“People return to look at the feed, which fosters a fear of missing out,” he elaborated. “It feels like a popularity contest that encapsulates social media. In contrast, Roblox is akin to two friends playing a game after school together. That’s not social media.”

“Therefore, we don’t believe that Australia’s domestic social media regulations apply to Roblox.”


When questioned if the new features were introduced to avoid being encompassed in the ban, Kaufman stated that the company is engaged in “constructive dialogue” with regulators and that these updates showcase the largest instance of a platform utilizing age verification across its entire user base.

Persona, the age verification company partnering with Roblox, participated in Australia’s age assurance technology trial, which reported a false positive rate of 61.11% for 15-year-olds identified as 16 years old and 44.25% for 14-year-olds.

Kaufman explained that the technology is typically accurate to within a year or two, and that users who disagree with the assessment can correct it using a government ID or via parental controls that establish an age. He assured that there are “strict requirements” for data deletion after age verification. Roblox states that ID images are retained for 30 days for purposes such as fraud detection and then erased.

Users who opt not to participate in the age verification will still have access to Roblox, but they will be unable to use features like chat.

More than 150 million people globally engage with Roblox every day across 180 countries, including Australia. According to Kaufman, two-thirds of users are aged 13 and above.




Source: www.theguardian.com

Father of Teenager Who Died After Viewing Harmful Content Says He Has Lost Trust in Ofcom

The father of Molly Russell, the British teenager who took her own life after encountering harmful online material, has expressed his lack of confidence in efforts to secure a safer internet for children and is advocating for a leadership change at Britain’s communications regulator.

Ian Russell, whose daughter Molly was only 14 when she died in 2017, criticized Ofcom for its “repeated” failure to grasp the urgency of safeguarding under-18s online and for not enforcing new digital regulations effectively.

“I’ve lost faith in Ofcom’s current leadership,” he shared with the Guardian. “They have consistently shown a lack of urgency regarding this mission and have not been willing to use their authority adequately.”

Mr. Russell’s remarks coincided with a letter from technology secretary Liz Kendall to Ofcom, expressing her “deep concern” over the gradual progress of the Online Safety Act (OSA), a groundbreaking law that lays out safety regulations for social media, search engines, and video platforms.

After his daughter’s death, Mr. Russell became a prominent advocate for internet safety and raised flags with Ofcom chief executive Melanie Dawes last year regarding online suicide forums accessible to UK users.

Ofcom opened an investigation into these forums after acquiring new regulatory authority under the OSA, and the site voluntarily restricted access to UK users.

However, Mr. Russell noted that the investigation seemed to have “stalled” until regulators intensified their scrutiny this month, when it was revealed that UK users could still access the forums via previously undetected “mirror sites.”




Molly Russell passed away in 2017. Photo: P.A.

“If Ofcom can’t manage something this clear-cut, it raises questions about their competence in tackling other issues,” Mr. Russell stated.

In response, Ofcom assured Mr. Russell that they were continuously monitoring geo-blocked sites and indicated that a new mirror site had only recently come to their attention.

Mr. Russell voiced his agreement with Ms. Kendall’s frustrations over the slow implementation of additional components of the OSA, particularly stricter regulations for the most influential online platforms. Ofcom attributed the delays to a legal challenge from the Wikimedia Foundation, the organization that supports Wikipedia.

The regulator emphasized its “utmost respect” for bereaved families and cited achievements under its stewardship, such as initiating age verification on pornography websites and combating child sexual abuse content.

“We are working diligently to push technology firms to ensure safer online experiences for children and adults in the UK. While progress is ongoing, meaningful changes are occurring,” a spokesperson commented.

The Molly Rose Foundation, established by Molly’s family, has reached out to the UK government urging ministers to broaden legal mandates for public servant transparency to include tech companies.

In their letter, they requested that Victims’ Minister Alex Davies-Jones expand the Public Office (Accountability) Bill, which introduces a “duty of candour” for public officials.

This bill was prompted by critiques regarding the police’s evidence handling during the Hillsborough investigation, mandating that public entities proactively assist inquiries, including those by coroner’s courts, without safeguarding their own interests.

The foundation believes that imposing similar transparency requirements on companies regulated by the OSA would aid in preserving evidence in cases of deaths possibly linked to social media.

The inquest into Molly’s death was delayed by a dispute over the disclosure of evidence.

“This change fundamentally shifts the dynamic between tech companies and their victims, imposing a requirement for transparency and promptness in legal responses,” the letter asserted.

Recent legislative changes have granted coroners enhanced authority under the OSA to request social media usage evidence from tech companies and prohibit them from destroying sensitive data. However, the letter’s signatories contend that stricter measures are necessary.

More than 40 individuals, including members of Survivors for Online Safety and Meta whistleblower Arturo Bejar, have signed the letter.

A government spokesperson indicated that the legal adjustments empower coroners to request further data from tech firms.

“The Online Safety Act will aid coroners in their inquests and assist families in seeking the truth by mandating companies to fully disclose data when there’s a suspected link between a child’s death and social media use,” a spokesperson stated.

“As pledged in our manifesto, we’ve strengthened this by equipping coroners with the authority to mandate data preservation for inquest support. We are committed to taking action and collaborating with families and advocates to ensure protection for families and children.”


In the UK, the youth suicide charity Papyrus can be contacted on 0800 068 4141 or by email at pat@papyrus-uk.org, and the Samaritans can be reached on freephone 116 123 or by email at jo@samaritans.org or jo@samaritans.ie. In the United States, you can call or text the 988 Suicide and Crisis Lifeline on 988 or use its online chat. In Australia, you can reach Lifeline at 13 11 14. Other international helplines are available at befrienders.org

Source: www.theguardian.com

EU Launches Investigation into Google’s ‘Demotion’ of News Media Commercial Content

The European Union has initiated an investigation into Google Search amid worries that the US tech giant may be “downgrading” commercial content from news media platforms.

The bloc’s executive announced the move after monitoring revealed that various content produced in collaboration with advertisers and sponsors was ranked so low by Google that it effectively vanished from search results.

Officials from the European Commission indicated that this potentially unfair “loss of visibility and revenue” for media owners could stem from Google’s anti-spam policies.

According to the Digital Markets Act (DMA), which governs competition within the tech sector, Google is required to provide “fair, reasonable and non-discriminatory conditions for access to publishers’ websites in Google Search”.

Commission officials clarified that the investigation does not concern the overall indexing of newspapers or Google’s search coverage, but focuses specifically on commercial content supplied by third parties.

Media collaborations with firms selling products and services, from seasonal items to apparel, are described as “normal business practices in the offline world” and should be supported in equitable online ecosystems like Google, according to the officials.

For instance, a newspaper may partner with Nike to offer discounts, but evidence suggested that Google Search “demoted the newspaper’s subdomains to the extent that users could no longer access them”, costing the publisher both visibility and revenue.

“We are concerned that Google’s policies do not facilitate fair, reasonable, and non-discriminatory treatment of news publishers in search results,” stated Teresa Ribera, European Commission vice-president for clean, fair, and competitive transition policy.

In the upcoming days, authorities will request publishers to present evidence regarding the effects on traffic and revenue resulting from the alleged violations of fair practices, according to the commission.

Rivera further remarked: “We will investigate to ensure news publishers are not losing essential revenue during a challenging time for the industry and to make certain that Google adheres to the Digital Markets Act.”

“We are taking measures today to guarantee that digital gatekeepers do not unreasonably hinder the ability of businesses relying on them to promote their products and services.”

In response, Google has criticized the EU investigation as “misguided” and “without merit”.


The company shared in a blog post: “Unfortunately, the investigation into our anti-spam efforts announced today is misguided and risks harming millions of users in Europe.

“And this investigation is without merit. German courts have already dismissed similar claims, ruling that our anti-spam policies were effective, reasonable, and applied consistently.”

The policy is designed to build “trustworthy results” and “combat deceptive billing tactics” that “degrade” the quality of Google search results.

The EU stated it took these actions to safeguard traditional media competing in online markets, especially after President Ursula von der Leyen recently highlighted in her State of the Union address that the media sector is at risk due to the growth of AI and other threats to media funding.

Officials emphasized that the investigation is a “routine” violation inquiry; it could lead to penalties of up to 20% of Google’s global revenue, although only if Google is found to be in “systematic violation.”

Source: www.theguardian.com

Misleading Social Media Content Drives Men to NHS Clinics for Unnecessary Testosterone Treatment

Authorities warn that misinformation on social media is pushing men to NHS clinics for unnecessary testosterone treatments, exacerbating already strained waiting lists.

Testosterone therapy is a prescription-only treatment recommended under national guidelines for men who display clinically verified deficiencies, validated through symptoms or consistent blood tests.

However, a surge of viral content on platforms like TikTok and Instagram is promoting blood tests as a means to receive testosterone as a lifestyle supplement, marketing it as a cure for issues like low energy, diminished focus, and decreased libido.


Medical professionals warn that taking unwarranted testosterone can inhibit natural hormone production, result in infertility, and elevate risks for blood clots, heart disease, and mood disorders.

The increasing demand for online consultations is becoming evident in medical facilities.

Professor Channa Jayasena from Imperial College London and chair of the Endocrine Society Andrology Network noted that hospital specialists are witnessing a rise in men taking these private blood tests, often promoted through social media, and being inaccurately advised that they require testosterone.

“We consulted with 300 endocrinologists at a national conference, and they all reported seeing these patients in their clinics weekly,” he said. “They’re overwhelming our facilities. We previously focused on adrenal conditions and diabetes, and it’s significantly affecting NHS services. We’re left wondering how to manage this situation.”

While advertising prescription medications is illegal in the UK, the Guardian discovered that several TikTok influencers collaborate with private clinics to promote blood tests, which can be legally advertised, as a route into testosterone therapy.




Advocates of testosterone replacement therapy, who boast large followings, receive compensation or incentives from private clinics to promote discount codes and giveaways. Photo: TikTok

Supporters of testosterone replacement therapy, amassing thousands of followers, are incentivized by private clinics to advertise discount offers and promotions to encourage men to assess their testosterone levels and possibly pursue treatment.

One popular post shows a man lifting weights, urging viewers: “Get your testosterone tested… DM me for £20 off.” Another video offers a free blood test as an incentive to “enhance” performance.

The Guardian notified the Advertising Standards Authority about these posts for potentially violating regulations regarding prescription drugs, triggering an investigation by the oversight body.

Jayasena stated, “I recently attended the National Education Course for the Next Generation of Endocrine Consultants, where many expressed concerns about reproductive health and the escalating trend of men being pushed to boost their testosterone levels.”

He added: “Beyond just influencers, this issue is significant. Healthcare professionals are encountering patients who come in for private blood tests, possibly arranged through influencers, and being incorrectly advised by inexperienced medical personnel that they should commence testosterone therapy. This guidance is fundamentally flawed.”

In private clinics, the first year of testosterone replacement therapy (TRT) costs between £1,800 and £2,200, covering medication, monitoring, and consultations.

Originally a specialized treatment for a limited group of men with clinically diagnosed hormone deficiencies, TRT is now increasingly viewed as a lifestyle or “performance enhancement” option. Online clinics are also offering home blood tests and subscription services, making such treatments more easily accessible outside conventional healthcare routes.




In private clinics, the initial year of comprehensive testosterone replacement therapy costs approximately £1,800 to £2,200. Photo: Ian Dewar/Alamy

These messages imply that diminished motivation, exhaustion, and aging signify “low T,” leading more men to seek testing and treatment, despite medical advice restricting TRT to individuals with confirmed hormonal deficiencies.

Professor Jayasena remarked: “There are specific clinical protocols dictating who should or shouldn’t consider testosterone therapy. Some symptoms, like erectile dysfunction, undeniably correlate with low testosterone, whereas others, like muscle mass or feeling down, do not. A man might express dissatisfaction with his muscle tone and be advised to get tested, yet evidence supporting the necessity of such testing remains scarce.”

“What’s particularly alarming is that some clinics are now administering testosterone to men with normal testosterone levels. Research shows there’s no benefit to testosterone levels exceeding 12 nmol/L. I have also received reports of clinics providing testosterone to individuals under 18, a significant demographic.”

He explained that unnecessary testosterone usage can lead to infertility: “It inhibits testicular function and the hormonal messages from the brain necessary for testicular health, compelling us to combine and administer other drugs to counteract this effect. This is akin to the strategies used by anabolic steroid users.”

TikTok was approached for comment.

Source: www.theguardian.com

Roblox Controversy: Experts and MPs Urge Online Gaming Platforms to Embrace Australia’s Under-16 Social Media Ban

Increasing concerns have been raised regarding the federal government’s need to tackle the dangers that children face on the widely-used gaming platform Roblox, following a report by Guardian Australia that highlighted a week of incidents involving virtual sexual harassment and violence.

While role-playing as an 8-year-old girl, the reporter encountered a sexualized avatar and faced cyberbullying, acts of violence, sexual assault, and inappropriate language, despite having parental control settings in place.

From December 10, platforms including Instagram, Snapchat, YouTube, and Kick will be under Australia’s social media ban preventing Australians under 16 from holding social media accounts, yet Roblox will not be included.

Independent MP Monique Ryan labeled this exclusion “unexplainable.” She remarked, “Online gaming platforms like Roblox expose children to unlimited gambling, cloned social media apps, and explicit content.”

At a press conference on Wednesday, eSafety Commissioner Julie Inman Grant stated that platforms would be examined based on their “singular and essential purpose.”

“Kids engaging with Roblox currently utilize chat features and messaging for online gameplay,” she noted. “If online gameplay were to vanish, would kids still use the messaging feature? Likely not.”


“If these platforms start introducing features that align them more with social media companies rather than online gaming ones, we will attempt to intervene.”

According to government regulations, services primarily allowing users to play online games with others are not classified as age-restricted social media platforms.


Nonetheless, some critics believe that this approach is too narrow for a platform that integrates gameplay with social connectivity. Nyusha Shafiabadi, an associate professor of information technology at Australian Catholic University, asserts that Roblox should also fall under the ban.

She highlighted that the platform enables players to create content and communicate with one another. “It functions like a restricted social media platform,” she observed.

Independent MP Nicolette Boele urged the government to rethink its stance. “If the government’s restrictions bar certain apps while leaving platforms like Roblox, which has been called a ‘pedophile hellscape’, unshielded, we will fail to safeguard children and drive them into more perilous and less regulated environments,” she remarked.

A spokesperson for communications minister Anika Wells said that excluding Roblox from the teen social media ban does not mean it is free from accountability under the Online Safety Act.

A representative from eSafety stated, “We can extract crucial safety measures from Roblox that shield children from various harms, including online grooming and sexual coercion.”

eSafety said that by the year’s end, Roblox will enhance its age verification technology, restricting adults from contacting children without explicit parental consent and setting accounts to private by default for users under 16.

“Children under 16 who enable chat through age estimation will no longer be permitted to chat with adults. Alongside current protections for those under 13, we will also introduce parental controls allowing parents to disable chat for users between 13 and 15,” the spokesperson elaborated.

Should entities like Roblox not comply with child safety regulations, authorities have enforcement capabilities, including fines of up to $49.5 million.


eSafety stated it will “carefully oversee Roblox’s adherence to these commitments and assess regulatory measures in the case of future infractions.”

Joanne Orlando, an expert on digital wellbeing from Western Sydney University, pointed out that Roblox’s primary safety issues are grooming threats and the increasing monetization of children engaging with “the world’s largest game.”

She mentioned that it is misleading to view it solely as a video game. “It’s far more significant. There are extensive social layers, and a vast array of individuals on that platform,” she observed.

Greens senator Sarah Hanson-Young criticized the government for “playing whack-a-mole” with the social media ban.

“We want major technology companies to assume responsibility for the safety of children, irrespective of age,” she emphasized.

“We need to strike at these companies where it truly impacts them. That’s part of their business model, and governments hesitate to act.”

Shadow communications minister Melissa McIntosh also expressed concerns about the platform. She stated that while Roblox has introduced enhanced safety measures, “parents must remain vigilant to guard their children online.”

“The eSafety Commissioner and the government carry the responsibility to do everything within their power to protect children from the escalating menace posed by online predators,” she said.

A representative from Roblox stated that the platform is “dedicated to pioneering safety through stringent policies that surpass those of other platforms.”

“We utilize AI to scrutinize games for violating content prior to publication, we prohibit users from sharing images or videos in chats, and we implement sophisticated text filters designed to prevent children from disclosing personal information,” they elaborated.




Source: www.theguardian.com

Hack of Age Verification Firm May Have Exposed ID Photos of Discord Users

Photos of government IDs belonging to approximately 70,000 users of Discord, a messaging and chat application widely used by gamers, may have been exposed following a breach at the firm responsible for conducting the platform’s age verification checks.

Along with the ID photos, details such as users’ names, email addresses, other contact information, IP addresses, and interactions with Discord customer support may also have been obtained by the hackers. The attacker is reportedly demanding a ransom from the company. Full credit card numbers and passwords were not compromised.

The incident was disclosed last week, but news of the potential ID photo leak came to light on Wednesday. A representative from the UK’s Information Commissioner’s Office, which oversees data breaches, stated: “We have received a report from Discord and are assessing the information provided.”

The images in question were submitted by users appealing age-related bans through Discord’s customer service contractor. Discord, which has operated for over a decade, allows users to communicate through text, voice, and video chat.


Some nations, including the UK, mandate age verification for social media and messaging services to protect children; the measure has been in effect in the UK since July under the Online Safety Act. Cybersecurity professionals have warned that age verification providers, which may hold troves of sensitive government-issued IDs, are attractive targets for hackers.

Discord released a statement acknowledging: “We have recently been made aware of an incident wherein an unauthorized individual accessed one of Discord’s third-party customer service providers. This individual obtained information from a limited number of users who reached out to Discord through our customer support and trust and safety teams… We have identified around 70,000 users with affected accounts globally whose government ID photos might have been disclosed. Our vendors utilized those photos for evaluating age-related appeals.”

Discord requires users appealing an age-related ban to upload a photo of their ID along with their Discord username in order to regain access to the platform.

Nathan Webb, a principal consultant at the British digital security firm Acumen Cyber, remarked that the breach is “very concerning.”


“Even if age verification is outsourced, organizations must still ensure the proper handling of that data,” he emphasized. “It is crucial for companies to understand that delegating certain functions does not relieve them of their obligation to uphold data protection and security standards.”

Source: www.theguardian.com

Trump Presents Allies with a New Lever of Media Control: Is It Murdoch’s TikTok?

Last week, Donald Trump disclosed that the US and China are close to finalizing a deal to allow TikTok to continue operating in the United States. While the specifics are not yet settled, the proposed agreement could see the owners of the most prominent cable television networks in the US assume control over one of the nation’s most influential social media platforms. This arrangement would grant Trump’s billionaire allies significant influence over the vast and unique landscape of US media.

Here’s what we know: Trump stated that he has received provisional approval from China’s President Xi Jinping for a deal whereby the US version of TikTok would gain a fresh set of domestic investors, spearheaded by the software giant Oracle. The arrangement would also preserve TikTok’s respected recommendation algorithm while enhancing its security.

Among the investors mentioned, Trump pointed out during a Fox News interview on Sunday that media tycoon Rupert Murdoch and his son Lachlan, CEO of Fox Corporation, are involved. Additionally, Michael Dell, the head of computer manufacturer Dell, is expected to take part as well.

TikTok is reportedly set to appoint seven new board members, six of them American. Rupert Murdoch, Lachlan Murdoch, and Oracle’s Larry Ellison, along with Larry’s son David Ellison, the CEO of Paramount Skydance, are expected to occupy several of these seats.

Murdochs

Lachlan Murdoch, aged 54 and the son of 94-year-old Rupert, serves as the executive chair and CEO of Fox Corporation, the parent company of Fox News. Lachlan took control following a legal settlement with his siblings in September, one of whom, James, has distanced himself from their father’s conservative empire. The TikTok deal might involve investments from Fox’s parent company rather than directly from Rupert or Lachlan, as reported by CNN.

“I hate to tell you this, but there’s a guy named Lachlan involved. Do you know who Lachlan is? It’s a very uncommon name, Lachlan Murdoch,” Trump remarked. “Rupert will likely join the group. I think they will be part of it. There are others involved as well. They are exceptional people, very well-known, and American patriots who love this country, so they’ll do a great job.”

If the TikTok deal goes through, it would give the elder Murdoch new opportunities in technology, reminiscent of News Corp’s $580 million acquisition of MySpace in 2005. MySpace peaked as the most visited website in the US three years later but was quickly overshadowed by Facebook, and as Bloomberg’s billionaire index indicates, Mark Zuckerberg is now worth ten times more than Rupert Murdoch.

Ellison

Trump seems to have a penchant for father-son duos. On the opposite end of TikTok’s American boardroom, 81-year-old Larry Ellison, co-founder and CTO of Oracle, alongside his 42-year-old son David, the founder of Skydance Media, may play significant roles.

Larry Ellison holds about 40% of Oracle’s shares and has been a fixture in Silicon Valley, temporarily surpassing Elon Musk as the world’s richest person following Oracle’s impressive revenue report. He is also a longtime supporter of Trump, hosting a presidential fundraiser at his Southern California home in 2020, and is known for his luxurious lifestyle and ventures in Hawaii.

David Ellison’s company has made a significant mark in the entertainment sector, managing CBS, BET, Nickelodeon, Paramount+, and UK Channel 5. Following a recent acquisition of Paramount, which also produces the Mission: Impossible franchise, Paramount Skydance is reportedly planning a cash offer to acquire Warner Bros. Discovery, which owns CNN, HBO, DC Comics, and other major properties.

In the run-up to this merger, CBS News reached a settlement over a lawsuit regarding ‘60 Minutes’, appointed Trump allies as ombudsmen, and courted former New York Times columnist Bari Weiss as a potential leader for the channel’s revamped version. This could serve as a preview of how David Ellison might manage TikTok.

How Powerful Will They Become?

Should the TikTok deal and David Ellison’s acquisition of Warner Bros. Discovery proceed, the combined power of the Murdochs and Ellisons would be immense. They would control media outlets that engage both older and younger demographics, yielding significant authority and sway. The only demographic potentially beyond their reach may be TikTok’s young users, who are skeptical of their parents’ viewing habits.


Is such consolidation legal? The Federal Communications Commission maintains stringent anti-monopoly regulations for television broadcasting, but those regulations do not specifically address cable channels such as Fox News or CNN.

Nevertheless, such regulations are pertinent. What implications arise when the owner of the most powerful cable channels in the US also controls the nation’s critical social media platforms? Might this breach antitrust laws?

The answer may lie within the regulations, particularly surrounding changes made eight years ago that lifted the ban on owning both television stations and daily newspapers in the same market. This decision was based on the claim that entertainment, news, and information had diversified significantly within the modern media landscape.

If one person may own both a local television station and its daily newspaper, why shouldn’t billionaires be able to oversee extensive social networks and the country’s leading channels?

Examining the intricacies of FCC regulations may matter less than Trump’s influence, which plays a significant role in high-level decisions by the US government. Trump’s administration has successfully pushed the FCC to facilitate deals that put pressure on networks outside his allies’ control. Recently, the Supreme Court ruled that Trump could dismiss the sole Democrat on the commission, and FCC chair Brendan Carr’s role in the suspension of Jimmy Kimmel’s program has drawn scrutiny.

The American media landscape is taking on a distinctly Republican hue as Trump’s TikTok transaction unfolds. Nexstar, the largest owner of local US television stations, expressed alignment with the decision to pull Kimmel’s show, echoing fellow local television giant Sinclair. CBS and CNN, two major news networks, may soon follow Fox’s conservative trajectory. Online, X has shifted from a diverse platform to a more conservative social network, and TikTok may follow suit under a MAGA-approved board.

At this juncture, the Murdochs and Ellisons appear to be benefiting from Trump’s favor.

Source: www.theguardian.com

Banning Social Media While Embracing AI Is Letting Teens Down

It’s true that no one in their 70s served in World War II; even the oldest septuagenarians were born post-war. Yet a cultural link persists between this demographic and the era of Vera Lynn and the Blitz.

When discussing parents and technology, similar misconceptions arise. The prevailing belief is that social media and the internet are a realm beyond the understanding of parents, prompting calls for national intervention to shield children from tech giants. This month, Australia plans to outline its forthcoming restrictions. However, the parents of today’s teens are increasingly digitally savvy, having grown up in the age of MySpace and Habbo Hotel. Why have we come to think that these individuals can’t comprehend how their kids engage with TikTok and Fortnite?

There are already straightforward methods for managing children’s online access, such as adjusting router configurations or mandating parental approval for app installations. Yet politicians seem to believe these tasks require advanced technical skills, resulting in overly broad restrictions. If you could customize your Facebook profile in college, fine-tuning some settings shouldn’t be beyond reach. Instead of asking everyone to verify their age and identity online, why not trust the judgment of parents?



Failing to adapt to generational shifts can lead to broader problems. Like generals fixated on fighting the last war, lawmakers risk misdirecting their attention. While they clamp down on social media, they are simultaneously rushing to embrace AI technologies built on sophisticated language models, which significantly affect today’s youth and leave educators pondering how to create ChatGPT-proof assignments.

Rather than issuing outright bans, we should facilitate open discussions about emerging technologies, encompassing social media, AI, and their societal implications while engaging families in the conversation.


Source: www.newscientist.com

‘I Felt It Was My Destiny’: How Social Media Algorithms Prey on Pregnant Women’s Fears

I cannot recall the exact moment my TikTok feed presented me with a video of a woman cradling her stillborn baby, but I do remember the wave of emotion that hit me. At first it resembled the joyous clips of mothers holding their newborns, wrapped snugly in blankets, the mother weeping, just like in so many postnatal clips. The true nature of the video only became clear when I glanced at the caption: her baby had been born at just 23 weeks. I was 22 weeks pregnant. Hardly a coincidence.

My social media algorithms seemed to know about my pregnancy even before my family, friends, or doctor did. Within a day, my feed transformed. On both Instagram and TikTok, videos emerged of women documenting their pregnancy journeys from the moment they took a test. I began to “like,” “save,” and “share” these posts, feeding the algorithm and signaling my interest, and it responded with more content. But it didn’t take long for the initial joy to be overtaken by dread.

The algorithm quickly adapted to my deepest fears related to pregnancy, introducing clips about miscarriage stories. In them, women shared their heartbreaking experiences after being told their babies had no heartbeat. Soon, posts detailing complications and horror stories started flooding my feed.

One night, after watching a woman document her painful birthing experience with a stillbirth, I uninstalled the app amidst tears. But I reinstalled it shortly after; work commitments and social habits dictated I should. I attempted to block unwanted content, but my efforts were mostly futile.

On TikTok alone, over 300,000 videos are tagged with “miscarriage,” and another 260,000 appear under related terms. One video, titled “Live footage of me finding out I had a miscarriage,” has garnered almost 500,000 views, and videos of women giving birth to stillborn babies have drawn millions more.

Had I encountered such content before pregnancy, I might have viewed the widespread sharing of these experiences as essential. I don’t believe individuals sharing these deeply personal moments are in the wrong; for some, these narratives could offer solace. Yet, amid the endless stream of anxiety-inducing content, I couldn’t shake the discomfort of the algorithm prioritizing such overwhelming themes.


“I ‘like,’ ‘save,’ and ‘share’ the content, feeding it into the system and prompting it to keep returning more”…Wheeler while pregnant. Photo by Kathryn Wheeler

When I discussed this experience with others who were pregnant at the same time, I found nods of recognition and similar stories. They too recounted personalized concoctions of fear as their algorithms zeroed in on their unique anxieties. We had all been bombarded with harrowing content that stretched the range of what felt like normal concern. This is what pregnancy and motherhood are like in 2025.

“Some posts are supportive, but others are extreme and troubling. I don’t want to relive that,” remarks 8-month-pregnant Cerel Mukoko. Mukoko primarily engages with this content on Facebook and Instagram but deleted TikTok after becoming overwhelmed. “My eldest son is 4 years old, and during my pregnancy, I stumbled upon upsetting posts. They hit closer to home, and it seems to be spiraling out of control.” She adds that the disturbing graphics in this content are growing increasingly hard to cope with.

As a 35-year-old woman of color, Mukoko noticed specific portrayals of pregnant Black women in this content. A 2024 analysis of NHS data indicated that Black women faced up to six times the rate of severe complications compared to their white counterparts during childbirth. “This wasn’t my direct experience, but it certainly raises questions about my treatment and makes me feel more vigilant during appointments,” she states.

“They truly instill fear in us,” she observes. “You start to wonder: ‘Could this happen to me? Am I part of that unfortunate statistic?’ Given the complications I’ve experienced during this pregnancy, those intrusive thoughts can be quite consuming.”

For Dr. Alice Ashcroft, a 29-year-old researcher and consultant analyzing the impacts of identity, gender, language, and technology, this phenomenon began when she was expecting. “Seeing my pregnancy announcement was difficult.”

This onslaught didn’t cease once she was pregnant. “By the end of my pregnancy, around 36 weeks, I was facing stressful scans. I began noticing links shared by my midwife. I was fully aware that the cookies I’d created (my digital footprint) influenced this feed, which swayed towards apocalyptic themes and severe issues.” Now, with a 6-month-old, the experience continues to haunt her.

The ability of these algorithms to hone in on our most intimate fears is both unsettling and cruel. “For years, I’ve been convinced that social media reads my mind,” says 36-year-old Jade Asha, who welcomed her second child in January. “For me, it was primarily about body image. I’d see posts of women who were still gym-ready during their 9th month, which made me feel inadequate.”

Navigating motherhood has brought its own set of anxieties for Asha. “My feed is filled with posts stating that breastfeeding is the only valid option, and the comment sections are overloaded with opinions presented as facts.”

Dr. Christina Inge, a Harvard researcher specializing in tech ethics, isn’t surprised by these experiences. “Social media platforms are designed for engagement, and fear is a powerful motivator,” she observes. “Once the algorithm identifies someone who is pregnant or might be, it begins testing content similar to how it handles any user data.”


“For months after my pregnancy ended, my feed morphed into a new set of fears I could potentially face.” Photo: Christian Sinibaldi/Guardian

“This content is not a glitch; it’s about engagement, and engagement equals revenue,” Inge continues. “Fear-based content keeps users hooked, creating a sense of urgency to continue watching, even when it’s distressing. Despite the growing psychological toll, these platforms profit.”
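
Inge’s description matches the classic exploration-exploitation loop in recommender systems. The toy epsilon-greedy sketch below, with invented topic names and reward values, shows how a feed that optimizes raw engagement converges on whatever a user lingers over, whether the engagement is joyful or fearful:

```python
import random
from collections import defaultdict

# Invented topics and rewards; real feed ranking is vastly more complex,
# but the feedback loop has the same shape.
TOPICS = ["newborn joy", "nursery tours", "miscarriage stories", "birth trauma"]
EPSILON = 0.1  # how often the system "tests" content outside the current best

def pick_topic(avg_engagement: dict) -> str:
    if random.random() < EPSILON:                        # explore: test new content
        return random.choice(TOPICS)
    return max(TOPICS, key=lambda t: avg_engagement[t])  # exploit: show the "winner"

def simulate(user_engages_with: str, rounds: int = 1000) -> dict:
    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(rounds):
        avg = {t: totals[t] / counts[t] if counts[t] else 0.0 for t in TOPICS}
        topic = pick_topic(avg)
        # A watch-through, a "like", or anxious lingering all read as reward.
        reward = 1.0 if topic == user_engages_with else 0.1
        totals[topic] += reward
        counts[topic] += 1
    return {t: counts[t] for t in TOPICS}

# Even when the user engages out of dread rather than enjoyment,
# the feed converges on the distressing topic.
print(simulate("miscarriage stories"))
```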

The negative impact of social media on pregnant women has been a subject of extensive research. A systematic review examining social media use during pregnancy highlights both benefits and challenges. While it offers peer guidance and support, it also concludes that “issues such as misinformation, anxiety, and excessive use persist.” Dr. Nida Aftab, an obstetrician and the review’s author, emphasizes the critical role healthcare professionals should play in guiding women towards healthier digital habits.

Pregnant women may not only be uniquely vulnerable social media consumers; studies show they often spend significantly more time online. A research article published in the journal Midwifery last year indicated a marked increase in social media use during pregnancy, peaking around week 20. Moreover, 10.5% of participants reported symptoms of social media addiction, as defined by the Bergen Social Media Addiction Scale.

In the broader context, Inge proposes several improvements. A redesigned approach could push platforms to feature positive, evidence-based content in sensitive areas like pregnancy and health. Increased transparency regarding what users are viewing (with options to adjust their feeds) could help minimize harm while empowering policymakers to establish stronger safeguards around sensitive subjects.

“It’s imperative users understand that feeds are algorithmic constructs rather than accurate portrayals of reality,” Inge asserts. “Pregnancy and early parent-child interactions should enjoy protective digital spaces, but they are frequently monetized and treated as discrete data points.”

For Ashcroft, resolving this dilemma is complex. “A primary challenge is that technological advancements are outpacing legislative measures,” she notes. “We wander into murky waters regarding responsibility. Ultimately, it may fall to governments to accurately regulate social media information, but that could come off as heavy-handed. While some platforms incorporate fact-checking through AI, these measures aren’t foolproof and may carry inherent biases.” She suggests using the “I’m not interested in this” feature may be beneficial, even if imperfect. “My foremost advice is to reduce social media consumption,” she concludes.

My baby arrived at the start of the year, and I finally had a moment to breathe as she emerged healthy. However, that relief was brief. In the months following my transition into motherhood, my feed shifted yet again, introducing new fears. Each time I logged onto Instagram, the suggested reels displayed titles like: Another baby falls victim to danger, accompanied by the text “This is not safe.” Soon after, there was a clip featuring a toddler with a LEGO in their mouth and a caption reading, “This could happen to your child if you don’t know how to respond.”

Will this content ultimately make me a superior, well-informed parent? Some might argue yes. But at what cost? Recent online safety legislation emphasizes the necessity for social responsibility to protect vulnerable populations in their online journeys. Yet, as long as the ceaseless threat of misfortune, despair, and misinformation assails the screens of new and expecting mothers, social media firms will profit from perpetuating fear while we continue to falter.

Do you have any thoughts on the issues raised in this article? If you would like to submit a response of up to 300 words for consideration in our Letters section, please click here.

Source: www.theguardian.com

The Ubiquity of Steroids on Social Media: Understanding the Risks

South_agency/Getty Images

If you’ve recently browsed fitness content on platforms like Instagram, Facebook, or TikTok, you might have encountered influencers who have used steroids. A recent global meta-analysis suggests that steroid usage among gym-goers varies from 6% to a shocking 29% across different countries.

This statistic might come as a shock. According to Timothy Piatkowski from Griffith University, the landscape of steroid use has evolved over the last decade. Many fitness influencers now present themselves as knowledgeable figures, openly discussing their drug use and advising followers on steroid usage.

“Regrettably, the level of medical knowledge and judgment varies significantly among these influencers,” states Piatkowski.

Influencers’ perceptions of health risks differ greatly, he observes. While some acknowledge the dangers of steroid use, asserting that risks can be managed sensibly, others are more reckless, promoting drugs like trenbolone, which is typically used to prevent muscle wastage in livestock, branding themselves as “Trenfluencers.”

Millions may question whether these substances are actually safe, or if influencers are leading them into perilous situations. What is the truth regarding the dangers associated with steroids? Is there a safer way to use them?

Piatkowski notes that research on the long-term health consequences of steroids is sparse. This is largely due to the mismatch between doses and usage patterns studied by researchers and those employed by actual users. He and his colleagues seek to bridge this gap by collaborating closely with steroid users to create more relevant and realistic studies.

However, this mismatch has already led to some influencers losing faith in mainstream scientific and medical perspectives, prompting users to seek advice from fitness and bodybuilding forums instead. These social media channels have become a major contributor to both the support network and the marketplace in the surge of steroid usage.

Users now have quick access to a range of substances that can be obtained illicitly, including oral anabolic steroids, SARMs (selective androgen receptor modulators), and synthetic human growth hormone, a hormone the pituitary gland produces naturally, peaking during adolescence. Collectively, these improve physique and performance, but their mechanisms vary significantly.

One of the most prevalent substances is anabolic steroids, potent synthetic derivatives of testosterone. A 2022 study estimated that around half a million men and boys in the UK used them for non-medical purposes in the previous year.

Understanding Steroids

To determine whether steroids are safe, one must first grasp their effects on the body. Anabolic androgen steroids work by interacting with hormonal receptors that promote male sexual traits, particularly in muscle and bone tissues. “They aid in muscle growth and are vital for bone development; they guide boys through puberty and literally transform them into men,” explains Channa Jayasena from Imperial College London.

The desired result is evident: a bigger, stronger physique in a shorter timeframe. Medically, some of these substances are prescribed to treat conditions like muscle wasting associated with HIV. At lower doses, investigations suggest that steroids can be well tolerated. However, this is not a strategy commonly employed outside clinical settings.

Non-medical steroid use rarely mimics regulated clinical trials. Many users resort to “stacking” various drugs and alternate between cycles to allow bodily recovery, adopting practices like the “blast and cruise” regimen. Although these methods lack comprehensive scientific scrutiny, influencers often tout them as ways to minimize health risks or achieve effective muscle growth. This could explain why many users turn to influencers and online forums instead of healthcare professionals for advice.

The Risks of Unregulated Use

The temptation to test various drug combinations or follow cycling protocols stems from the belief that such strategies mitigate the adverse effects of anabolic steroids. The best-documented side effect is cardiovascular complications. Anabolic steroids are known to lower levels of high-density lipoproteins, or “good” cholesterol, while raising blood pressure and increasing low-density lipoproteins, known as “bad” cholesterol. This can thicken the heart muscle, potentially leading to cardiomyopathy—severe heart dysfunction and a lethal condition, as noted by Jayasena.

A Danish population study revealed that anabolic steroid users were three times more likely to die than other males during the study’s duration. “It’s akin to cocaine,” asserts Jayasena. Cardiovascular disease and cancer emerged as the most prevalent natural causes of death among these individuals.

Increased risk of heart disease and stroke is a well-known consequence of prolonged anabolic steroid use

3dmedisphere/shutterstock

Beyond cardiovascular matters, Jayasena highlights that the psychosocial implications of steroid use are significant and well documented. The term “roid rage” covers a range of psychiatric effects, including aggression and mania, particularly among individuals taking high doses. “When observing why steroid users have fatal outcomes, one notes three primary causes: cardiomyopathy, suicide, and aggression,” he notes, suggesting a possible correlation between steroid use and heightened tendencies toward criminal behavior.

This relationship remains contentious, as it is challenging to separate the effects of steroid use from other contributing factors like recreational drug use or pre-existing mental health issues. What is clearer is that discontinuing steroid use can precipitate depression and suicidal thoughts. “The mind becomes lethargic,” explains Jayasena. “The recovery period can extend over months, sometimes even years.”

Research led by Jayasena revealed that nearly 30% of men who ceased steroid use experienced suicidal thoughts and major depression, possibly due to lingering steroid residues in brain areas responsible for emotional regulation. Additional studies indicate that steroids can impair kidney function and elevate cancer risks, although the data is less conclusive and heavily reliant on isolated medical case reports.

Several investigations have demonstrated that some of these health concerns may be reversible. The liver, for instance, appears adept at self-repair and can tolerate lower clinical doses of certain steroids, and effects like high cholesterol and hypertension can reverse after steroid cessation. Others, such as mood disorders and infertility, may require long-term or costly interventions to address.

The most severe repercussions of steroid use tend to be the hardest to treat. Structural alterations to the heart, along with the lasting impairments to blood flow to vital organs reported in research, are concerns that may linger long after users stop taking steroids.

Seeking “Safer” Steroids

Given the extensive and complex list of potential harms, many users experiment with steroid protocols aimed at risk reduction. This includes altering doses, timing, or combining them with other substances. However, there is a dearth of research examining the safety of these “protocols,” asserts Piatkowski.

One of Jayasena’s studies indicated that post-cycle therapy, in which users take medications to restore natural testosterone production after a steroid cycle, significantly lowered the risk of suicidal thoughts. Piatkowski’s research compares high-dose cycles with gradual tapering, finding that those following a “blast and cruise” approach reported fewer adverse health effects once they stopped using.

High-quality, controlled studies evaluating the effects of recreational steroid use are sparse, often characterized by small sample sizes or case reports that complicate the establishment of causal relationships. The evidence supporting specific protocols is also thin, particularly as patterns of steroid use evolve more rapidly than researchers can track.

Anabolic steroids are commonly injected into the subcutaneous fat layer located between the skin and muscle.

ole_cnx/istockphoto/getty images

“Further longitudinal and cohort studies are essential,” Piatkowski asserts. Such studies track individuals’ health and wellbeing over time, ultimately clarifying real risks and potentially providing strategies for risk mitigation. Nevertheless, in the absence of robust evidence, healthcare providers often struggle to offer guidance to steroid users.

Greg James, a clinician at Kratos Medical in Cardiff, UK, mentions that he provides private health and blood testing services. Some patients even inquire about combining steroids with GLP-1 drugs that suppress appetite, as well as other peptides that regulate hunger. “They ask me if these peptides are safe,” James notes. “And I respond that I cannot confirm their safety due to the lack of long-term data.”

Researchers like Piatkowski are beginning to directly engage with users in realistic settings, navigating the challenges posed by inadequate clinical data and rapidly changing user behaviors. Rather than viewing users as patients or outliers, this method considers them as valuable sources of real-life experience, contributing to the development of more relevant and realistic research.

A recent study conducted by Piatkowski and collaborators examined steroid samples from users, revealing that over 20% were contaminated with toxic substances such as lead, arsenic, and mercury. More than half were mislabelled, meaning users may have unknowingly been taking far more potent agents, including some intended for livestock use.

Another study involving interviews with diverse steroid users identified trenbolone as having the most negative consequences, particularly for psychological and social health. This suggests that focusing on trenbolone as a distinct harmful substance, along with targeted screening and intervention strategies, could be more effective for harm reduction compared to broad-ranging methods.

Fitness influencers are frequently regarded as authorities who provide guidance on anabolic steroid use to their followers.

Kritchanut Onmang/Alamy

This open and collaborative methodology in drug research mirrors approaches seen in other recreational drug strategies, like psychedelic research. By engaging with real users, insight can be gained not only into harm reduction techniques but also previously unrecognized medicinal applications.

Researchers may also collaborate with influencers and users to promote safer behaviors, rather than outright condemning drug use, Piatkowski emphasizes. “Enhancing knowledge within these communities and legitimizing information is crucial. It’s an ongoing experimental endeavor. The more we stimulate this discussion, the more we can advance the field.”


Source: www.newscientist.com

Is Australia’s Social Media Ban Effective in Keeping Teens Safe Online?

Regulated access to social media in Australia

Anna Barclay/Getty Images

In a few months, Australian teenagers may face restrictions on social media access until they turn 16.

As the December implementation date approaches, parents and children are left uncertain about how this ban will be enforced and how online platforms will verify users’ ages.

Experts are anticipating troubling outcomes, particularly since the technology used by social media companies to determine the age of users tends to have significant inaccuracies.

From December 10th, social media giants like Instagram, Facebook, X, Reddit, YouTube, Snapchat, and TikTok will be required to remove or deactivate accounts held by users under 16 in Australia. Failing to comply could result in fines of up to A$49.5 million (around US$32 million), while parents will not face penalties.

Prior to announcing the ban, the Australian government ran a trial of age verification technology, which released preliminary findings in June, with a comprehensive report expected soon. The trial tested age verification tools on over 1,100 students across the country, including indigenous and ethnically diverse groups.

Andrew Hammond from KJR, the consulting firm based in Canberra that led the trial, shared an anecdote illustrating the challenge at hand. One 16-year-old boy’s age was inaccurately guessed to be between 19 and 37.

“He scrunched up his face and held his breath, turning red and puffy like an angry older man,” he said. “He didn’t do anything wrong; we wanted to see how our youth would navigate these systems.”

Other technologies have also been evaluated with Australian youth, such as hand gesture analysis. “You can estimate someone’s age broadly based on their hand appearance,” Hammond explains. “While some children felt uneasy using facial recognition, they were more comfortable with hand assessments.”

The interim report indicated that age verification could be safe and technically viable; previous headlines noted that while challenges exist, 85% of subjects’ ages could be accurately estimated within an 18-month range. If a person initially verified as being over 16 is later identified as under that age, they must undergo more rigorous verification processes, including checks against government-issued IDs or parental verification.

Hammond noted that some underage users can still be detected through social media algorithms. “If you’re 16 but engage heavily with 11-year-old party content, it raises flags that the social media platform should consider, prompting further ID checks.”
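
To make the escalation logic described above concrete, here is a minimal Python sketch of how such an age-assurance flow might combine an error-prone age estimate with behavioural flags before falling back to stronger checks. The names, thresholds, and structure are illustrative assumptions, not the trial’s actual system.

```python
# Illustrative sketch only: a simplified escalation flow for age assurance.
# Thresholds and field names are hypothetical, not from the KJR trial.

from dataclasses import dataclass

@dataclass
class AgeSignals:
    estimated_age: float    # e.g. from facial age estimation
    margin_years: float     # error margin (the trial cited roughly 18 months)
    behavioural_flag: bool  # e.g. heavy engagement with much younger cohorts

def assurance_decision(s: AgeSignals, cutoff: int = 16) -> str:
    # Clearly above the cutoff even at the low end of the margin: allow.
    if s.estimated_age - s.margin_years >= cutoff and not s.behavioural_flag:
        return "allow"
    # Clearly below the cutoff even at the high end of the margin: deny.
    if s.estimated_age + s.margin_years < cutoff:
        return "deny"
    # Ambiguous band, or behaviour contradicts the estimate: escalate to a
    # stronger check such as a government ID or parental verification.
    return "escalate_to_id_or_parent"

print(assurance_decision(AgeSignals(19.0, 1.5, False)))  # allow
print(assurance_decision(AgeSignals(16.5, 1.5, True)))   # escalate_to_id_or_parent
```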

Iain Corby from the London Association of Age Verification Providers, which supported the Australian trial, pointed out that no single solution exists for age verification.

The UK recently mandated age verification on sites hosting “harmful content,” including adult material. Since the regulations went into effect on July 25th, around 5 million users have been verifying their ages daily, according to Corby.

“In the UK, the requirement is for effective but not foolproof age verification,” Corby stated. “There’s a perception that technology will never be perfect, and achieving higher accuracy often requires more cumbersome processes for adults.”

Critics have raised concerns about a significant loophole: children in Australia could use virtual private networks (VPNs) to bypass the ban by simulating locations in other nations.

Corby emphasized that social media platforms should monitor traffic from VPNs and assess user behavior to identify potential Australian minors. “There are many indicators that someone who appears to be in Thailand might actually be in Perth,” he remarked.

Apart from how age verification will function, is this ban on social media the right approach to safeguarding teenagers from online threats? The Australian government asserted that significant measures have been implemented to protect children under 16 from the dangers associated with social media, such as exposure to inappropriate content and excessive screen time. The government believes that delaying social media access provides children with the opportunity to learn about these risks.

Various organizations and advocates aren’t fully convinced. “Social media has beneficial aspects, including educational opportunities and staying connected with friends. It’s crucial to enhance platform safety rather than impose bans that may discourage youth voices,” stated UNICEF Australia on its website.

Susan McLean, a leading cyber safety expert in Australia, argues that the government should concentrate on harmful content and the algorithms that promote such material to children, expressing concern that AI and gaming platforms have been exempted from the ban.

“What troubles me is the emphasis on social media platforms, particularly those driven by algorithms,” she noted. “What about young people encountering harmful content on gaming platforms? Have they been overlooked in this policy?”

Lisa Given from RMIT University in Melbourne explained that the ban fails to tackle issues like online harassment and access to inappropriate content. “Parents may have a false sense of security thinking this ban fully protects their children,” she cautioned.

The rapid evolution of technology means that new platforms and tools can pose risks unless the underlying issues surrounding harmful content are addressed, she argued. “Are we caught in a cycle where new technologies arise and prompt another ban or legal adjustment?” Additionally, there are concerns that young users may be cut off from beneficial online communities and vital information.

The impact of the ban will be closely scrutinized post-implementation, with the government planning to evaluate its effects in two years. Results will be monitored by other nations interested in how these policies influence youth mental health.

“Australia is presenting the world with a unique opportunity for a controlled experiment,” stated Corby. “This is a genuine scientific inquiry that is rare to find.”


Source: www.newscientist.com

Social Media Continues to Promote Suicide-Related Content to Teens Despite New UK Safety Regulations

Social media platforms continue to disseminate content related to depression, suicide, and self-harm among teenagers, despite the introduction of new online safety regulations designed to safeguard children.

The Molly Rose Foundation created a fake account pretending to be a 15-year-old girl and interacted with posts concerning suicide, self-harm, and depression. This led to the algorithm promoting accounts filled with a “tsunami of harmful content on Instagram reels and TikTok pages,” as detailed in the charity’s analysis.

An alarming 97% of recommended videos viewed on Instagram reels and 96% on TikTok were found to be harmful. Furthermore, over half (55%) of TikTok’s harmful recommended posts included references to suicide and self-harm, while only 16% contained protective or help-seeking references for users.

These harmful posts garnered substantial viewership. One particularly damaging video was liked over 1 million times on TikTok’s For You Page, and on Instagram reels, one in five harmful recommended videos received over 250,000 likes.

Andy Burrows, CEO of The Molly Rose Foundation, stated: “Persistent algorithms continue to bombard teenagers with dangerous levels of harmful content. This is occurring on a massive scale on the most popular platforms among young users.”

“In the two years since our last study, it is shocking that the magnitude of harm has not been adequately addressed, and that risks have been actively exacerbated on TikTok.

“The measures instituted by Ofcom to mitigate algorithmic harms are, at best, temporary solutions and are insufficient to prevent preventable damage. It is crucial for governments and regulators to take decisive action to implement stronger regulations that platforms cannot overlook.”

Researchers examining platform content from November 2024 to March 2025 discovered that while both platforms permitted teenagers to provide negative feedback on content, as required by Ofcom under the online safety law, this function also allowed for positive feedback on the same material.

The foundation’s report, developed in conjunction with Bright Data, indicates that while the platforms have made it harder to search for hazardous content via hashtags, they still amplify harmful material through personalized AI recommendation systems once a user has engaged with it. The report further observed that platforms often rely on overly broad definitions of harm.

This study provided evidence linking exposure to harmful online content with increased risks of suicide and self-harm.

Additionally, it was found that social media platforms profited from advertisements placed next to numerous harmful posts, including those from fashion and fast food brands popular among teenagers as well as UK universities.


Ofcom has begun implementing child safety codes under the online safety laws aimed at “taming toxic algorithms.” The Molly Rose Foundation, which receives funding from Meta, expresses concern that regulators have proposed a mere £80,000 for these improvements.

A spokesperson for Ofcom stated, “Changes are underway. Since this study was conducted, new measures have been introduced to enhance online safety for children. These will make a significant difference, helping to prevent exposure to the most harmful content, including materials related to suicide and self-harm.”

Technology Secretary Peter Kyle mentioned that 45 sites have been under investigation since the enactment of the online safety law. “Ofcom is also exploring ways to strengthen existing measures, such as employing proactive technologies to protect children from self-harm and recommending that platforms enhance their algorithmic safety,” he added.

A TikTok spokesperson commented: “TikTok accounts for teenagers come equipped with over 50 safety features and settings that allow for self-expression, discovery, and learning while ensuring safety. Parents can further customize content and privacy settings for their teens through family pairing.”

A Meta spokesperson disputed the claims made in the report, citing its limited methodology, and stated:

“Millions of teenagers currently use Instagram’s teenage accounts, which offer built-in protections that limit who can contact them, the content they can see, and their time spent on Instagram. Our efforts to utilize automated technology continue in order to remove content that promotes suicide and self-harm.”

Source: www.theguardian.com

Addressing Social Media Toxicity: Algorithms Alone Won’t Solve the Problem

Can the problems with social media be fixed?

MoiraM/Alamy

The polarization seen on social media is not simply a product of algorithms. Research conducted with AI-generated users suggests it stems from fundamental aspects of how these platforms operate, indicating that genuine solutions will require a re-evaluation of online communication frameworks.

Petter Törnberg from the University of Amsterdam and his team created 500 AI chatbots reflecting a diverse range of political opinions in the United States, based on the National Election Survey. Built on the GPT-4o mini large language model, these bots were programmed to engage with one another on simplified social networks free of commercial influences or algorithms.

Across five rounds of experiments, each consisting of 10,000 actions, the AI agents predominantly interacted with like-minded agents. Those with more extreme views attracted more followers and reposts, increasing the visibility of the most partisan content.
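
To give a flavour of the mechanics, here is a minimal Python sketch of this kind of agent-based simulation: agents hold an ideology score, engagement favours like-minded and more extreme authors, and reach accumulates through reposts and follows. It is not the researchers’ code, and every parameter here is an illustrative assumption.

```python
# Toy agent-based simulation of homophily and repost dynamics on a
# stripped-down social network. Parameters are illustrative only.

import random

random.seed(0)
N_AGENTS, N_ACTIONS = 500, 10_000
ideology = [random.uniform(-1, 1) for _ in range(N_AGENTS)]  # -1 .. +1
followers = [0] * N_AGENTS

for _ in range(N_ACTIONS):
    reader, author = random.sample(range(N_AGENTS), 2)
    similarity = 1 - abs(ideology[reader] - ideology[author]) / 2  # 0..1
    extremity = abs(ideology[author])                              # 0..1
    # Engagement rises with like-mindedness, boosted by the author's extremity.
    if random.random() < 0.5 * similarity + 0.5 * similarity * extremity:
        followers[author] += 1  # a repost also earns the author a follower

# Do extreme authors end up with more reach, as the study found?
extreme = [followers[i] for i in range(N_AGENTS) if abs(ideology[i]) > 0.7]
moderate = [followers[i] for i in range(N_AGENTS) if abs(ideology[i]) < 0.3]
print(f"mean followers, extreme authors:  {sum(extreme) / len(extreme):.1f}")
print(f"mean followers, moderate authors: {sum(moderate) / len(moderate):.1f}")
```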

In prior research, Törnberg and his colleagues explored whether different algorithmic approaches in simulated social networks could mitigate political polarization. However, the new findings appear to challenge earlier conclusions.

“We expected this polarization to be largely driven by algorithms,” Törnberg states. “[We thought] the platform is geared towards maximizing engagement and inciting outrage, thus producing these outcomes.”

Instead, they found that the algorithm itself isn’t the primary culprit. “We created the simplest platform imaginable, and yet we saw these results immediately,” he explains. “This suggests that there are deeply ingrained behaviors linked to following, reposting, and engagement that are at play.”

To see if these ingrained behaviors could be moderated or counteracted, the researchers tested six potential interventions. These included displaying feeds in purely chronological order, diminishing the visibility of viral content, concealing opposing viewpoints, amplifying sympathetic and rational content, hiding follower and repost counts, and obscuring profile bios.

Most interventions yielded minimal effects: cross-partisan engagement shifted by about 6% or less, and the prominence of top accounts changed by 2-6%, while some modifications, like concealing bios, actually worsened polarization. Trade-offs also emerged: changes that reduced attention inequality made extreme posts more prominent, while changes aimed at softening partisanship drew more attention to a small group of elite users.

“Most activities on social media devolve into toxic interactions. The root issues with social media stem from its foundational design, which can accentuate negative human behavior,” states Jess Maddox of the University of Georgia.

Törnberg recognizes that while this experiment simplifies various dynamics, it provides insights into what social platforms can do to curb polarization. “Fundamental changes may be necessary,” he cautions. “Tweaking algorithms and adjusting parameters might not be sufficient; we may need to fundamentally rethink interaction structures and how these platforms shape our political landscapes.”


Source: www.newscientist.com

Palestinian Social Media Accounts Seeking Funds Flagged as Spam | Technology

Hanin Al-Batsh estimates that over the past six months, she has created more than 80 accounts on Bluesky.

Like many other Palestinians struggling to secure food in Gaza, Al-Batsh hopes that Bluesky will help her raise enough funds for flour and milk for her children as part of her crowdfunding efforts.

She shared that posting to text-based social networks has become even more critical as Israel tightens its hold on Gaza, leading to widespread starvation.

“Hello, my kids are getting weaker, losing weight, and suffering from malnutrition and low iron levels,” said Al-Batsh in her most recent post.


Images shared with The Guardian by the young mother reveal her two sons, Ahmed, aged 1.5, and Adam, who is three, lying on a makeshift bed on the floor of the warehouse where they are taking shelter.

As hunger proliferates across Gaza and aid remains scarce, Palestinians are increasingly looking to crowdfunding platforms like GoFundMe and Chuffed as their lifelines.

However, their attempts to promote their campaigns on social media often result in their accounts being shut down or flagged as spam, particularly on Bluesky, the emerging Twitter alternative popular among Palestinians in Gaza.

According to her, Bluesky deactivated almost all of Al-Batsh’s accounts just days later, with the longest one remaining active for only 12 days.

When a social network such as Bluesky flags an account as spam, she feels compelled to establish a new account, reassuring potential donors that she is not a bot.




View of North Gaza from Jordanian aid aircraft on August 5, 2025. Photo: Alessio Mamo/The Guardian

Ironically, the shutdowns, intended to combat bots and fraud, compel Bluesky users to repeatedly tag the same individuals who previously engaged with them in order to be seen at all.

Although Al-Batsh refrains from tagging individuals in every post after receiving strict instructions from Bluesky, she expresses frustration, stating, “Now no one can find my posts.”

Desperation drives many Palestinians to act like bots. With new accounts, it becomes increasingly challenging for individuals like Al-Batsh to refute accusations of being automated accounts; fewer followers and repetitive tagging can trigger suspicions.

Nevertheless, grassroots responses to the issue have emerged. Since May, Al-Batsh has started marking her posts with a green checkmark emoji and the phrase “verified by Molly Shah.”

A small group of volunteers assist her with similar tasks on Bluesky. Comparable initiatives are taking place across various social media platforms, with some run by larger teams of volunteers. X and Instagram have Gazafunds and Radio Watermelon, while Tumblr has Gaza Vetters.

Despite this, Shah expresses her desire for a more structured system, stating, “This is too much focus on me.”

Guerrilla Verification Network

Shah has been involved with Bluesky since its early days.

Her verification project began in 2023, when she encouraged her friend Jamal to set up a Bluesky account to share posts for his campaign, hoping to draw attention to it. Jamal managed to raise enough money to leave Gaza.




Palestinians gather at the Jikim intersection to receive limited flour and basic food aid as hunger intensifies due to the ongoing Israeli blockade in North Gaza on August 7, 2025. Photo: Mahmoud Issa/Anadolu via Getty Images

Shah’s verification project gained momentum as more individuals from Gaza joined the social network. Many reached out to her, hoping that she would share their campaigns with her substantial follower base of 57,000. She began vetting the individuals and families behind each campaign before sharing their information, paving the way for her guerrilla verification network.

Today, Shah maintains spreadsheets covering more than 300 accounts that she has verified. They use the same authentication badge as Al-Batsh, labeling their posts and profile pages with “Verified by Molly Shah.” While this stamp does not entirely prevent Bluesky’s system from flagging accounts as spam, she hopes it reassures users that the account owner is genuine.

“The validation appears to help people recognize that these are real individuals,” Shah stated. “My main goal is not to fundraise; it’s about combatting the ongoing and systemic dehumanization of Palestinians.”


Shah explained that the review process is not standardized; it involves video calls, and she accepts documents from people she has already vetted or knows personally to validate identities and confirm that applicants are in Gaza. The process is time-consuming; Al-Batsh reported waiting two months for a response from Shah. Occasionally, Shah encounters individuals who falsely claim to be from Gaza or misrepresent their circumstances, but most are genuine people seeking assistance.

Crucial Fraud Prevention

According to aid and human rights organizations, Gaza is facing unprecedented levels of hunger, raising the stakes for fundraising campaigns and amplifying the importance of every Bluesky post. Duaa al-Madoon, another mother in Gaza, recently shared her struggles to feed her three children and said that Bluesky had deleted her account as well. The cost of flour, milk, and diapers can reach $100 daily when they are available at all; recently, she has struggled to find diapers and milk, going days without eating to ensure her children are fed.

“My child has no proper diaper, causing severe rashes. Food is scarce and exorbitantly priced. If you manage to get something, it’s mainly rice,” lamented Al-Madoon.


According to Nat Calhoun, who has supported several families in Gaza through a campaign, the impact of fundraising can be immediate. In one case, a family contacted them about an elderly woman in Mawasi who had not eaten for several days. They were able to raise $110 to supply her with flour and sent her the funds the next day.

“It can be instantaneous,” Calhoun noted. “I don’t think people realize how much their support can genuinely impact someone’s day.”

To receive funds raised through campaigns, Palestinians must collaborate with “recipients”: individuals outside Gaza who initiate campaigns, collect funds on their behalf, and transfer the money through banks, because the payment processors used by crowdfunding platforms do not operate in Gaza.

This system necessitates that Palestinians place substantial trust in these intermediaries, individuals they have never met.

Consequently, campaigns and the Palestinians they aim to assist are vulnerable to fraud.




Amira Mutea reflected on her struggle with malnutrition in Gaza on August 5, 2025. Photo: Mahmoud Issa/Reuters

Calhoun and Shah noted that much of the fraud they encounter exploits vulnerable Palestinians.

Al-Batsh’s initial campaign on GoFundMe was organized by a woman who claimed to be located in Tucson, Arizona. The campaign raised almost $37,000, but Al-Batsh only received about $34,000 before the campaign organizer faced issues accessing her account. “I have never received the remaining funds,” Al-Batsh lamented.

“The thought of it is maddening,” said Calhoun. “Because the people of Gaza cannot fundraise independently. They are at the mercy of others and must trust that those people will treat them fairly.”

Requesting Changes from Bluesky

Bluesky’s spam filters often obstruct donations. Ad hoc verification systems like Shah’s provide a level of assurance that the funds donated are directed to legitimate individuals in Gaza rather than fraudulent entities.

When Shah shares a campaign, the difference is noticeable. Al-Batsh’s campaign garnered 10 donations ranging from $5 to $505 within just two days of her sharing it, compared to an average of two or three donations per day prior.

Although her validation network has helped some Palestinians maintain their online presence, Shah admits that it is not a sustainable solution. Overwhelmed by requests, she has limited her sharing to one account daily.

Meanwhile, thousands of Bluesky users have signed open letters urging the platform to enhance its moderation practices.

“We understand that when posting a fundraising link, Gazans may trigger Bluesky’s automated spam filters,” states an open letter signed by 7,000 individuals. “However, treating a vulnerable group the same way the platform treats T-shirt spam bots is not only cruel but exacerbates their struggle for survival.”




Israeli activists protest in Tel Aviv against the bombing, starvation, and forced evacuation of Palestinians in the Gaza Strip. Photo: Ariel Shalit/AP

Bluesky stated in response to the open letter that it is committed to ensuring that the voices of Gaza residents are heard on its platform. However, they noted that certain account activities violated community guidelines and urged users to focus their efforts through verified accounts.

Bluesky did not respond to The Guardian’s requests for comment.

“We acknowledge that we may not always make the right moderation decisions, which is why we have an appeals process,” the statement continued. However, Shah and others advocating for Gaza residents say very few receive responses when filing appeals, making it challenging for Palestinians to maintain account access beyond a brief period.

Shah noted that Bluesky had an opportunity to improve its moderation systems in the early days of the conflict in Gaza, when fewer users were on the platform. She wishes the platform had seized that opportunity.

“It seems that Bluesky is saying, ‘we’re eliminating spammers,’ but it’s the very people we are striving to protect who are being targeted,” she concluded.

Source: www.theguardian.com

Arts and Media Groups Call for AI Training Ban to Combat “Rampant Theft” of Australian Content

Arts, creative, and media organizations are urging the government to prohibit large tech companies from using Australian content to develop artificial intelligence models, amid growing concern that allowing the practice would “betray” Australian workers and facilitate the “widespread theft” of intellectual property.

The Albanese government has stated that it has no intention of altering copyright laws, but emphasizes that any changes must consider their effects on artists and news media. Opposition leader Sussan Ley has called for compensation for any use of copyrighted material.

“It is unacceptable for Big Tech to exploit the work of Australian artists, musicians, creators, and journalists without just compensation,” Ley asserted on Wednesday.


The Productivity Commission’s interim report, Harnessing Data and Digital Technology, proposes regulations for technologies including AI in Australia, projecting a productivity increase of 0.5% to 13% over the next decade and potentially adding $116 billion to the nation’s GDP.

The report highlighted that building AI models demands a substantial amount of data, prompting concerns from many players, including Creative Australia and copyright agencies, about the misuse of copyrighted content for AI training.

The commission outlined potential solutions, advocating for an expansion of licensing agreements, an exemption for “text and data mining,” and changes modelled on the fair use provisions already in place in other countries.

This latter suggestion faced significant opposition from arts, creative, and media organizations. They expressed discontent at the idea of allowing wealthy tech companies to utilize their work for AI training without appropriate compensation.

Such a shift could jeopardize existing licensing agreements formed between publishers and creators with major tech firms and complicate negotiations for news media seeking fair compensation from social media platforms for journalism online.


The Australian Council of Trade Unions (ACTU) criticized the Productivity Commission’s proposal, claiming it advances the interests of large multinational corporations and warning that it would betray Australian workers.

“The extensive discussion surrounding text and data mining exemptions risks normalizing the theft of creative works from Australian artists and Indigenous communities,” the ACTU said.

Joseph Mitchell, ACTU Secretary, indicated that such exemptions would allow “high-tech corporations to reap the full benefits of advanced technology without giving back to the creators.”

APRA Chair Jenny Morris is among those who have voiced concerns over potential exemptions for “text and data mining” used in AI training. Photo: AAP

Australia’s music rights organizations, APRA AMCOS and the National Aboriginal and Torres Strait Islander Music Office, expressed disappointment at the commission’s recommendations, raising alarms about the implications for Australia’s $9 billion music sector.

APRA Chair Jenny Morris stressed that this recommendation highlights a recognition that these practices are already widespread.

Attorney General Michelle Rowland, responsible for copyright legislation, stated that any advancements in AI must prioritize building trust and confidence.

“Any reforms to Australia’s copyright law must reflect the effects on the nation’s creative and news sectors. We remain dedicated to participating in dialogues around these issues, particularly with the copyright and AI reference groups initiated by the government last year,” she mentioned.


When asked about the commission’s findings, Ley expressed concern regarding the absence of sufficient “guardrails” from the government to tackle AI-related issues.

“We need to safeguard content creators… their work rightfully belongs to them, and we must not take it without compensating them,” she added.

Ed Husic, Labor’s former minister for industry and science, defended the overall outlook for the economy on Wednesday. Treasurer Jim Chalmers later commented on ABC’s 7.30, saying, “The mechanism you deploy, whether one act or multiple existing acts… is not the crux of the issue.”

“I believe we can strike a balance between concerns that AI is harmful and those who pretend we can return to a previous state,” he indicated.

“There are no current plans to undermine or alter Australia’s copyright arrangements.”

Arts Minister Tony Burke highlighted a submission from Creative Australia regarding the review. He stated that, “It emphasizes the necessity for consent, transparency, and fair compensation concerning copyright and labeling.”

In a statement, Creative Australia asserted that the nation has the potential to lead globally in establishing “fair standards” for AI application.

“Artists and creatives whose work is utilized in training AI are entitled to proper compensation,” a spokesperson remarked.

“Innovation should not come at the cost of ethical business practices.”

The Australian Publishers Association (APA) has expressed worries about the possibility of works being utilized without authorization or compensation.

“While we support responsible innovation, this draft proposal favors infringers over investors,” stated Patrizia Di Biase-Dyson, CEO of APA.

“We oppose the idea that Australian narratives and educational materials integral to our culture and democracy should be treated as free resources for corporate AI systems.”

The Copyright Agency likewise spoke against the text and data mining exemption, emphasizing that it would adversely affect creators’ revenue.

“The movement towards revision of the Australian copyright system stems from large multinational corporations, and it does not serve the national interest,” remarked CEO Josephine Johnston. “To empower Australia’s high-quality content in the new AI era, it’s critical that creators receive fair compensation.”

Source: www.theguardian.com

Transatlantic Social Media Clash: Impact of UK Online Safety Laws on Internet Safety

The UK’s new online safety laws are generating considerable attention. As worries intensify about the accessibility of harmful online content, regulations have been instituted to hold social media platforms accountable.

However, just days after their implementation, novel strategies for ensuring children’s safety online have sparked discussions in both the UK and the US.

Recently, Nigel Farage, leader of the populist party Reform UK, found himself in a heated exchange with a minister in the Labour government after announcing his intent to repeal the law.

In parallel, Republicans convened with British lawmakers and the communications regulator Ofcom. The ramifications of the new law are also keenly observed in Australia, where plans are afoot to prohibit social media usage for those under 16.

Experts note that the law embodies a tension between swiftly eliminating harmful content and preserving freedom of speech.


Responding to criticisms of the UK legislation, technology secretary Peter Kyle remarked, “If individuals like Jimmy Savile were alive today, they would still commit crimes online, and Nigel Farage claims to be on their side.”

Kyle referred to measures in the law that would help shield children from grooming via messaging apps. Farage condemned the technology secretary’s comments as “unpleasant” and demanded an apology, which is unlikely to be forthcoming.

“It’s below the belt to suggest we would do anything to assist individuals like Jimmy Savile in causing harm,” Farage added.

Criticism of the law has not been confined to the UK. US Vice President JD Vance claimed that freedom of speech in the UK is “in retreat.” Last week, Republican Rep. Jim Jordan, a critic of the legislation, led a group of US lawmakers in discussions with Kyle and Ofcom regarding the law.

Jordan labeled the law as “UK online censorship legislation” and criticized Ofcom for imposing regulations that “target” and “harass” American companies. A bipartisan delegation also visited Brussels to explore the Digital Services Act, the EU’s counterpart to the online safety law.

Scott Fitzgerald, a Republican member of the delegation, noted the White House would be keen to hear the group’s findings.

Concerns from the Trump administration have even extended to threats of visa restrictions against Ofcom and EU personnel. In May, the State Department announced it would block entry to the US for “foreigners censoring Americans.” Ofcom has expressed a desire for “clarity” regarding the planned visa restrictions.

The intersection of free speech concerns with economic interests is notable. Major tech platforms including Google, YouTube, Facebook, Instagram, WhatsApp, Snapchat, and X are all based in the US and may face fines of up to £18 million or 10% of global revenue for violations. For Meta, the parent company of Instagram, Facebook, and WhatsApp, this could result in fines reaching $16 billion (£11 billion).

On Friday, X, the social media platform owned by self-proclaimed free speech advocate Elon Musk, issued a statement opposing the law, warning that it could “seriously infringe” on free speech.

Signs of public backlash are evident in the UK. A petition calling for the law’s repeal has garnered over 480,000 signatures, making it eligible for debate in Parliament, and was shared on social media by far-right activist Tommy Robinson.

Tim Bale, a political professor at Queen Mary University in London, is skeptical about the law being a major voting issue.

“No petition or protest has significant traction for most people. While this resonates strongly with those online—on both the right and left—it won’t sway a large portion of the general populace,” he said.

According to a recent Ipsos Mori poll, three out of four UK parents are worried about their children’s online activities.

Beeban Kidron, a crossbench peer and prominent advocate for online child safety, told the Guardian that she is “more than willing to engage Nigel Farage and his colleagues on this issue.”


“If companies focus on targeting algorithms toward children, why would reforms place them in the hands of Big Tech?”

The UK’s new under-18 protections, which prompted the latest controversy, mandate age verification on adult sites to prevent underage access. There are also measures to protect children from content that endorses suicide, self-harm, and eating disorders, as well as to curtail the circulation of material that incites hatred or promotes harmful substances and dangerous challenges.

Some content has been age-gated to avoid falling foul of these regulations. In an article for the Daily Telegraph, Farage alleged that footage of anti-immigrant protests had been “censored,” along with content related to the Rotherham grooming gang scandal.

These instances were observed on X, which restricted a speech by Conservative MP Katie Lam regarding the UK’s child grooming scandal. The content was labeled with a notice stating, “local laws temporarily restrict access to this content until X verifies the user’s age.” The Guardian could not access the age verification service on X, suggesting that, until age checks are fully operational, the platform defaults many users to a child-friendly experience.

X was contacted for comment regarding its age checks.

On Reddit, subreddits dedicated to alcohol abuse support and pet care will implement age checks before granting access. A Reddit spokesperson confirmed that the age checks are enforced under the online safety law to limit content that is illegal or harmful to users under the age of 18.

Big Brother Watch, an organization focused on civil liberties and privacy, noted that examples from Reddit and X exemplify the overreach of new legislation.

An Ofcom representative stated that the law aims to protect children from harmful and criminal content while simultaneously safeguarding free speech. “There is no necessity to limit legal content accessible to adult users.”

Mark Jones, a partner at London-based law firm Payne Hicks Beach, cautioned that social media platforms might overly censor legitimate content due to compliance concerns, jeopardizing their obligations to remove illegal material or content detrimental to children.

He added that the tension between the pressure to remove harmful content quickly and the duty to respect freedom of speech is likely to make Ofcom’s content rules difficult to apply in practice.

“To effectively curb the spread of harmful or illegal content, decisions must be made promptly; however, the urgency can lead to incorrect choices. Such is the reality we face.”

The latest initiatives from the online safety law are only the beginning.

Source: www.theguardian.com

Enforcement of Australia’s Social Media Ban for Users Under 16: Which Platforms Are Exempt?

Australians using social media platforms like Facebook, Instagram, YouTube, Snapchat, and X will need to show they are over 16 ahead of the social media ban set to commence in early December.


Beginning December 10th, new regulations will come into effect for platforms defined by the government as “age-restricted social media platforms.” These platforms are intended primarily for social interactions involving two or more users, enabling users to share content on the service.

The government has not specified which platforms are included in the ban, implying that any site fitting the above criteria may be affected unless it qualifies for the exemptions announced on Wednesday.

Prime Minister Anthony Albanese noted that platforms covered by these rules include, but aren’t limited to, Facebook, Instagram, X, Snapchat, and YouTube.

Communications Minister Anika Wells indicated that platforms are expected to deactivate accounts held by users under 16 and to take reasonable steps to verify users’ ages and prevent younger individuals from creating new accounts or bypassing the restrictions.


What is an Exemption?

According to the government, a platform will be exempt if it serves a primary purpose other than social interaction.

  • Messaging, email, voice, or video calling.

  • Playing online games.

  • Sharing information about products or services.

  • Professional networking or development.

  • Education.

  • Health.

  • Communication between educational institutions and students or their families.

  • Facilitating communication between healthcare providers and their service users.

Determinations regarding which platforms meet the exemption criteria will be made by the eSafety Commissioner.

In practice, this suggests that platforms such as LinkedIn, WhatsApp, Roblox, and Coursera may qualify for exemptions if assessed accordingly. LinkedIn has previously argued that its platform holds little appeal for children.


Hypothetically, platforms like YouTube Kids could be exempt from the ban if they satisfy the exemption criteria, particularly as comments are disabled on those videos. Nonetheless, the government has yet to provide confirmation, and YouTube has not indicated if it intends to seek exemptions for child-focused services.


What About Other Platforms?

Platforms not named by the government and that do not meet the exemption criteria should consider implementing age verification mechanisms by December. This includes services like Bluesky, Donald Trump’s Truth Social, Discord, and Twitch.


How Will Tech Companies Verify Users Are Over 16?

A common misunderstanding regarding the social media ban is that it solely pertains to children. To ensure that teenagers are kept from social media, platforms must verify the age of all user accounts in Australia.

There are no specific requirements for how verification should be conducted, but updates from the Age Assurance Technology Trial will provide guidance.

The government has stipulated that identity checks can be one form of age verification, but they cannot be the only method platforms accept.

Australia is likely to adopt an approach for age verification comparable to that of the UK, initiated in July. This could include options such as:

  • Allowing users to confirm their age through their bank or mobile provider.

  • Requesting users to upload a photo to match with their ID.

  • Employing facial age estimation techniques.

Moreover, platforms may estimate a user’s age based on account behavior or the age of the account itself. For instance, if an individual registered on Facebook in 2009, they are now over 16. YouTube has also indicated plans to use artificial intelligence for age verification.
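
As a toy illustration of that account-age inference, the sketch below computes the youngest an account holder could plausibly be from the account’s creation date. The minimum signup age of 13 is an assumption for illustration, not a stated part of the scheme.

```python
# Hypothetical account-age inference: an account created in 2009 must belong
# to someone who is now over 16, whatever birth date they declared.

from datetime import date
from typing import Optional

def minimum_possible_age(created: date, min_signup_age: int = 13,
                         today: Optional[date] = None) -> int:
    today = today or date.today()
    years_held = (today - created).days // 365
    # Even a user who joined at the platform's minimum signup age has
    # aged `years_held` years since registering.
    return min_signup_age + years_held

# A Facebook account registered in 2009 implies an owner aged at least
# 13 + 16 = 29 by the ban's start date.
print(minimum_possible_age(date(2009, 6, 1), today=date(2025, 12, 10)))  # 29
```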


Will Kids Find Workarounds?

Albanese likened the social media ban to alcohol restrictions, acknowledging that while some children may circumvent the ban, he affirmed that it is still a worthwhile endeavor.

In the UK, where age verification requirements for accessing adult websites were implemented this week, there has been a spike in the use of virtual private networks (VPNs) that conceal users’ actual locations, granting access to blocked sites.

Four of the top five free apps in the UK Apple App Store on Thursday were VPN applications, with the most widely used one, Proton, reporting an 1,800% increase in downloads.


The Australian government expects platforms to implement “reasonable measures” to address how teenagers attempt to evade the ban.


What Happens If a Site Does Not Comply With the Ban?

Platforms failing to implement what the eSafety commissioner deems “reasonable measures” to prevent children from accessing their services may incur fines of up to A$49.5 million, as determined by the federal court.

What counts as “reasonable measures” will be assessed by the commissioner. When asked on Wednesday, Wells stated, “I believe a reasonable step is relative.”

“These guidelines are meant to work, and any mistakes should be rectified. They aren’t absolute settings or rules, but frameworks to guide the process globally.”


Source: www.theguardian.com

How Platforms Are Lobbying for Exemptions from Australia’s Teen Social Media Ban

The Australian government is rapidly identifying which social media platforms will face restrictions for users under 16.

Social Services Minister Tanya Plibersek stated on Monday that the government “will not be intimidated by the actions of social media giants.” Nevertheless, tech companies are vigorously advocating for exemptions from the law set to take effect in December.

Here’s what social media companies are doing to support their case:


Meta

The parent company of Facebook and Instagram has introduced new Instagram teen account settings to signal its commitment to teenage safety on the platform.

Recently, Meta revealed new protections aimed at enhancing direct message security by automatically screening nude images and implementing blocking features.

Additionally, Meta hosted a “Screen Smart” safety event in Sydney targeted at “Parent Creators,” led by Sarah Harris.


YouTube

YouTube’s approach is even more assertive. Last year, then communications minister Michelle Rowland suggested the platform would be exempt from the social media restrictions.

However, last month, the eSafety commissioner advised the government to reconsider this exemption, citing research indicating that children often encounter harmful materials on YouTube.

Since then, the company has escalated its lobbying efforts, including full-page advertisements claiming YouTube can be used by “everyone,” alongside a letter sent to Communications Minister Anika Wells warning of a potential high court challenge if YouTube is subjected to the ban.


YouTube advertisement campaign opposing social media restrictions set to commence in December. Photo: Michael Karendiane/Guardian

As reported by Guardian Australia last month, Google is hosting its annual showcase at Parliament House on Wednesday. There, content creators, including child musicians, who oppose the YouTube ban are expected to make their views known to politicians.

Last year’s event featured the Wiggles, who met with Rowland. The meeting was mentioned in a letter YouTube’s global CEO, Neal Mohan, sent to Rowland last year requesting confirmation of the promised exemption within 48 hours.

Guardian Australia reported last week that YouTube met with Wells this month for an in-person discussion regarding the ban.

TikTok


Screenshots from TikTok’s advertisements highlighting its benefits for teenagers. Photo: TikTok

This month, TikTok is running ads on its platform as well as on Meta channels, promoting educational benefits for teens on vertical video platforms.


“The 1.7m #fishtok videos encourage outdoor activities in exchange for screen time,” the advertisement states, a nod to the government’s assertion that the ban would promote time spent outdoors. “They are developing culinary skills through cooking videos that have garnered over 13m views,” it continues.

“A third of users visit the STEM feed weekly to foster learning,” another ad claims.

Snapchat


Screenshot of Snapchat’s educational video about signs of grooming featuring Lambros army. Photo: Snapchat

Snapchat emphasizes user safety. In May, Guardian Australia reported on an instance involving an 11-year-old girl who added random users as part of a competition with her friend for high scores on the app.

This month, Snapchat announced a partnership with the Australian Federal Police-led Australian Centre to address child exploitation through a series of educational videos shared by various Australian influencers, along with advertisements advising parents and teens on identifying grooming and sextortion.

“Ensuring safety within the Snapchat community has always been our top priority, and collaborating closely with law enforcement and safety experts is crucial to that effort,” stated Ryan Ferguson, Australia’s Managing Director at Snap.

The platform has also reiterated account settings for users aged 13-17, including default private accounts and chat warnings when communicating with individuals who lack shared friends or are absent from contact lists.

Thus far, the government seems unyielding.

“It is undeniable that young people’s mental health has been adversely affected due to social media engagement, prompting the government’s actions,” Prime Minister Anthony Albanese told ABC insiders on Sunday.

“I will meet again with individuals who have faced tragedy this week… one concern some social media companies have raised is that we are leading on this matter, and we take pride in effectively confronting these threats.”




Source: www.theguardian.com

Social Media is Over: What’s Next on the Horizon?

Matthias Oberholzer/Unsplash

One of the disheartening truths of the 21st century is that what we perceive as social media is essentially just mass media, albeit in a fractured state. Fortunately, journalists and creators are gradually transforming outdated media paradigms and forging ahead into innovative territory.

The phrase “mass media” gained traction in the 1920s to characterize popular culture in the industrial age. This involved mass-produced books, films, and radio shows, providing a shared experience in which many audiences could engage with identical media content simultaneously. Prior to the 20th century, most entertainment was experienced live, with performances varying slightly from one event to the next. Movies and radio broadcasts, by contrast, ensured uniformity, accessible to everyone at any given time, much like standardized mass-produced goods such as shoes and automobiles.

Social media did not significantly alter this model. Platforms like X, Facebook, and TikTok were designed for extensive reach and audience engagement. Every post, video, and live stream aims to captivate the broadest possible audience. While it is possible to tailor media for specific demographics or create filter bubbles, the fixation on follower counts illustrates that we remain entrenched in a mass media mindset, seeking to engage the largest number of viewers. This isn’t genuine “social” interaction; it’s merely mass-produced content under a different guise.

What if we endeavored to foster a truly social media experience devoid of algorithmic noise or political agendas? One alternative could be termed Cozy Media, which encompasses apps and content specifically crafted for nurturing connections among small groups of friends in serene, inviting settings. Envision the media counterpart of a friendly gathering, complete with card crafting or fireside chats.

The hallmark Cozy Media experience intertwines gaming elements with low-stress missions against charming backdrops. Developers are striving to replicate these cozy aesthetics in social applications. From group discussions to online book clubs, the emphasis is on comfort. Yet, it transcends mere aesthetics; Cozy Media platforms intentionally restrict interactions with random strangers, directing users instead toward trustworthy friends.

One app I’ve been utilizing frequently is Retro. Unlike Instagram, where creators often first gained exposure, Retro is primarily designed for engagement among small circles of trusted friends. There’s no algorithm promoting random content from strangers; when I log into Retro, it feels as though I’m engaging with peers rather than filtering through a deluge of nonsensical content and advertisements. My posts there are meant for a select few, allowing for meaningful interactions rather than shouting into the void of giant algorithms.

Cozy media often helps you connect with a small group of friends in a friendly and calm environment.

While Cozy Media may provide solace in chaotic times, the need for news and analytical perspectives remains. Regrettably, numerous reliable news outlets are in turmoil. Some American journalists, including those from the Washington Post, New York Times, and National Public Radio, have left their newsrooms, citing dwindling resources and threats to editorial independence.

Some of them, such as economist Paul Krugman and tech researcher Molly White, have successfully launched crowdfunded newsletters. Nonetheless, many journalists prefer not to work alone, as quality reporting often requires collaboration. As a result, several have banded together in worker-owned cooperatives to establish new publications while benefiting from institutional resources such as legal support, editing, and camaraderie. This model also benefits consumers, sparing them the need to find and subscribe to many individual newsletters just to keep abreast of current affairs.

The worker-owned cooperative model has already proven successful for several publications launched in recent years. 404 Media delivers vital news from the fields of technology and science. Defector is another worker-owned cooperative, focused on sports and politics. Aftermath covers gaming, while Hearing Things specializes in music. Flaming Hydra (to which I contribute) publishes political analysis, interviews, and cultural criticism. Additionally, Coyote Media is preparing to launch in the San Francisco Bay Area to cover local news, and many other worker-owned local media cooperatives are emerging.

Just like mass media, social media also contributes to feelings of loneliness and isolation. The essence of Cozy Media and worker-owned publications lies in the restoration of community and trust. We might be witnessing the dawn of a new information ecosystem aimed at helping us comprehend the world once more.

Annalee’s Week

What I’m reading

Between Two Rivers, Moudhy Al-Rashid’s wondrous history of ancient Mesopotamia.

What I’m watching

A new media podcast from former CNN reporter Oliver Darcy titled Power Lines.

What I’m working on

Writing an article for upcoming publication in Flaming Hydra.

Annalee Newitz is a science journalist and author. Their latest book is Automatic Noodles. They co-host the Hugo Award-winning podcast Our Opinions Are Correct. You can follow them @annaleen, and their website is techsploitation.com.


Source: www.newscientist.com

AI-Generated Summaries Lead to “Devastating” Audience Decline, Reports Online News Media

Media organizations have been alerted to the potential “devastating impacts” on their digital audiences as AI-generated summaries start to replace traditional search results.

The integration of Google’s AI Overviews, which condense search results into blocks of AI-generated text, is causing major concern among media proprietors. Some perceive the feature as a fundamental threat to organizations that rely on search traffic.

AI summaries can offer all the information users seek without necessitating a click on the original source, while links to traditional search results are relegated further down the page, thereby decreasing user traffic.

An analysis by the analytics company Authoritas indicates that a website previously ranked at the top of the search results may lose around 79% of its traffic for a given query when the results are presented beneath an AI summary.

The study also highlighted that links to YouTube, owned by Google’s parent company Alphabet, are more prominent than traditional search results. This investigation is part of a legal challenge against the UK’s competition regulator concerning the implications of Google’s AI summarization.

In a statement, a Google representative described the study as “based on inaccurate and flawed assumptions and analysis,” saying it relied on a set of searches that does not reflect real queries and produces outdated estimates of news website traffic.

“Users are attracted to AI-driven experiences, and AI features in search enable them to ask more questions, creating new opportunities for websites to be discovered,” the spokesperson stated. “We continue to send billions of clicks to websites every day and have not observed the significant declines in overall web traffic that have been suggested.”

A second study also found a substantial decline in referral traffic attributable to Google’s AI Overviews. The month-long study, conducted by the US think tank Pew Research Center, found that users clicked on a link beneath an AI summary in only one out of every 100 searches.

A Google spokesperson said this study, too, relied on “a distorted query set” and “flawed methodologies” that misrepresent actual search traffic.

Senior executives in news organizations claim that Google has consistently declined to share the necessary data to assess the impact of AI summaries.


Although AI Overviews appear in only a portion of Google searches, UK publishers say they are already feeling the effects. MailOnline executive Carly Stephen reported a significant decline in clicks from search results featuring AI summaries in May, with click-through rates falling by 56.1% on desktop and 48.2% on mobile.

The complaint to the UK’s Competition and Markets Authority was brought by the Independent Publishers Alliance in partnership with the technology justice organization Foxglove and advocates for the open web movement.

Owen Meredith, the CEO of the News Media Association, accused Google of “keeping users within their own enclosed spaces and trying to monetize them by incorporating valuable content, including news produced through the significant efforts of others.”

“The current circumstances are entirely unsustainable, and eventually, quality information will be eliminated online,” he stated. “The Competition and Markets Authority possesses tools to address these challenges, and action must be taken swiftly.”

Rosa Curling, director of Foxglove, remarked that the new research highlights “the devastating effects Google’s AI Overviews have already inflicted on the UK’s independent news sector.”

“If Google were merely taking the work of journalists and presenting it as its own, that would be concerning enough,” she said. “But worse, it is using this work to promote its own tools and products while making it ever harder for the media to reach the readers vital to their survival.”

Source: www.theguardian.com

Are a Few Hyperactive Users Ruining the Online Experience for Everyone?

While I browse social media, I often feel disheartened by the overwhelming negativity, as if the world is ablaze with hatred. Yet stepping into the streets of New York City for a coffee or lunch with friends presents a stark contrast: everything feels calm. This disparity between the digital realm and my everyday life is jarring.

My work addresses issues like intergroup conflict, misinformation, technology, and climate change: humanity’s biggest challenges. Yet online, discussion of these issues competes with equal fervor over the White Lotus finale and the latest YouTuber scandal. Everything seems either exaggeratedly amazing or utterly terrible. But is that truly how most of us feel? No. Recent research indicates that the online environment is skewed by a tiny, highly active user base.

In a paper I co-authored with Claire Robertson and Carina Del Rosario, we found significant evidence that social media does not neutrally represent society; instead, it acts as a funhouse mirror, amplifying extreme voices while obscuring more moderate and nuanced perspectives. Much of this distortion stems from a small share of hyperactive users: just 10% of users generate about 97% of political tweets.

Take Elon Musk’s own platform, X, as a case in point. Despite its vast user base, a select few create the majority of political content. Musk himself tweeted 1,494 times within the first 15 days of the government efficiency cuts carried out by his so-called Department of Government Efficiency (DOGE), and his prolific posting often spread misinformation to his 221 million followers.

On February 2nd, he claimed: “Did you know that USAID, using YOUR tax dollars, funded bioweapon research, including Covid-19, that killed millions of people?” This fits a broader pattern in which a small number of users drive misinformation: just 0.1% of users share 80% of false news. And twelve accounts, dubbed the “disinformation dozen,” were responsible for much of the vaccine misinformation seen on Facebook during the pandemic, creating a misleading impression of widespread vaccine hesitancy.

Similar trends appear across the digital landscape. A small faction engages in toxic behavior, disproportionately sharing hostile or misleading content on platforms from Facebook to Reddit. Most individuals do not fuel the online outrage; yet superusers dominate our collective perception because of their visibility and activity.

This leads to broader societal problems, because humans form mental models of what they believe others think, and those models shape social norms and group dynamics. On social media, this shortcut misfires: we encounter not a representative sample of views, but a stream of extreme, emotionally charged content.

Consequently, many individuals mistakenly believe society is far more polarized and misinformed than it is. We come to view those across generational gaps, political divides, or fandoms as radical, malicious, or simply foolish. Our information diets are shaped by a sliver of humanity that incessantly posts about its work, identity, or obsessions.

Such distortion fosters pluralistic ignorance, in which people act on a misreading of what others collectively believe and do. Think of voters who see only outrage-driven narratives and so assume there is no common ground on issues like immigration and climate change.

Yet the challenge isn’t solely about extremists; the design and algorithms of these platforms exacerbate the situation. Built to boost engagement, the algorithms favor sensational or divisive content, promoting precisely the users most likely to skew our shared reality.

The issue compounds. Imagine a bustling restaurant where each table raises its voice to be heard over the din, until soon it seems everyone is shouting. The same dynamic plays out online, with users exaggerating their views to capture attention and approval. Even those who are not typically extreme may mirror such behavior to gain traction.

Most of us are not diving into trolling battles on our phones; we’re preoccupied with family, friends, or simply seeking lighthearted entertainment online. Yet, our voices are overshadowed. We have effectively surrendered the mic to the most divisive individuals, allowing them to dictate norms and actions.

With over 5 billion people engaging on social media, this technology is here to stay. However, the toxic dynamics I’ve described don’t have to prevail. The initial step is recognizing this illusion and understanding that a silent majority often exists behind every heated thread. As users, we can take back control by curating our feeds, avoiding anger traps, and ignoring sensational content. Consider it akin to adopting a healthier, less processed informational diet.

In a recent series of experiments, we paid participants to unfollow the most divisive political accounts on X. A month later, they reported 23% less hostility toward opposing political groups. Their experience was so positive that nearly half chose not to refollow those accounts once the study ended, and those who maintained a healthier news feed reported reduced hostility even 11 months later.

Platforms could easily adjust their algorithms to stop highlighting the most outrageous voices and instead prioritize more balanced and nuanced content. This is what most people want. The internet is a powerful tool that can provide real value, but if it continues to reflect only a distorted funhouse version of reality shaped by extreme users, we will all face the repercussions.

Jay Van Bavel is a psychology professor at New York University.

Further Reading

The Righteous Mind by Jonathan Haidt (Penguin, £12.99)

Going Mainstream by Julia Ebner (Ithaca, £10.99)

The Chaos Machine by Max Fisher (Quercus, £12.99)

Source: www.theguardian.com

Algospeak Review: Key Insights on How Social Media Accelerates Language Evolution

Social Media and Short-Form Video Platforms Drive Language Innovation


Algospeak
Adam Aleksic (Every, UK, July 17; Knopf, US, July 15)

Nothing dates you like slang. Reading Adam Aleksic’s Algospeak: How Social Media Is Transforming the Future of Language, phrases like “pierce your gyat for the rizzler” and “wordpilled slangmaxxing” remind me that, as a millennial, I am now as distant from today’s Gen Alpha as boomers are from me.

A linguist and content creator (@etymologynerd), Aleksic charts a new wave of linguistic innovation fueled by social media, particularly short-form video platforms like TikTok. The term “algospeak” has traditionally referred to euphemisms used to dodge online censorship, with recent examples including “unalive” (in reference to death) and “segg” (for sex).

However, the author argues for broadening the definition to encompass every aspect of language affected by “the algorithm”: the various, often opaque processes social media platforms use to curate content for users.

Aleksic draws on his own experience of earning a living through educational videos about language. Like other creators, he must appeal to the algorithm, which requires careful word choice. A video he made dissecting the etymology of the word “pencil” (which traces back to the Latin penis, meaning “tail”) fell foul of sexual content rules, while a discussion of the phrase “from the river to the sea” stayed within acceptable limits.

Meanwhile, his videos exploring Gen Alpha terms like “skibidi” (a largely nonsensical term rooted in scat singing) and “gyat” (“goddamn” or “ass”) have performed particularly well. His findings illustrate how creators tweak their language for algorithmic advantage, with some words crossing from online to offline use with notable success. When Aleksic surveyed teachers, he found many of these terms had entered regular classroom slang, with some students learning the word “unalive” before they understood “suicide.”

A standout aspect of the book is its etymology, investigating how algorithms propel words from online subcultures into the mainstream lexicon. He notes that the misogynistic incel community is a significant contributor to contemporary slang, its extreme and insular nature accelerating linguistic evolution within the group.

Aleksic approaches these language trends non-judgmentally. He notes that “unalive” parallels earlier euphemisms like “passed away,” while “skibidi” is reminiscent of “Scooby-Doo.” And he pushes back against the tendency to pigeonhole slang within arbitrarily defined generations, or to cast it as a toxic corruption of the normal evolution of language.

The situation becomes more intricate when slang enters mainstream usage through cultural appropriation. Many contemporary slang terms, like “cool” before them, trace back to Black communities (“thicc,” “bruh”) or to the LGBTQ ballroom scene (“slay,” “yas,” “queen”). Such wholesale adoption can sever these terms from their historical contexts, which are often rooted in social struggle, and further entrench negative stereotypes about the communities that coined them.

Preventing this loss of context is difficult; the fate of successful slang is to be stripped of its original nuances. Social media has drastically accelerated the timeline of language innovation, and parts of Algospeak may date quickly as a result. But as long as the algorithms persist, its fundamental insights into how technology shapes language will remain relevant.

Victoria Turk is a London-based author.


Source: www.newscientist.com

What If Jesus Were a Blogger? Exploring AI-Driven Bible Stories on Social Media

Jesus strolls through a lush green field holding a selfie stick. The opening notes of an ethereal Billie Eilish tune rise like a prayer. “It’s all good, besties, this is my choice. Totally a genuine save-humanity arc,” he smiles. “Love that for me.” Jesus playfully tucks his hair behind his ears, Jonathan Van Ness-style.

We cut to a new scene. He still wields the selfie stick, but now he’s wandering through a gritty town. “So, I told the team I had to die. Peter literally tried to gaslight me. I’m not being dramatic, babe. This is prophecy.”

Cut to Jesus at a candlelit feast. “This one’s more of a chat, so here I am in the middle of dinner. Judas couldn’t even hold my gaze,” he says, shaking his head, then turns to the camera, grinning at his own insight. “Such a phony!”


Initially, videos of this genre, retellings of biblical tales through the lens of Americanized vlog culture, may seem bizarre and sacrilegious. But might they represent a unique synthesis of the holy trinity of 2025: AI, influencer culture, and rising conservatism? Are these videos emblematic of our era? Do they reflect the anxieties of American conservatism? Am I being subtly nudged toward Christianity? Why do these biblical retellings feel oddly alluring? Why can’t I look away? What’s happening to my brain?!

My first encounter with these biblical vlogs came while I lounged in bed. When the algorithm served up Joseph of Nazareth, I momentarily halted my endless scrolling. “Whoa, look at that fit! Ancient-rock vibes.” I wiped the drool from my chin and took a moment. Mindless scrolling may not usually cure mental fatigue, but that day I felt like Daniel in the lion’s den or Jonah in the whale: my commitment to scrolling had brought me a kind of salvation.


In my younger days, I flirted with religion. When my grandparents visited, I would kneel in prayer; I attended Bible study and joined youth groups to meet friends and boys. I had a brief infatuation with Hillsong (I was 13 and just wanted plans for a Friday night), which ended when a) the girl in front of me screamed that she’d been captured by the devil, and b) I sneaked behind the church curtains to find the teenagers lost in each other’s gazes.

My attitude toward both faith and spirituality has since transformed. Now, my spiritual routine consists of exclamations like “Jesus, take the wheel!” or “What a deity!” as I snap photos of church art while traveling through Catholic countries, to share on Instagram later.

Yet every night, I find myself scrolling past outfit and dining suggestions and immersing myself instead in these biblical vlogs. A vibe check from last night’s gathering. Then I slide into an unboxing of the Trojan horse, or perhaps a vox pop from Easter Monday, followed by a series of street reactions to David defeating Goliath. Something totally fascinating.


Recently, I came clean to a friend about my obsession, only to discover I was evangelizing to a fellow enthusiast. She suggested that Jesus resembled the first influencer, and that Mary and Joseph embodied the archetypal toxic vlog parents. If Judas were alive today, he would upload lengthy, unedited rants to YouTube.

Momentarily, I ponder the environmental ramifications. How much water was used to render Mary’s perfect dab? What resources were consumed so AI Jesus could jokingly narrate a wine-making tutorial? And what has all of this cost the planet? Hold on! Shhh, the next video is starting.

Adam is now seated in a podcast studio, headphones on, microphone in position, dressed casually in leaf-patterned fabric. “So, God creates me? Boom. The first man. No parents, nothing. And I’m like, ah, I’m literally going to be everyone’s dad! And when my kids fall out, they’ll clash endlessly. Another! Another! Another! Another!”