Understanding Internet Outages: Common Causes and Solutions | Claude AI

Anthropic's Claude chatbot outage

Outage Issues with Anthropic’s Claude Chatbot

Samuel Boivin/NurPhoto/Shutterstock

This week, the AI chatbot Claude experienced an outage, with users reporting that they were unable to access services via the Anthropic website; the issue persisted for approximately a week. Similar outages have impacted technology giants, government websites, and even hospitals. What is driving this surge in service disruptions?

The primary vulnerability of today’s internet lies in its heavy reliance on cloud computing. This shift has resulted in numerous services depending on just a few key providers like Amazon and Microsoft. During the early days of the internet, businesses operated on their own infrastructure—akin to a self-sufficient local store. When an issue arose in one area, others remained unaffected, but now, if a cloud provider faces difficulties, the repercussions resonate across multiple platforms.

Frequently, user-access issues stem from simple human errors. One notable incident underscoring these risks was the 2024 outage caused by cybersecurity firm CrowdStrike, which inadvertently released software configuration files that rendered millions of Windows computers inoperative—affecting airlines, banks, and emergency service centers globally.

Joseph Jarnecki from the Royal United Services Institute indicates that large-scale outages are typically not premeditated. Cybercriminals tend to focus on smaller targets rather than provoking major tech companies, preferring to extort ransom payments from vital services.

Tim Stevens from King’s College London highlights that ransomware attacks are increasingly directed at local authorities and crucial infrastructure. Hackers tend to infiltrate essential services such as water supplies and municipal governments, where they can hold operations hostage for payment.

The UK has witnessed such incidents, including ransomware attacks on Hackney Council,
Gloucester City Council, and
Leicester City Council, along with similar challenges faced by the NHS and local water suppliers. Stevens notes an ongoing cat-and-mouse game between hackers and cybersecurity experts, and unfortunately hackers currently appear to hold the upper hand. “In recent discussions, it’s been indicated that we’re losing ground. We’re not just behind; we’re actually losing,” Stevens said.

State-sponsored hackers from countries like Russia and China typically do not aim to disrupt cloud providers on a large scale. “While they do target these entities, their intentions are highly focused rather than destructive,” emphasizes Jarnecki.

For instance, the 2023 cyberattacks on U.S. government email accounts managed by Microsoft were attributed to a group linked to China. While this specific incident incurred minimal impact on overall services, it permitted unauthorized access to a wealth of sensitive U.S. information.

According to Sarah Kreps from Cornell University, cyberattacks are increasingly utilized by nations operating within a “gray zone”—a fluctuating state of unease that signifies neither full-scale peace nor active warfare. This tension often manifests as calculated disruptions aimed at weakening adversaries.

Kreps explains, “This approach acts similarly to economic sanctions; much of our GDP and overall economic stability hinges on the Internet. Disabling it critically impairs adversaries’ abilities to generate wealth, subsequently hindering their resource capabilities for warfare.”

Importantly, Krebs notes that Russia and China aren’t the sole practitioners of such tactics. Western nations, too, engage in cyber operations. Notably, intelligence agencies such as GCHQ and MI6 have previously compromised al-Qaeda computers, resulting in significant operational disruptions—these covert operations remain classified and occur behind the scenes.

Stevens mentioned, “It’s clear that Western intelligence and security agencies are conducting cyber operations against Russian assets. However, the legal frameworks often restrict the scope and intensity of these operations, which can be a source of frustration within the community.”

Claude has since resumed functioning, but Anthropic has yet to address inquiries from New Scientist regarding the recent outage effects.


Source: www.newscientist.com

Why the Internet Feels Lonely Right Now


Topanga Canyon, Topanga, California, USA - loneliness in the digital age

Exploring the Loneliness of Digital Connection

Brenna Panaguiton/Unsplash

In today’s fast-paced digital landscape, I often find myself glued to my smartphone. Like many in the United States, I turn to various apps for news, from social media posts to podcasts and newsletters. However, amidst the chaos—like the unfolding protests in Minneapolis—I’ve noticed an unsettling trend: the more I consume, the lonelier I feel.

This isn’t a new phenomenon; it’s been a topic of discussion among sociologists for nearly 80 years. In 1950, scholars David Riesman, Nathan Glazer, and Reuel Denney published their influential book, The Lonely Crowd. They argued that the advent of consumerism and mass media had produced a new personality archetype, acutely attuned to others and haunted by loneliness, which they labeled “other-oriented.” The description seems eerily relevant in our current social media age teeming with AI interactions.

Individuals who are other-oriented are constantly attuned to their peers, often using social cues to shape their choices related to purchases, fashion, and opinions. With their values stemming from contemporaries rather than historical influencers, they tend to prioritize present experiences over tradition. Riesman and his colleagues cautioned that an excessive focus on others can lead to a crippling fear of solitude.

These traits are starkly embodied in our engagement with social media, characterized by peer pressure, superficial connections, and even the growing surveillance culture. As we monitor one another, companies develop applications that simulate camaraderie, leaving us more isolated. This illustrates inherent risks of AI chatbots that are engineered to masquerade as companions.


When we shape our identity based on others’ expectations, we obscure our deeper selves.

There exists a contradiction within our social desires. While we yearn for inclusion, we also crave individuality. Riesman et al. contend that consumerism often creates a faux sense of unique identity. Consider the experience of browsing a rack of nearly identical polo shirts; selecting one may foster feelings of individuality, but fundamentally, they remain similar to one another.

This faux personalization frequently manifests in the algorithms governing our online interactions. Platforms like TikTok curate “For You” feeds of content aligned with our tastes, yet this personalization is controlled by algorithms we cannot inspect, nudging us all toward conformity.

As individuals shaped by external influences, we often find ourselves expressing our identities through group interactions, as advertisements prompt us to “join the conversation.” We generate content for the internet, portraying our lives through the lens of shared experiences.

Still, many of us wrestle with the lingering sensation of loneliness. This disconnect can be attributed to the variance between real-life relationships and those formed in digital spaces. Moreover, it may relate to the personality shift chronicled in The Lonely Crowd. By focusing excessively on others, we risk neglecting our genuine, idiosyncratic desires. Without self-awareness, meaningful connections with others become elusive.

Riesman and his collaborators proposed two solutions. First, they emphasized the need to reclaim our leisure time from the all-consuming media landscape. They argued that our vigilance towards peers often resembles labor, advocating for more playful engagement with life. Their second suggestion urged individuals, particularly children, to explore new identities and experiences. Reflect on activities you enjoy when not dictated by external definitions of “fun.” Try something novel, don vibrant or whimsical clothing, or chat with an unfamiliar neighbor. Allow yourself to be surprised and embrace experimentation.

Remember, neither a “For You” feed nor an AI chatbot can define your identity. So, take a break from your devices, engage in unexpected activities, and rediscover who you are.

What I Am Reading
Notes from the Kingslayer: a captivating narrative of rebellion and familial bonds by Isaac Ferman.

What I See
Fierce rivalry, because I know how to embrace enjoyment.

What I Am Working On
I’m exploring Sogdiana, my favorite ancient diaspora culture.

Annalee Newitz is a science journalist and author. Their latest book is Automatic Noodles. They co-host the Hugo Award-winning podcast Our Opinion Is Correct. Follow @annaleen and visit their website: techsploitation.com.



Source: www.newscientist.com

NSPCC Survey Reveals 1 in 10 UK Parents Report Online Threats Against Their Children

Almost 10% of parents in the UK report that their children have faced online threats, which can include intimidation over intimate photos and the exposure of personal information.

The NSPCC, a child protection charity, indicated that while 20% of parents are aware of a child who has been a victim of online blackmail, 40% seldom or never discuss the issue with their children.

According to the National Crime Agency, over 110 reports of attempted child sextortion are filed monthly. In these cases, gangs manipulate teenagers into sharing intimate images and then resort to blackmail.

Authorities in the UK, US, and Australia have noted a surge in sextortion cases, particularly affecting teenage boys and young men, who are targeted by cybercrime groups from West Africa and Southeast Asia. Tragically, some cases have ended in suicide, such as that of 16-year-old Murray Dowey from Dunblane, Scotland, who took his own life in 2023 after being sextorted on Instagram, and 16-year-old Dinal de Alwis, who died in Sutton, south London, in October 2022 after being threatened over nude photographs.

The NSPCC released its findings based on a survey of over 2,500 parents, emphasizing that tech companies “fail to fulfill their responsibility to safeguard children.”

Rani Govender, policy manager at the NSPCC, stated: “Children deserve to be safe online, and this should be intrinsically woven into these platforms, not treated as an afterthought after harm has occurred.”

The NSPCC defines blackmail as threats to release intimate images or videos of a child, or any private information the victim wishes to keep confidential, including aspects like their sexuality. Such information may be obtained consensually, through coercion, manipulation, or even via artificial intelligence.

The perpetrators can be outsiders, such as sextortion gangs, or acquaintances like friends or classmates. Blackmailers might demand various things in exchange for not disclosing information, such as money, additional images, or maintaining a relationship.

The NSPCC explained that while extortion overlaps with sextortion, it encompasses a broader range of situations. “We opted for the term ‘blackmail’ in our research because it includes threats related to various personal matters children wish to keep private (e.g., sexual orientation, images without religious attire) along with various demands and threats, both sexual and non-sexual,” the charity noted.

The report also advised parents to refrain from “sharenting,” the practice of posting photos or personal information about their children online.

Experts recommend educating children about the risks of sextortion and being mindful of their online interactions. They also suggest creating regular opportunities for open discussions between children and adults, such as during family meals or car rides, to foster an environment where teens are comfortable disclosing if they face threats.

“Understanding how to discuss online threats in a manner appropriate to their age and fostering a safe space for children to come forward without fear of judgment can significantly impact their willingness to speak up,” Govender emphasized.

The NSPCC spoke with young individuals regarding their reluctance to share experiences of attempted blackmail with parents or guardians. Many cited feelings of embarrassment, a preference to discuss with friends first, or a belief that they could handle the situation on their own.

Source: www.theguardian.com

Cloudflare Outage Triggers Error Messages Across the Internet

A vital segment of the internet’s often unseen infrastructure experienced a worldwide outage on Tuesday, leading to error messages appearing across various websites.

Cloudflare, a US-based firm that safeguards millions of websites against malicious assaults, faced an unexplained problem that hindered internet users from accessing certain client sites.

Some website owners also struggled to reach their performance dashboards. Outage-tracking site Down Detector recorded a concurrent spike in reported problems with platforms including X and OpenAI.

The outage was first noted at 11:48 a.m. London time, and by 2:48 p.m., Cloudflare announced: “A fix has been implemented and we believe the issue is resolved. We are continuing to monitor the situation to ensure all services return to normal.”

A spokesperson from Cloudflare issued an apology “to our customers and the entire internet for the disruptions today.” They added, “We aim to learn from today’s events and enhance our services.”


To tackle the issue, the company turned off an encryption service known as Warp in London, stating that “users in London attempting to access the internet via Warp will face connection failures.”

Professor Alan Woodward of the Surrey Cyber Security Centre referred to Cloudflare as “the biggest company you’ve never heard of.” The firm claims to provide services that “protect websites, apps, APIs, and AI workloads while enhancing performance.”

Woodward characterized Cloudflare as a “gatekeeper,” noting its role in monitoring site traffic to protect against distributed denial-of-service (DDoS) attacks, which involve malicious actors attempting to overwhelm a site with requests. It also verifies whether a user is human.
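The gatekeeping Woodward describes can be illustrated, in a heavily simplified form, by a token-bucket rate limiter: each client gets a budget of requests that refills over time, so a flood of requests from one source is throttled. This is a generic sketch of the technique, not Cloudflare's actual implementation; the rates and class names here are invented for illustration.

```python
import time

class TokenBucket:
    """Per-client token-bucket rate limiter: a crude stand-in for the
    request throttling a DDoS gatekeeper performs. Illustrative only."""

    def __init__(self, rate, capacity, now=0.0):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = now

    def allow(self, now=None):
        """Return True if this request is within the client's budget."""
        if now is None:
            now = time.monotonic()  # in production, use the real clock
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `rate=1.0` and `capacity=3`, a client can burst three requests immediately, after which further requests are refused until tokens refill at one per second.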

Upon identifying the fix, Cloudflare revealed that the issue stemmed from “configuration files automatically generated to manage threat traffic.”

These files expanded larger than anticipated, resulting in a failure of the software systems that direct traffic for numerous Cloudflare services.


“To clarify, there is no evidence that this was caused by an attack or malicious activity,” the spokesperson stated. “We anticipate that some Cloudflare services may experience temporary degradation as traffic spikes following this incident, but we expect all services to normalize within the next few hours.”

This incident at Cloudflare occurred less than a month after an outage at Amazon Web Services disrupted thousands of sites.

According to Woodward, “We’re beginning to realize just how few companies are integral to the internet’s infrastructure, so when one fails, the impact is immediately noticeable.”

Speaking before the cause had been identified, Woodward suggested it was unlikely to be a cyberattack, as such a major service typically does not rely on a single point of failure.

Source: www.theguardian.com

Father of Teenager Who Died After Viewing Harmful Content Says He Has Lost Trust in Ofcom

The father of Molly Russell, the British teenager who took her own life after encountering harmful online material, has expressed his lack of confidence in efforts to secure a safer internet for children. He is advocating for a leadership change at Britain’s communications regulator.

Ian Russell, whose daughter Molly was only 14 when she died in 2017, criticized Ofcom for its “repeated” failure to grasp the urgency of safeguarding under-18s online and for not enforcing new digital regulations effectively.

“I’ve lost faith in Ofcom’s current leadership,” he shared with the Guardian. “They have consistently shown a lack of urgency regarding this mission and have not been willing to use their authority adequately.”

Mr. Russell’s remarks coincided with a letter from technology secretary Liz Kendall to Ofcom, expressing her “deep concern” over the gradual progress of the Online Safety Act (OSA), a groundbreaking law that lays out safety regulations for social media, search engines, and video platforms.

After his daughter’s death, Mr. Russell became a prominent advocate for internet safety and raised flags with Ofcom chief executive Melanie Dawes last year regarding online suicide forums accessible to UK users.

Ofcom opened an investigation into these forums after acquiring new regulatory authority under the OSA, and the site voluntarily restricted access to UK users.

However, Mr. Russell noted that the investigation seemed to have “stalled” until regulators intensified their scrutiny this month, when it was revealed that UK users could still access the forums via previously undetected “mirror sites.”




Molly Russell passed away in 2017. Photo: P.A.

“If Ofcom can’t manage something this clear-cut, it raises questions about their competence in tackling other issues,” Mr. Russell stated.

In response, Ofcom assured Mr. Russell that they were continuously monitoring geo-blocked sites and indicated that a new mirror site had only recently come to their attention.

Mr. Russell voiced his agreement with Ms. Kendall’s frustrations over the slow implementation of additional components of the OSA, particularly stricter regulations for the most influential online platforms. Ofcom attributed the delays to a legal challenge from the Wikimedia Foundation, the organization that supports Wikipedia.

The regulator emphasized its “utmost respect” for bereaved families and cited achievements under its stewardship, such as initiating age verification on pornography websites and combating child sexual abuse content.

“We are working diligently to push technology firms to ensure safer online experiences for children and adults in the UK. While progress is ongoing, meaningful changes are occurring,” a spokesperson commented.

The Molly Rose Foundation, established by Molly’s family, has reached out to the UK government urging ministers to broaden legal mandates for public servant transparency to include tech companies.

In their letter, they requested Victims’ Minister Alex Davies-Jones to expand the Public Powers (Accountability) Bill, which introduces a “duty of honesty” for public officials.

This bill was prompted by critiques regarding the police’s evidence handling during the Hillsborough investigation, mandating that public entities proactively assist inquiries, including those by coroner’s courts, without safeguarding their own interests.

The foundation believes that imposing similar transparency requirements on companies regulated by the OSA would aid in preserving evidence in cases of deaths possibly linked to social media.

The inquest into Molly’s death was postponed due to a dispute over the disclosure of evidence.

“This change fundamentally shifts the dynamic between tech companies and their victims, imposing a requirement for transparency and promptness in legal responses,” the letter asserted.

Recent legislative changes have granted coroners enhanced authority under the OSA to request social media usage evidence from tech companies and prohibit them from destroying sensitive data. However, the letter’s signatories contend that stricter measures are necessary.

More than 40 individuals, including members of Survivors for Online Safety and Meta whistleblower Arturo Bejar, have signed the letter.

A government spokesperson indicated that the legal adjustments empower coroners to request further data from tech firms.

“The Online Safety Act will aid coroners in their inquests and assist families in seeking the truth by mandating companies to fully disclose data when there’s a suspected link between a child’s death and social media use,” a spokesperson stated.

“As pledged in our manifesto, we’ve strengthened this by equipping coroners with the authority to mandate data preservation for inquest support. We are committed to taking action and collaborating with families and advocates to ensure protection for families and children.”


In the UK, you can contact the youth suicide charity Papyrus at 0800 068 4141 or email pat@papyrus-uk.org. For support, reach out to the Samaritans at freephone 116 123 or email jo@samaritans.org or jo@samaritans.ie. In the United States, contact the 988 Lifeline for suicide and crisis at 988 or chat. In Australia, you can reach Lifeline at 13 11 14. Other international helplines are available at: befrienders.org

Source: www.theguardian.com

Chilling Effect: How Fear of ‘Naked’ Apps and AI Deepfakes is Driving Indian Women Away from the Internet

Gaatha Sarvaiya enjoys sharing her artistic endeavors on social media. As a law graduate from India in her early 20s, she is at the outset of her professional journey, striving to attract public interest. However, the emergence of AI-driven deepfakes poses a significant threat, making it uncertain whether the images she shares will be transformed into something inappropriate or unsettling.

“I immediately considered, ‘Okay, maybe this isn’t safe. People could take our pictures and manipulate them,'” Sarvaiya, who resides in Mumbai, expresses.

“There is certainly a chilling effect,” notes Rohini Lakshane, a gender rights and digital policy researcher based in Mysore. She too refrains from posting photos of herself online. “Given how easily it can be exploited, I remain particularly cautious.”

In recent years, India has emerged as a crucial testing ground for AI technologies, becoming the second-largest market for OpenAI with the technology being widely embraced across various professions.

However, a report released recently reveals that the growing usage of AI is generating formidable new avenues for harassment directed at women, according to data compiled by the Rati Foundation, which operates a national helpline for online abuse victims.

“Over the past three years, we’ve identified that a significant majority of AI-generated content is utilized to target women and sexual minorities,” asserts the report, prepared by Tattle, an organization focused on curbing misinformation on social media in India.

The report highlights the increasing use of AI tools to digitally alter images and videos of women, producing fabricated nudes and culturally sensitive content, such as depictions of public affection that, while unremarkable in many Western contexts, draw censure in numerous Indian communities.




Indian singer Asha Bhosle (left) and journalist Rana Ayyub are victims of deepfake manipulations on social media. Photo: Getty

The findings indicated that approximately 10% of the numerous cases documented by the helpline involve such altered images. “AI significantly simplifies the creation of realistic-looking content,” the report notes.

In one notable case, an Indian woman’s likeness was manipulated by an AI tool using images taken in a public place. Bollywood singer Asha Bhosle‘s image and voice were replicated using AI and distributed on YouTube. Journalist Rana Ayyub was targeted last year by a campaign that exposed her personal information and circulated deepfake sexual images of her on social media.

These instances sparked widespread societal discussions, with some public figures like Bhosle asserting that they have successfully claimed legal rights concerning their voice and image. However, the broader implications for everyday women like Sarvaiya, who increasingly fear engaging online, are less frequently discussed.

“When individuals encounter online harassment, they often self-censor or become less active online as a direct consequence,” explains Tarunima Prabhakar, co-founder of Tattle. Her organization conducted focus group research for two years across India to gauge the societal impacts of digital abuse.

“The predominant emotion we identified is one of fatigue,” she remarks. “This fatigue often leads them to withdraw entirely from online platforms.”

In recent years, Sarvaiya and her peers have monitored high-profile deepfake abuse cases, including those of Ayyub and Bollywood actress Rashmika Mandanna. “It’s a bit frightening for women here,” she admits.

Currently, Sarvaiya is reluctant to share anything on social media and has made her Instagram account private, though she fears even this may not be enough to protect her: women are sometimes photographed covertly in public places, such as on the subway, and the images surface online later.

“It’s not as prevalent as some might believe, but luck can be unpredictable,” she observes. “A friend of a friend is actually facing threats online.”

Lakshane mentions that she often requests not to be photographed at events where she speaks. Despite her precautions, she is mentally preparing for the possibility that a deepfake image or video of her could emerge. In the app, her profile image is an illustration of herself, rather than a photo.

“Women with a public platform, an online presence, and those who express political opinions face a significant risk of image misuse,” she highlights.


Rati’s report details how AI “nudification” apps, designed to remove clothing from images, have normalized behaviors that were once seen as extreme. In one reported case, a woman approached the helpline after a photo she had originally submitted for a loan application was misused for extortion.

“When she declined to continue payments, her uploaded photo was digitally altered with the nudify app and superimposed onto a pornographic image,” the report details.

This altered image, accompanied by her phone number, was circulated on WhatsApp, resulting in a flood of sexually explicit calls and messages from strangers. The woman expressed to the helpline that she felt “humiliated and socially stigmatized, as though I had ‘become involved in something sordid’.”




A fake video allegedly featuring Indian National Congress leader Rahul Gandhi and Finance Minister Nirmala Sitharaman promoting a financial scheme. Photo: DAU Secretariat

In India, similar to many regions globally, deepfakes exist within a legal gray area. Although certain statutes may prohibit them, Rati’s report highlights existing laws in India that could apply to online harassment and intimidation, enabling women to report AI deepfakes as well.

“However, the process is often lengthy,” Sarvaiya shares, emphasizing that India’s legal framework is not adequately prepared to address issues surrounding AI deepfakes. “There is a significant amount of bureaucracy involved in seeking justice for what has occurred.”

A significant part of the problem lies with the platforms through which such images are disseminated, including YouTube, Meta, X, Instagram, and WhatsApp. Indian law enforcement agencies describe the process of compelling these companies to eliminate abusive content as “often opaque, resource-draining, inconsistent, and ineffective,” according to a report published by Equality Now, an organization advocating for women’s rights.

Rati’s report also uncovered multiple instances where major platforms, including those operated by Apple and Meta, inadequately addressed online abuse, thereby exacerbating the spread of nudify apps.

Although WhatsApp did respond in the extortion scenario, the action was deemed “insufficient” since the altered images had already proliferated across the internet, Rati indicated. In another instance, an Instagram creator in India was targeted by a troll who shared nude clips, yet Instagram only reacted after “persistent efforts” and with a “delayed and inadequate” response.


The report indicates that victims reporting harassment on these platforms often go unheard, prompting them to reach out to helplines. Furthermore, even when accounts disseminating abusive material are removed, such content tends to resurface, a phenomenon Rati describes as “content recidivism.”

“One persistent characteristic of AI abuse is its tendency to proliferate: it is easily produced, broadly shared, and repeated multiple times,” Rati states. Confronting this issue “will necessitate much greater transparency and data accessibility from the platforms themselves.”

Source: www.theguardian.com

Next-Gen Quantum Networks: Paving the Way for a Quantum Internet Prototype

Quantum Internet could provide secure communications globally

Sakumstarke / Alamy

One of the most sophisticated quantum networks constructed to date will enable 18 individuals to communicate securely through the principles of quantum physics. The researchers affirm that this represents a feasible step towards realizing a global quantum internet, although some experts express doubt.

The eagerly awaited quantum internet aims to allow quantum computers to communicate over distances by exchanging light particles, known as photons, that are interconnected through quantum entanglement. Additionally, it will facilitate the linkage of quantum sensor networks, enabling communications impervious to classical computer hacking. However, connecting different segments of the quantum realm is not as straightforward as laying down cables due to the challenges in ensuring seamless interactions between network nodes.

Recently, Chen Shenfeng from Shanghai Jiao Tong University in China demonstrated a method to interconnect two quantum networks. Initially, they established two networks containing 10 nodes each, both sharing quantum entanglement and functioning as smaller iterations of a quantum internet. They then combined one node from each network, resulting in a larger, fully integrated network that enables communication across all pairs of the 18 remaining nodes.

Networking 18 classical computers is a straightforward endeavor involving inexpensive components, but in the quantum sphere, where specific timing is crucial for sharing individual photons among several users, advanced technology and specialized knowledge are required. Even establishing communication between pairs is intricate, yet facilitating communication among any pair of 18 users is unprecedented.

“Our method provides essential capabilities for quantum communication across disparate networks and is pivotal for creating a large-scale quantum internet that enables interactions among all participants,” the researchers wrote in their paper. They did not respond to a request for comment.

As the researchers explain, this network integration hinges on a process termed entanglement swapping. By performing a joint observation known as a Bell-state measurement on one photon from each of two entangled pairs, the two remaining photons, which have never interacted, become entangled with each other. The measurement itself, however, destroys the quantum states of the two photons that were measured.
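The swap can be written out explicitly for Bell states. With photons 1–2 and 3–4 each prepared in the state $|\Phi^+\rangle$, the joint state decomposes over the possible Bell-measurement outcomes on photons 2 and 3:

```latex
|\Phi^+\rangle_{12} \otimes |\Phi^+\rangle_{34}
  = \tfrac{1}{2}\bigl(
      |\Phi^+\rangle_{23}\,|\Phi^+\rangle_{14}
    + |\Phi^-\rangle_{23}\,|\Phi^-\rangle_{14}
    + |\Psi^+\rangle_{23}\,|\Psi^+\rangle_{14}
    + |\Psi^-\rangle_{23}\,|\Psi^-\rangle_{14}
    \bigr)
```

Projecting photons 2 and 3 onto any Bell state therefore leaves photons 1 and 4 in the corresponding Bell state, entangled despite never having met, while the entanglement of the measured pair is consumed.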

“This isn’t the first demonstration of entanglement swapping,” remarks Sidharth Joshi at the University of Bristol, UK. “What they have achieved is a framework that makes exchanges between networks much simpler.”

Joshi notes that current quantum communication research is divided between extending the range of information transmission between two devices, occasionally utilizing satellites, and developing protocols and strategies for reliably networking numerous devices over shorter distances. This study pertains to the latter. “Both areas are critically important,” he asserts.

Robert Young, a professor at Lancaster University in the UK, counters that while the results are a remarkable technical feat demanding expertise and extensive resources, the expense and intricacy involved make it an improbable blueprint for future large-scale quantum networks.

“This is far from practical and not something readily applicable in real-world scenarios,” Young states. “The paper’s claim is that this is the future of quantum network integration, but many formidable challenges remain to be addressed.”

One significant issue is the need for quantum repeaters to carry information across long distances. As distance grows, photons are increasingly lost in fibre-optic cables, and because any measurement disturbs a photon's state, the quantum information cannot simply be read and re-amplified along the route the way a classical signal can. Working quantum repeaters could relay signals over much longer distances, yet building such devices has proved difficult.

“We understand that to build a viable quantum network, some method of quantum repeater is essential,” Young points out, emphasizing that this was absent in the current network demonstration.

Topics:

  • internet
  • quantum computing

Source: www.newscientist.com

Could the Internet Go Dark? Exploring the Vulnerable Systems That Connect Our Modern World

Waking up to a world without internet might seem liberating, but you may find yourself pondering your next steps.

If you have a checkbook handy, consider using it to purchase some groceries. Should your landline still function, you can reach out to your employer. Then, as long as you still remember how to find your way without modern navigation, a trip to the store is possible.

The recent outage in a Virginia data center highlighted that while the internet is a crucial component of contemporary existence, its foundation rests on aging systems and physical components, leading many to question what it would take for it to come crashing down.

The answer is straightforward: a streak of bad luck, deliberate cyberattacks, or a combination of both. Severe weather events could knock out numerous data centers. A bug lurking in AI-generated code at a major provider like Amazon, Google, or Microsoft could trigger widespread software failures. Armed attacks on critical infrastructure could also play a role.

Although these scenarios would be devastating, the more significant concerns for a select group of internet specialists revolve around sudden failures in the outdated protocols that support the entire network. Picture this as a plumbing system that manages connection flows or an address directory that allows machines to locate one another.

We refer to it as “the big one,” but if that occurs, having a checkbook on hand might be crucial.

How something big could start: imagine a tornado sweeping through Council Bluffs, Iowa, ravaging a set of low-lying data centers critical to Google’s operations.

This region is known as us-central1, one of Google’s data center clusters, vital for services including its cloud platform, YouTube, and Gmail. In 2019, outages here affected users across the United States and Europe.

As YouTube cooking videos glitch, dinner preparations go awry. Employees worldwide, their emails suddenly vanished, resort to face-to-face communication instead. US officials note a deterioration in certain government services before turning to Signal to coordinate a new operation.

While this situation is inconvenient, it doesn’t signify the end of the internet. “Technically, as long as two devices are connected with a router, the internet functions,” says Michał “rysiek” Woźniak, who works on DNS, the system implicated in this week’s outage.

However, “there’s a significant concentration of control happening online,” points out Stephen Murdoch, a computer science professor at University College London. “This mirrors trends in economics: it’s typically more cost-effective to centralize operations.”

But what if extreme heat were to wipe out US East-1, the crucial Amazon Web Services (AWS) node in Virginia’s “Data Center Alley” and the epicenter of this week’s outage, along with its nearby regions? Meanwhile, a major European cluster, Frankfurt or London, suffers a cyberattack. The network redirects traffic to a secondary hub, a less-trafficked data center, which soon faces capacity problems like a congested side road in Los Angeles.

Aerial view of the Amazon Web Services data center known as US East-1 in Ashburn, Virginia. Photo: Jonathan Ernst/Reuters

Alternatively, shifting from disaster scenarios to automation risks: the extra traffic might expose hidden bugs within AWS’s internally overhauled infrastructure, perhaps an oversight dating from months earlier. Earlier this summer, AWS employees were let go amid a broader push towards automation. Faced with an influx of unfamiliar requests, AWS begins to falter.

Signal falters, and so do Slack, Netflix, and Lloyds Bank. Your Roomba vacuum falls silent. Smart mattresses misbehave, and so do smart locks.

Without Amazon and Google, the internet would be nearly unrecognizable. Together, AWS, Microsoft, and Google command over 60% of the global cloud services market, making it nearly impossible to quantify the number of services reliant on them.

“However, at its core, the internet continues to operate,” remarks Doug Madory, an internet infrastructure expert who studies disruptions. “While the usual activities may be limited, the underlying network remains functional.”

You might believe the biggest risk lies in attacks on undersea cables. While the notion captivates think tanks in Washington, little has come of it. Undersea cables incur damage all the time, Madory notes, with the United Nations estimating between 150 and 200 faults a year.

“To significantly impair communication, a vast amount of capacity must be disrupted. The undersea cable sector often says: ‘We manage these issues routinely.’”


Next, imagine a group of anonymous hackers targeting a DNS service provider, a key player in the internet’s directory system. Verisign, for example, manages all online domains ending in “.com” or “.net”. Other providers oversee domains like “.biz” and “.us”.

According to Madory, the likelihood of such a provider being taken down is minimal. “If anything were to happen to Verisign, .com would vanish, which presents a strong financial motivation for them to prevent that.”
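The directory system works as a chain of referrals: each zone knows only the servers for the level below it. This toy resolver (server names, domains, and the address are invented; real resolvers speak the DNS protocol over the network) shows why losing the .com registry would make every .com name unresolvable at once:

```python
# Toy model of DNS delegation. All names and the address are illustrative.
ROOT = {"com": "verisign-ns", "net": "verisign-ns", "biz": "other-ns"}
ZONES = {
    "verisign-ns": {"example.com": "example-ns"},      # the .com registry
    "example-ns": {"www.example.com": "203.0.113.7"},  # one site's own servers
}

def resolve(name):
    """Follow the referral chain: root -> TLD registry -> authoritative server."""
    tld = name.rsplit(".", 1)[-1]
    tld_server = ROOT.get(tld)                   # who runs ".com"?
    if tld_server is None:
        return None                              # the TLD itself is gone
    domain = ".".join(name.split(".")[-2:])      # e.g. "example.com"
    auth_server = ZONES.get(tld_server, {}).get(domain)
    if auth_server is None:
        return None
    return ZONES.get(auth_server, {}).get(name)

assert resolve("www.example.com") == "203.0.113.7"

# If the .com registry vanished, every .com lookup would fail at the first step:
del ROOT["com"]
assert resolve("www.example.com") is None
```

Nothing about the site's own servers changed in the second lookup; the referral chain simply breaks one level up, which is why a registry failure is so much broader than any single outage.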

Collectively, AWS, Microsoft, and Google dominate over 60% of the global cloud services market. Photo: Sebastian Boson/AFP/Getty Images

To genuinely disrupt the larger ecosystem would require a colossal failure in fundamental infrastructure beyond Amazon or Google. Such a scenario would be unprecedented; the closest parallel came in 2016, when an attack on Dyn, a small DNS provider, brought down the Guardian, X, and others.

If .com were to disappear, essential services such as banks, hospitals, and communication platforms would vanish with it, although some parts of the government’s internet infrastructure, such as the US secure messaging system SIPRNet, would remain intact.

Yet, the internet would persist, at least for niche communities. There are self-hosted blogs, decentralized social networks like Mastodon, and particular domains like “.io” or “.is.”

Murdoch and Madory each contemplate a drastic scenario capable of eliminating the rest. Murdoch points to a potential bug in BIND, the software underpinning DNS. Madory, meanwhile, recalls the hackers from Massachusetts who told Congress in 1998 about a vulnerability that could “bring the internet down in 30 minutes”.

That vulnerability concerns a system one layer above DNS: the Border Gateway Protocol (BGP), which directs traffic across the internet. Madory argues that such an event is highly improbable, as it would trigger a full-scale emergency response, and the protocols are “incredibly resilient; otherwise, we would have already experienced a collapse”.
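BGP's fragility comes from trust: routers accept whatever routes their neighbours announce and, other things being equal, prefer the shortest path of autonomous systems (ASes). A toy illustration of that preference (AS numbers invented, and real BGP applies further tie-breakers such as local preference) shows how a single bogus announcement can redirect traffic:

```python
# Toy BGP route selection: among announcements for the same prefix,
# a router installs the one with the shortest AS path. Numbers are made up.
def best_route(announcements):
    """Return the AS path a router would install (shortest wins)."""
    return min(announcements, key=len)

# The legitimate origin, heard via two intermediate networks.
legit = ["AS3356", "AS2914", "AS64500"]
routes = [legit]
assert best_route(routes) == legit

# A misconfigured or malicious AS announces the same prefix with a
# shorter path, and routers switch to it with no verification at all.
hijack = ["AS64666"]
routes.append(hijack)
assert best_route(routes) == hijack
```

Defences such as RPKI route-origin validation exist, but this unverified preference for short paths is the kind of weakness the 1998 testimony pointed to.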

Even if the internet were to be entirely shut down, it’s uncertain whether it would ever reboot, warns Murdoch. “Once the Internet is active, it doesn’t get turned off. The method of restarting it is not well understood.”

The UK previously had a contingency plan for such a situation. Should the internet ever be disabled, Murdoch notes, individuals knowledgeable about its workings would gather at a pub outside London and brainstorm the next steps.

“I’m not sure if this is still true. It was years ago, and I can’t recall the exact pub.”

Source: www.theguardian.com

Families Demand Investigation Into UK Inaction on Pro-Suicide Online Forum

Families and survivors involved in pro-suicide forums are urging for a public inquiry into the government’s inaction regarding online safety issues.

This demand follows a report revealing that coroners had raised concerns about suicide forums with three government departments at least 65 times since 2019.

The report also indicated that methods promoted via these platforms are associated with at least 133 deaths in the UK, including the youngest identified victim, only 13 years old.

The analysis, released by the Molly Rose Foundation—established after the tragic loss of 14-year-old Molly Russell in November 2017—stemmed from a comprehensive review of coroner reports aimed at preventing future fatalities.

Their findings stated that the Department of Health and Social Care, the Home Office, and the Department of Science, Innovation and Technology all neglected to heed warnings from coroners about the risks posed by pro-suicide forums.

In correspondence to the Prime Minister, the Survivors’ Group for Preventing Online Suicide Victims expressed their “disappointment regarding the sluggish governmental response to an urgent threat, despite numerous alerts to safeguard lives and mitigate harm.”

The letter stated: “These failures necessitate a legal response, not only to comprehend the circumstances surrounding our loved ones’ deaths but also to avert similar tragedies in the future.

“It’s critical to focus on change over blame, to protect vulnerable youth from entirely preventable dangers.”

Among the letter’s signatories is the family of Amy Walton, who died after engaging with pro-suicide material online.

The foundation is advocating for a public inquiry to examine the Home Office’s inadequacies in enforcing stricter regulations on harmful substances and Ofcom’s lack of action against the threats posed by pro-suicide forums.

Andy Burrows, the chief executive of the Molly Rose Foundation, emphasized that the report highlights how the government’s ongoing failures to protect its vulnerable citizens have resulted in numerous tragic losses due to the dangerous nature of suicide forums.

He remarked: “It’s unfathomable that Ofcom has left open the future of a forum that aims to manipulate and pressure individuals into taking their own lives, rather than moving quickly and decisively to legally shut it down in the UK.”

“A public inquiry is essential to derive crucial lessons and implement actions that could save lives.”

The push for an inquiry has the backing of the law firm Leigh Day, which represents seven clients who have experienced loss.

A government spokesperson stated: “Suicide impacts families deeply, and we are resolute in our commitment to hold online services accountable for ensuring user safety on their platforms.

“According to online safety regulations, these services must take necessary actions to prevent access to illegal suicidal and self-harm content and safeguard children from harmful materials promoting such content.

“Moreover, the substances involved are strictly regulated and require reporting under the Toxic Substances Act. Retailers must alert authorities if they suspect intent to misuse them for harm. We will persist in our investigation of hazardous substances to ensure appropriate safeguards are in place.”

A spokesperson for Ofcom remarked: “Following our enforcement initiatives, online suicide forums have implemented geo-blocking to restrict access from users with UK IP addresses.

“Services opting to block access for UK users must not promote or support methods to bypass these restrictions. This forum remains under Ofcom’s scrutiny, and our investigation will continue to ensure the block is enforced.”

Source: www.theguardian.com

British MPs Warn of Potential Violence in 2024 Due to Unchecked Online Misinformation

Members of Parliament have cautioned that if online misinformation is not effectively tackled, it is “just a matter of time” before viral content triggers a repeat of the violent unrest seen in the summer of 2024.

Chi Onwurah, chair of the Commons science and technology select committee, expressed concern that ministers seem complacent regarding the threat, placing public safety in jeopardy.

The committee voiced its disappointment with the government’s reaction to a recent report indicating that the business models of social media companies are contributing to unrest following the Southport murders.

In response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms, maintaining that it would refrain from direct intervention in the online advertising sector, which MPs argued has fostered the creation of harmful content post-attack.

Onwurah noted that while the government concurs with most conclusions, it fell short of endorsing specific action recommendations.

Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”

In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.

In its response to the committee, published on Friday, the government stated that no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content, and arguing that new legislation would hinder the act’s implementation.

However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.

The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.

In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.

Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government stated that Ofcom is “best positioned” to determine if an investigation should be conducted.

In correspondence with the committee, Ofcom indicated that it has begun research into recommendation algorithms but acknowledged the need for further exploration across a broader range of academic and research fields.

The government also dismissed the committee’s call for an annual report to Parliament concerning the current state of online misinformation, arguing that it could hinder efforts to curtail the spread of harmful online information.

The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.

Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.

“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.

“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”

Source: www.theguardian.com

‘It Felt Like Traveling Back in Time’: Afghans Share Their Relief as Internet Service is Restored

As the sun set on Wednesday, the streets surrounding Kabul, the Afghan capital, suddenly became bustling with activity.


With phones firmly pressed to their ears, Afghans spilled into the streets of Kabul, eager to see if others were online.

“Great news, the internet is back!” shouted a driver, as children received balloons and parents bought sweets to celebrate, gathering at nearby restaurants.

For 48 hours, the Afghan population had been cut off from mobile and internet services due to unexpected telecommunications shutdowns ordered by the authorities.

“It felt like we were transported back in time, contemplating sending letters to stay connected with family,” shared Mohammad Rafi, 33, a mobile phone store owner.

“The streets were deserted, resembling a holiday atmosphere, even during weekdays. But now, they’re lively again, even in the evening.”




Men attempt to connect their smart TV to the internet. Photo: Sayed Hassib/Reuters

Sohrab Ahmadi, a 26-year-old delivery driver, struggled for two days without being able to reach his clients through the app he relies on.

Now, bikes line the streets, picking up orders from restaurants adorned with bright neon signs and juice vendors playing music. “It feels like Eid al-Adha, like preparing for prayer,” he remarked.




The communications tower is slowly restoring its services after nearly three days offline. Photo: Samiullah Popal/EPA

The streets also saw a noticeable rise in the number of women, who face severe restrictions under the Taliban regime, including a ban on education beyond primary school.

“I can’t describe how relieved I am. I’m finally able to breathe again,” said a young woman attending online classes, who requested anonymity. “These online lessons are our last hope.”

The UN has warned that the disruption in connectivity poses risks to economic stability, worsening one of the most dire humanitarian crises globally.

The Taliban government has yet to address the issue of the internet blackout.




An Afghan woman walks past a beauty salon in the capital. Photo: Ali Kara/Reuters

This suspension occurred shortly after the government announced plans to cut high-speed internet in certain regions to curb “immorality.”

Attaullah Zaid, a spokesman for Balkh province, confirmed that the ban was ordered by the Taliban’s supreme leader, Hibatullah Akhundzada.

During the Taliban’s initial rule from 1996 to 2001, the internet was still a relatively new and developing technology.




Kabul street vendors communicate by phone after services resumed. Photo: Sayed Hassib/Reuters

However, in recent years, the economy has increasingly depended on internet access. Even in rural areas, many Afghans utilize their mobile phones for business transactions.

“The world has moved forward. This isn’t like thirty years ago,” remarked Ghulam Rabbani, a mobile credit vendor who was surrounded by shops on Wednesday night. “We anticipated the internet’s return. The outage affected everyone, including the government.”

Source: www.theguardian.com

Will Bartolo and Rae Colquhoun-Fairweather: 10 Hilarious Internet Moments | Comedy Highlights

We are Rae and Will, also known as Raeandwill, a duo of clowns who excel in mime. Asking us to list 10 intriguing things we’ve seen online could be deemed a hate crime. However, the endless feeds of others promoting clown acts show us that we must uphold our online personas, at the risk of losing bookings or getting “smoked”. If there’s one thing the world craves, it’s a clown show. (Seriously.) So here we are.

The Internet is often seen as a demonic void, slowly erasing humanity from consciousness.

While these views may seem disparate, they express how we cope with our lives trapped in an endless cycle of self-consuming AI-generated content. Some of us attempt to disengage, while others leap head-first into chaos, but ultimately, we are all scrolling through this confusion together. The Internet has become our third collaborator. Before any concept transforms into a multi-award-winning show (yes, we have to boast), we immerse ourselves for months, gathering relevant images, videos, and various clips that resonate with our project’s essence.

Here are 10 intriguing things that touch our funny bones.

Will

1. Flutterbye Fairy Toy Flies into Fire

This is one of my all-time favorites. The juxtaposition of childhood innocence with the most dramatic classical music is perfect as the Flutterbye fairy meets an unspeakable fate. Rest in peace.

2. Lano and Woodley – Fly

With a rich history of comedy duos, Lano and Woodley are among my favorites. Their meticulous attention to detail, even in the silliest of moments, is thrilling, especially when Woodley interacts with the flies that symbolize their Oscars. I’ve nerded out over their craft for hours, and while I won’t bore you with the details, I owe them a debt of gratitude. It’s certainly not a quick 10-second reel; it’s something memorable.

3. Julio Torres’ Hand Acting



A masterclass in hand acting, posted to Instagram in 2020. It features “classic scenarios”: “Deliver it… to me… a girl!”, “I’ll provide her with a potion, but remind her that every price has a cost”, and of course “essential scenarios for advanced hand acting: handrail, ascending and descending”.

This is a must-watch for aspiring young actors wishing to embody the essence of a silent clown in the future. There are several posts with at least three lessons there. Enjoy scrolling!

4. Jennifer Lopez’s Last Five Years, Particularly Her Inauguration Performance

On the last day of President Trump’s first term, during Covid, a faint glimmer of hope came through when Jennifer Lopez performed at Joe Biden’s inauguration. Her self-funded film This Is Me… Now: A Love Story and its accompanying documentary (a wonderful double feature) deserve a place here too. The performance amalgamated “America the Beautiful” and “This Land Is Your Land” with her 1999 party anthem, and her choice to do so resonates deeply.

The intent to elevate this song from mere entertainment to political significance was stunning and poignant. Coupled with her performance, it evokes laughter and tears alike. It’s a moment I’ll discuss for years and likely write extensively about.

5. Pet Performers Rewarded for Acting Like Animals

“We might not get applause as we’re performing for an audience that cannot clap.” Animal performance is an honorable and vital art form, and I challenge anyone to disagree.


Rae

1. I Will Always Love You

Oh my goodness. If this isn’t the most monumental thing reflecting my childhood self, I don’t know what is. Ambition, frustration—it cannot be contained. As a recognized “bad” singer, I relate deeply. Bravo to this girl, wherever she is now. Thank you for your service, Queen.

2. Trisha Paytas’ Complete Works

Trisha Paytas was likely the first person who made me genuinely laugh online. It was hard to select just one clip, but this one stands out in my memory. Fifteen years later, she continues to produce some incredible and rich content. Her confident fantasy is built on a wealth of talent, and she continues to elevate it even further. Can we see her on Broadway already?

3. Mobile Game Project Makeover Advertisement

These ads might not resonate with everyone, but as an avid online user, I can’t tear my eyes away. They pop up multiple times a day, each time making me want to help her. She is drenched in mud; she needs a shower, not just a rinse! Her predicament epitomizes the essence of a clown. I still haven’t downloaded the game, yet I feel responsible for her happiness each time. Let’s help her out!

4. Dianne Laurance’s @dumpedwifesrevenge Instagram Page



Dianne Laurance faced abandonment by her husband after 26 years of marriage… for a younger man. How does she seek revenge? “By showcasing her appeal and flair,” naturally, all while documenting it on Instagram. I have a soft spot for outrageous women who need that starlight to shine. And her laughter slays me every single time.

5. Kermit Revealed as a Snail on The Masked Singer

I can envision The Masked Singer existing in a Hunger Games-style universe. All the clips seem like a glimpse into the Capitol from District 12. This particular reveal is my favorite. The performance is entertaining—the way they emerge, the audience’s reaction to the puppet. Picture Kermit’s puppeteer confined in a giant snail costume. While I don’t usually follow masked singers, if all contestants were Muppets… I might become a fan.

  • Rae Colquhoun-Fairweather and Will Bartolo, aka Raeandwill, are a performance duo based in Sydney. See Will Where to Hide the Stars. Watch Raeandwill perform their acclaimed shows at the Pier in Sydney from October 1st to 11th and at the Melbourne Fringe at the Meat Market from October 14th to 18th.

Source: www.theguardian.com

Exploring the Intersection of Memes, Gaming, and Internet Culture in Relation to Charlie Kirk’s Shooting

Hello, and welcome to TechScape! Dara Kerr here, filling in for Blake Montgomery, who is on vacation. This week, I’m diving into the memes, games, and internet culture surrounding the shooting of Charlie Kirk.

The bullet that claimed the life of the conservative activist bore the inscription “Notices bulge. OwO what’s this?”, which quickly caught the attention of the online community. It’s a phrase long used in internet culture to poke fun at participants in online role-play communities, particularly the furry fandom, whose members dress up as anthropomorphic animal characters.

“The phrase is embraced by the furry community not just to tease its members for being cringey, but also to claim ownership of the meme,” writes Know Your Meme, a site that chronicles viral trends. “Ultimately, this phrase functions as a meme and is regarded as one of the most annoying things you can say to someone.”

Other bullet casings seized by law enforcement in Utah featured inscriptions referencing online games and niche memes, igniting a wave of speculation on social media about the motive behind the killing. One casing read “O Bella Ciao, Bella Ciao”, a line from an Italian anti-fascist folk song; another read “If you read this, you’re gay, Lmao”, which web culture writer Ryan Broderick describes as “just a boilerplate edgy joke”. His newsletter last week carried the title “Charlie Kirk was killed by a meme”.

The final bullet casing disclosed by law enforcement read “Hey fascist! Catch!”, followed by a sequence of arrow symbols that appears to allude to the button combination used in the video game Helldivers 2 to call down a 500kg bomb.

Suspect Tyler James Robinson, a 22-year-old from a small Utah town near the Arizona border, has been charged with Kirk’s murder at a campus event at Utah Valley University in Orem. Kirk was hit by a single bullet fired from a “powerful bolt-action rifle” from a distant rooftop.

Both the suspect and the 31-year-old victim, Charlie Kirk, were well-versed in online culture. Kirk was associated with Turning Point USA, a conservative youth organization, known for engaging in discussions about extremist views on race, immigration, gender identity, and gun rights. His rise to fame was primarily fueled by his strong online presence.

As my colleague Alaina Demopoulos wrote:

Kirk, a pivotal figure in Donald Trump’s rise, galvanized college conservatives who transitioned to a different ecosystem than mainstream media. Throughout the decade between Kirk’s emergence as a teenage activist and the shooting, he played a crucial role in the growth of MAGA politics alongside changes in the media landscape.

Founded in 2012, Turning Point USA aimed to redirect Obama-era youth outreach toward conservative values. Even adversaries of his views couldn’t disregard his significant presence in the political arena. For a young American viewer, Kirk represented a savvy figure across platforms like YouTube, Twitter, TikTok, and live events, akin to a millennial and Gen Z version of Rush Limbaugh, the influential right-wing radio host of the 1990s.

You can read the full story here.

Photo: Peter Dasilva/Reuters

Recently, Meta faced two separate sets of whistleblower allegations. One group of former and current employees claims that Meta’s virtual reality devices and apps are harming children. Another whistleblower, Attaullah Baig, formerly head of security for WhatsApp at Meta, accuses the company of overlooking significant security and privacy issues within the messaging app, according to The New York Times.

In response to these VR device allegations, Meta spokesperson Dani Lever stated that the company has approved 180 studies related to VR since 2022. “Some of these examples are stitched together to fit a particular narrative and misrepresent the truth,” she asserted. Meta also emphasized having implemented features in its VR products to limit unwanted interactions and provide parental supervision tools.


One of the first whistleblowers, Sophie Zhang, brought her findings to the Guardian in 2021. She documented how Facebook allowed political manipulation in more than 25 countries. Later that same year, Frances Haugen shared with the Wall Street Journal documents examining allegations similar to Zhang’s, revealing Facebook’s awareness of the harm its social media apps posed to teenagers.

In 2023, Arturo Béjar also gave evidence to the Wall Street Journal, offering further proof that Meta knew how Facebook and Instagram algorithms steered teenagers toward content that amplified bullying, substance abuse, eating disorders, and self-harm.

This year alone, eight additional whistleblowers have stepped forward. Baig, alongside a group of six former employees, came forward last week.

U.S. lawmakers are taking these allegations seriously. Politicians such as Missouri Republican Sen. Josh Hawley and Connecticut Democrat Richard Blumenthal have expressed urgency in regulating Meta and other social media platforms.

“The revelations in these disclosures show risks to safety so significant, and such deliberate distortion by Meta of the truth about abuse on its platform, that they are deeply troubling. ‘See no evil, hear no evil, speak no evil’ is more than just a saying here; it appears to be a business philosophy,” stated Blumenthal, who also said that he and other senators are eager to push for “long-overdue reforms.”

Wider technology

Source: www.theguardian.com

Review of “How to Save the Internet” by Nick Clegg – Unpacking Silicon Valley’s Impact on Technology

Nick Clegg takes on challenging positions. He served as Britain’s deputy prime minister from 2010 to 2015, navigating the fraught coalition between David Cameron’s Conservatives and his own Liberal Democrats. A few years later, he embraced another tough role at Meta, as vice-president and then president of global affairs, from 2018 until January 2025. In this capacity, he straddled the contrasting worlds of Silicon Valley and Washington, D.C., as well as other governments. “How to Save the Internet” outlines Clegg’s approach to these demanding responsibilities and presents his vision for a more collaborative and effective relationship between tech companies and regulators in the future.

The primary threats Clegg discusses in his book do not originate from the Internet; rather, they come in the form of regulatory actions against it. “The true aim of this book is not to safeguard myself, Meta, or major technologies. It is to enhance awareness about the future of the Internet and the potential benefits of these innovative technologies.”

However, much of the book is devoted to defending Meta and large technology firms, beginning with a conflation of the widely beloved Internet with social media, a far more ambiguous corner of online life. In his exploration of the “techlash”, the swift public backlash against big tech in the late 2010s, he asks whether people truly regret these technologies’ existence.

That brings me to a recent survey I conducted through Harris Poll, posing this question to a nationally representative sample of young American adults, the very generation shaped by a plethora of social media platforms. We invited respondents to say whether they wished various platforms and products had never existed. Regret over the existence of the Internet is low, at 17%, and for smartphones it is only 21%. Regret over major social media platforms, however, is considerably higher, ranging from 34% for Instagram (owned by Meta) to 47% for TikTok and 50% for X. A survey of parents likewise found high levels of regret about social media, and other researchers have reached similar conclusions.

In other words, many of us would opt out of certain technologies if given the chance. Clegg presents the choice as binary: either fully embrace the Internet or shut it down. Yet the real concern is social media, which could be regulated without dismantling the entire Internet, and which is consequently far harder to defend.

Nevertheless, Clegg attempts this defense. In the opening chapter, he addresses twin accusations: that social media has harmed democracy worldwide and that it has damaged teenage mental health. While he acknowledges both have deteriorated since the 2010s, he contends that the decline merely coincides with the rise of social media rather than being caused by it. He cites academic research, yet his interpretations echo Meta’s standard narratives and overlook many critical counterarguments. Ultimately, Clegg borrows many of his defensive phrases directly from a rebuttal Meta published in response to criticism, while my own work makes the case that social media has damaged democracy.

In this book Clegg aligns himself with Meta’s narrative, despite having previously voiced different views on teenage mental health. Multiple US state attorneys general have sued Meta, and documents obtained in those suits show Clegg was aware of the problems. On August 27, 2021, for instance, Clegg emailed Mark Zuckerberg, prompted by an employee’s request for more resources to address teenage mental health. Clegg wrote that it was “increasingly urgent” to tackle “issues concerning the impact of products on the mental health of young people”, and indicated that the company’s efforts were hampered by understaffing. Zuckerberg did not respond to the email.

Clegg’s current stance—that the harm is merely correlational and that such correlations lack significance—contradicts testimony from numerous Meta employees, contractors, and whistleblowers, as well as evidence from leaked internal documents. One example comes from a 2019 study commissioned by Meta and obtained by the Tennessee attorney general, in which researchers told the company that teens found Instagram addictive and harmful to their mental health, yet still irresistible.

As for his suggestions for saving the Internet, Clegg proposes two key principles: radical transparency and collaboration. He advocates for tech companies to be far more open about how their algorithms function and how decisions are made. He warns: “If the masters of Silicon Valley do not open up, the choice will be taken out of their hands.”

On collaboration, he advocates a “digital democratic alliance”, emphasizing the importance of countering Chinese technology that supports an authoritarian regime. Clegg envisions the world’s democracies uniting to ensure the Internet upholds the democratic ideals it embodied in the 1990s.

Does Clegg’s vision hold merit? Transparency is commendable in theory, but it may be too late to impose it on the companies that now dominate the Internet. As tech journalist Kara Swisher put it, the platforms built cities without infrastructure: no sanitation, no police, no rules of the road. Imagine such a city. That lack of foundational design lets fraudsters, extremists, and others thrive on these platforms, posing risks to everyone from teenagers to large enterprises. A turn to transparency in 2026 may prove insufficient to repair structures laid down two decades ago.


As for collaboration, it is hard to envision a corporation like Meta relinquishing data and control. The tech giant has, moreover, garnered considerable support from the Trump administration, which has been willing to pressure other nations on its behalf. It thus remains unclear how “the choice will be taken out of their hands” should the companies resist cooperation. By whom?

The great biologist and ant expert E.O. Wilson once remarked that Marxism was a good ideology for the wrong species. After engaging with Clegg’s proposals, one might draw a similar parallel: his suggestions overlook the many critiques found in books documenting Meta’s unethical practices, the numerous revelations of the 2021 leak known as the Facebook Files, and ongoing legal challenges.

Jonathan Haidt is a social psychologist and author of “The Anxious Generation” (Penguin). How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict by Nick Clegg is published by Bodley Head (£25). To support the Guardian, purchase a copy at Guardianbookshop.com. Delivery charges may apply.

Source: www.theguardian.com

Review of “How to Save the Internet”: Nick Clegg’s New Tech Book Lacks Substance

Nick Clegg, vice president of Global Affairs and Communications at Meta, speaks via web broadcast from the Altice Arena during the 2021 Web Summit in Lisbon, Portugal, attended by approximately 40,000 participants. (Credit: Hugo Amaral/SOPA Images via Zuma Press Wire)


How to Save the Internet
Nick Clegg (Bodley Head; out now in the UK, 11 November in the US)

There were moments when my brain struggled to engage with Nick Clegg’s new book, How to Save the Internet.

After a dull depiction of future families benefiting from artificial intelligence, I reached page 131, where lengthy quoted segments awaited, first from an MIT professor, then from an NPR article. Overwhelmed by the monotony, I had to set the book aside.

However, Clegg, a former executive at Facebook’s parent company Meta and the UK’s deputy prime minister from 2010 to 2015, prompted me to pick it up again, sensing that valuable insights might await.

During his tenure, Clegg was present for pivotal moments at Meta, including the two-year suspension of Donald Trump in 2021. His reflections on Meta’s policies are revealing, and at a time of rising authoritarianism, How to Save the Internet offers a view of how Big Tech shapes our online realities.

Yet, wisdom is scarce throughout the book, which is littered with passages from other journalists and researchers. When Clegg does offer his perspective, it often comes across as uninspired and bland: “If businesses can enhance productivity during work hours and glean insights swiftly, it will promote efficiency.” Hardly thrilling.

The book’s concluding chapter, in which Clegg presents his grand vision to “save the Internet”, is equally underwhelming. He argues that the US cannot carry on with business as usual after the Chinese AI model DeepSeek caused significant market turmoil, and suggests a global agreement to counter China, but fails to explore the implications in any depth.

What struck me more was Clegg’s account of Meta’s response after supporters of Trump stormed the U.S. Capitol, which led to the then-president’s ban from the platform. CEO Mark Zuckerberg left the crucial decision on the suspension to Clegg. This was a momentous call for a private firm to make, yet the process remains opaque: we are told what happened, but left without a thorough understanding of how.

Given Clegg’s background, I’m left wondering why the book lacks a lasting impact. His experiences as a politician and tech executive are evident, yet he shares little of himself, which diminishes engagement with his audience. Questions surrounding AI’s socioeconomic implications and its potential to deepen inequality are posed but left unanswered.

The core issue with How to Save the Internet is its failure to convey substantial ideas. Like many politicians, Clegg shies away from firm stances, offering truisms instead: the Internet’s origins stretch back to the military’s ARPANET, AI lacks true intelligence, and social media connects us but also breeds toxicity.

This reads more like a post-dinner speech or a polished think tank report, adorned with flashy aesthetics. If you’re interested in saving the Internet, proceed with caution.

Chris Stokel-Walker is a technology writer based in Newcastle upon Tyne, UK.

New Scientist Book Club

Enjoy reading? Join a welcoming community of book enthusiasts. Every six weeks, we delve into exciting new titles, offering members exclusive access to excerpts, articles from authors, and video interviews.

Topics:

Source: www.newscientist.com

Clanker! Exploring the Aggressiveness of This Robot Slur on the Internet | Artificial Intelligence (AI)

Name: Clanker.

Age: 20 years old.

Presence: Everywhere, particularly on social media.

That seems somewhat derogatory. Indeed, it’s considered a slur.

What type of slur? A slur targeting robots.

Is it because they are made of metal? Yes, it’s often used to insult physical robots like delivery bots and autonomous vehicles, but it increasingly targets AI chatbots such as ChatGPT.

I’m not familiar with this – why would I want to belittle AI? Take your pick: AI systems confidently promote utterly false narratives, generate “slop” (low-quality, obviously machine-made content), or simply lack human qualities.

Does AI care about being insulted? It’s a complex philosophical issue, and the consensus is “no.”

So why does it matter? People feel frustrated with technology that can become widespread and potentially disrupt job markets.

They’re coming over here, taking our jobs! That’s the notion.

Where did this slur originate? It was first used in a 2005 Star Wars video game to describe battle droids, but “clanker” gained popularity through the Clone Wars TV series. It then spread to Reddit, memes, and TikTok.

Is that truly the best we can do? Popular culture has birthed other anti-robot slurs. There’s “Toaster” from Battlestar Galactica and “Skin Job” from Blade Runner, but “Clanker” seems to have taken the lead for now.

It seems like a frivolous waste of time, but I suppose it’s largely harmless. You might think so, yet some worry that throwing around “clanker” could normalize real bigotry.

Oh, come on. Popular memes and parody videos often use “clanker” in ways that echo racial slurs.

So what? They’re just clankers. “This inclination to use such terms reveals more about our insecurities than about the technology itself,” says linguist Adam Aleksic.

I’m not anti-robot, but I wouldn’t want my daughter to marry one. Can you hear how that sounds?

I feel like I’ll be quite embarrassed about all this in ten years. Probably. Some argue that by mocking AI, we risk elevating it to a human level that isn’t guaranteed.

That’s definitely my view. However, “Roko’s basilisk” suggests that a future AI could punish those who didn’t help it come into being.

I’ll keep calling them clankers, thanks. Then we might one day find ourselves apologizing to our robot overlords for past injustices.

Will they find humor in this? Perhaps one day Clanker will have a sense of humor about it.

Say: “This inclination to use such terms reveals more about our insecurities than about the technology itself.”

Don’t say: “Some of my best friends are Clankers.”

Source: www.theguardian.com

The US Military Aims to Enhance Internet Security Through Quantum Technology.


Can we add quantum to the internet to enhance safety?

Nicolinino / Aramie

The U.S. military has launched a program aimed at augmenting traditional communication infrastructure with quantum devices to improve the security of information shared over the Internet.

Quantum networks use the quantum states of particles to share information, offering strong security guarantees. For instance, messages encoded in these quantum states cannot be copied without detection, thanks to the no-cloning property of quantum mechanics. A number of quantum communication networks have already been established around the world.

However, the development of a fully functional quantum internet remains restricted due to various unresolved technological challenges. Instead of awaiting the resolution of these issues, the U.S. Defense Advanced Research Projects Agency (DARPA) has propelled a program focused on uncovering the immediate advantages of integrating quantum technologies into existing communication networks.

The agency emphasizes its goal of pinpointing practical, beneficial quantum enhancements available in the near term. “We can’t convert everything from classical to quantum,” remarks Allison O’Brien, DARPA program manager of the Quantum-Augmented Network (QuANET) initiative.

In August, the QuANET team held a hackathon culminating in a tangible demonstration: light was placed into a specific quantum state and used to transmit images, including the DARPA logo and simple cat graphics. This initial trial of the quantum-augmented network achieved a bitrate sufficient to stream high-resolution video.

O’Brien notes that the demonstrated quantum state is just one of many quantum properties the QuANET initiative is investigating. Researchers are also delving into hyperentanglement, in which multiple properties of light are simultaneously linked through quantum entanglement. Initial mathematical models suggest this could allow more secure data to be encoded in fewer optical signals, optimizing resource use within quantum networks.
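As a rough illustration (this is a textbook example, not the specific states used in the program), a hyperentangled photon pair is entangled in more than one degree of freedom at once, for example polarization and time bin:

```latex
% Hyperentangled two-photon state (illustrative):
% entangled in polarization (H/V) and in time bin (early e / late l) at the same time
|\Psi\rangle \;=\; \tfrac{1}{2}\,
\bigl(|HH\rangle + |VV\rangle\bigr) \otimes \bigl(|ee\rangle + |ll\rangle\bigr)
```

Because each degree of freedom carries its own entangled correlations, a single photon pair can carry more information than one entangled in polarization alone, which is the resource-saving effect described above.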

Meanwhile, the team is exploring the prospect of generating light with certain quantum-like characteristics, but without fully altering the physical properties at a fundamental level.

Furthermore, Quanet researchers are designing quantum network interface cards that integrate with communication devices to facilitate the transmission and reception of quantum signals.

Numerous questions remain concerning the practical utility of these innovations, including optimal deployment stages and network design levels. However, O’Brien reassures that Quanet is uniting experts in quantum physics, electrical engineering, and networking to comprehensively address these inquiries.

“Quantum networks are not designed to be a universal solution,” says Joseph Lukens at Purdue University, Indiana. They excel at specific tasks, and performing those tasks effectively requires some conventional networking components. “The future lies in the seamless integration of quantum networks with traditional ones,” Lukens asserts. He believes initiatives like QuANET are valuable, despite the many open questions about how quantum technology might enhance our well-established internet infrastructure.

If this program successfully devises a means for users to activate an ultra-secure “quantum mode” on their devices, it will mark a significant achievement. In that scenario, we could all benefit from these advancements without needing to understand the complexities of quantum physics, says Lukens.

Topics:

Source: www.newscientist.com

Internet Access Should Be Recognized as a Fundamental Human Right

In 2024, 2.6 billion people (nearly a third of the global population) were still offline, as reported by the International Telecommunication Union (ITU). That same year, Freedom House estimated that more than three-quarters of people with internet access live in countries where individuals have been arrested for sharing political, social, or religious content online, and that nearly two-thirds of global internet users experience some form of online censorship.

The accessibility and quality of internet connections significantly impact how individuals lead their lives, a fact that deserves serious consideration. Having free and unobstructed internet access is no longer merely a luxury.

Human rights ensure a baseline of decent living conditions, as established by the UN General Assembly in the 1948 Declaration. In today’s digital landscape, the exercise of these rights—ranging from free speech to access to primary education—depends heavily on internet connectivity. For instance, many essential public services are transitioning online, and in several areas, digital services are the most viable alternatives to the absence of physical banks, educational institutions, and healthcare facilities.

Given the critical significance of internet access today, it must be officially recognized as a standalone human right by the United Nations and national governments. Such recognition would provide legal backing and obligations for international support that are often missing at the state level.

The ITU projects that achieving universal broadband coverage by 2030 will require an investment of nearly $428 billion. While this is a substantial sum, the benefits of connecting the remaining portion of humanity—enhanced education, economic activity, and health outcomes—far outweigh the costs.

Ensuring a minimum standard of connectivity is already an attainable goal. This includes providing 4G mobile broadband coverage, consistent access to smartphones, and affordable data plans for individuals that cost less than 2% of the average national income for 2GB per person, along with opportunities to develop essential digital skills.
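As a back-of-the-envelope sketch of that affordability benchmark (the function name and the figures below are hypothetical illustrations, not ITU data), a 2GB plan counts as affordable when it costs less than 2% of average income:

```python
def plan_is_affordable(plan_cost, avg_income, threshold=0.02):
    """Affordability test sketched from the 2% benchmark: a 2GB data
    plan should cost less than `threshold` of average national income
    (hypothetical illustration, not an official ITU formula)."""
    return plan_cost < threshold * avg_income

# Hypothetical example: a $3 plan against a $200 average monthly income
print(plan_is_affordable(3.0, 200.0))  # True  (3 < 4)
print(plan_is_affordable(5.0, 200.0))  # False (5 is not < 4)
```

The same check scales to any currency, since only the ratio of plan cost to income matters.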

However, having internet access alone is not sufficient for upholding human rights. As highlighted by the United Nations, misuse of technology for monitoring populations, gathering personal data for profit maximization, or spreading misinformation constitutes oppression rather than empowerment.

This right entails that states should respect users’ privacy, opposing censorship and the manipulation of information online. Businesses should prioritize human rights, especially users’ privacy, and actively combat misinformation and abuse on their platforms in line with regulations governing social media.

In 2016, the United Nations affirmed that people must be protected online just as they are offline, a principle first suggested in 2003.

The time to act is now. Advocating for universal internet access as a human right calls for political action. We cannot afford to see the internet degrade from a tool for human advancement to one of division. Establishing this right will be a powerful measure to ensure that the internet serves the interests of all, not just a select few.

Merten Reglitz is a philosopher and author of Free Internet Access as a Human Right

Topic:

Source: www.newscientist.com

Social Media Continues to Promote Suicide-Related Content to Teens Despite New UK Safety Regulations

Social media platforms continue to disseminate content related to depression, suicide, and self-harm among teenagers, despite the introduction of new online safety regulations designed to safeguard children.

The Molly Rose Foundation created a fake account pretending to be a 15-year-old girl and interacted with posts concerning suicide, self-harm, and depression. This led to the algorithm promoting accounts filled with a “tsunami of harmful content on Instagram reels and TikTok pages,” as detailed in the charity’s analysis.

An alarming 97% of recommended videos viewed on Instagram reels and 96% on TikTok were found to be harmful. Furthermore, over half (55%) of TikTok’s harmful recommended posts included references to suicide and self-harm, while 16% contained protective references to users.

These harmful posts garnered substantial viewership. One particularly damaging video was liked over 1 million times on TikTok’s For You Page, and on Instagram reels, one in five harmful recommended videos received over 250,000 likes.

Andy Burrows, CEO of The Molly Rose Foundation, stated: “Persistent algorithms continue to bombard teenagers with dangerous levels of harmful content. This is occurring on a massive scale on the most popular platforms among young users.”

“In the two years since our last study, it is shocking that the magnitude of harm has not been adequately addressed, and that risks have been actively exacerbated on TikTok.

“The measures instituted by Ofcom to mitigate algorithmic harms are, at best, temporary solutions and are insufficient to prevent preventable damage. It is crucial for governments and regulators to take decisive action to implement stronger regulations that platforms cannot overlook.”

Researchers examining platform content from November 2024 to March 2025 discovered that while both platforms permitted teenagers to provide negative feedback on content, as required by Ofcom under the online safety law, this function also allowed for positive feedback on the same material.

The foundation’s report, produced in conjunction with Bright Data, indicates that while the platforms have made it harder to use hashtags to search for hazardous content, they still amplify harmful material through personalized AI recommendation systems once a user engages with it. The report further observed that platforms often rely on overly broad definitions of harm.

This study provided evidence linking exposure to harmful online content with increased risks of suicide and self-harm.

Additionally, it was found that social media platforms profited from advertisements placed next to numerous harmful posts, including those from fashion and fast food brands popular among teenagers as well as UK universities.


Ofcom has begun implementing child safety codes under the online safety laws, aimed at “taming toxic algorithms.” The Molly Rose Foundation, which receives funding from Meta, is concerned that regulators have costed these improvements at a mere £80,000.

A spokesperson for Ofcom stated, “Changes are underway. Since this study was conducted, new measures have been introduced to enhance online safety for children. These will make a significant difference, helping to prevent exposure to the most harmful content, including materials related to suicide and self-harm.”

Technology Secretary Peter Kyle mentioned that 45 sites have been under investigation since the enactment of the online safety law. “Ofcom is also exploring ways to strengthen existing measures, such as employing proactive technologies to protect children from self-harm and recommending that platforms enhance their algorithmic safety,” he added.

A TikTok spokesperson commented: “TikTok accounts for teenagers come equipped with over 50 safety features and settings that allow for self-expression, discovery, and learning while ensuring safety. Parents can further customize content and privacy settings for their teens through family pairing.”

A Meta spokesperson stated: “We dispute the claims made in this report, which is based on a limited methodology.

“Millions of teenagers currently use Instagram’s teenage accounts, which offer built-in protections that limit who can contact them, the content they can see, and their time spent on Instagram. Our efforts to utilize automated technology continue in order to remove content that promotes suicide and self-harm.”

Source: www.theguardian.com

AOL to Terminate Dial-Up Internet Service After 30 Years: The End of an Era | US News

With the shutdown of AOL’s dial-up internet in late September, the iconic sounds, symbols, and experiences that ushered millions of Americans into the early digital age will come to an end.

AOL, or America Online, announced recently that it has evaluated its products and services and will discontinue dial-up connectivity options, ceasing support for its dial-up software as of September 30th.

These dates signal the end of an era for countless Americans from various generations: millennials, Gen X, Baby Boomers, and beyond. The familiar sounds of modems establishing connections and the excitement of getting online marked the dawn of a new era filled with wires, computer mice, emails, chat rooms, instant messaging, and the bright allure of digital screens.

Dial-up internet didn’t emerge in isolation; its roots trace back to the late 1970s, when networks such as Usenet relied on dial-up connections.

In 1979, CompuServe became the first to offer dial-up online information services to consumers.

By the mid-1980s, virtual communities started to emerge with platforms like The Well, which was founded in the Bay Area by Stewart Brand and Larry Brilliant, coinciding with the founding of America Online in 1985.

At its peak, in the late 1990s and early 2000s, AOL boasted over 23 million subscribers in the United States, solidifying its status as the leading internet service provider of that era. As noted by Jigso AI, new users were acquired approximately every six seconds.

AOL became a household name with its distinct “You’ve got mail!” notification, but it also became infamous for its merger with Time Warner, announced in 2000, which is often viewed as one of the most disastrous deals in media history.

Gradually, the iconic sounds of dial-up began to fade as faster cable internet services emerged in 1995, leveraging existing cable television infrastructure.

Today, only a small fraction of U.S. households (around 175,000) still rely on dial-up internet access, a legacy of the era of the 1990s browser wars between Microsoft and Netscape. As AI reshapes browsing, the days of dial-up seem ever more distant.

The rise of dial-up internet was partially fueled by demand for adult content, and its decline is now seen as part of the nostalgic farewell to other bygone pop culture artifacts, such as CDs, pagers, and landlines.

Source: www.theguardian.com

Transatlantic Social Media Clash: Impact of UK Online Safety Laws on Internet Safety

The UK’s new online safety laws are generating considerable attention. As worries intensify about the accessibility of harmful online content, regulations have been instituted to hold social media platforms accountable.

However, just days after their implementation, novel strategies for ensuring children’s safety online have sparked discussions in both the UK and the US.

Recently, Nigel Farage, leader of the populist Reform UK party, found himself in a heated exchange with a Labour government minister after announcing his intent to repeal the law.

In parallel, US Republicans have met with British lawmakers and the communications regulator Ofcom. The ramifications of the new law are also being watched closely in Australia, where plans are afoot to bar under-16s from social media.

Experts note that the law embodies a tension between swiftly eliminating harmful content and preserving freedom of speech.

Senior Reform UK figure Zia Yusuf stated:

Responding to criticism of the UK legislation, technology secretary Peter Kyle remarked, “If individuals like Jimmy Savile were alive today, they would be committing crimes online, and Nigel Farage claims to be on their side.”

Kyle was referring to measures in the law intended to shield children from grooming via messaging apps. Farage condemned the technology secretary’s comments as “unpleasant” and demanded an apology, which is unlikely to be forthcoming.

“It’s below the belt to suggest they’ll do anything to assist individuals like Jimmy Savile in causing harm,” Farage added.

Farage’s are not the only concerns raised about the law. US Vice President JD Vance has claimed that freedom of speech in the UK is “in retreat.” Last week, Republican Rep. Jim Jordan, a critic of the legislation, led a group of US lawmakers in discussions with Kyle and Ofcom regarding the law.

Jordan labeled the law as “UK online censorship legislation” and criticized Ofcom for imposing regulations that “target” and “harass” American companies. A bipartisan delegation also visited Brussels to explore the Digital Services Act, the EU’s counterpart to the online safety law.

Scott Fitzgerald, a Republican member of the delegation, noted the White House would be keen to hear the group’s findings.

Worries within the Trump administration have even extended to threats of visa restrictions against Ofcom and EU personnel. In May, the State Department announced it would block entry to the US for “foreigners censoring Americans.” Ofcom has asked for “clarity” regarding the planned visa restrictions.

The intersection of free-speech concerns with economic interests is notable. Major tech platforms including Google, YouTube, Facebook, Instagram, WhatsApp, Snapchat, and X are all based in the US and may face fines of up to £18 million or 10% of global revenue, whichever is greater, for violations. For Meta, the parent company of Instagram, Facebook, and WhatsApp, this could mean fines reaching $16 billion (£11 billion).
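The “whichever is greater” structure of the penalty can be sketched in a few lines (a hedged illustration; the function name and the integer rounding are mine, not taken from the act):

```python
def max_osa_fine(global_revenue_gbp: int) -> int:
    """Maximum penalty sketch for the UK Online Safety Act: the greater
    of a flat 18 million pounds or 10% of qualifying worldwide revenue.
    Uses integer arithmetic for illustration only."""
    return max(18_000_000, global_revenue_gbp // 10)

# A small platform with 50m pounds of revenue hits the flat floor:
print(max_osa_fine(50_000_000))        # 18000000
# Revenue of roughly 110bn pounds implies roughly 11bn pounds:
print(max_osa_fine(110_000_000_000))   # 11000000000
```

At Meta’s scale, 10% of worldwide revenue dwarfs the £18m floor, which is where a figure in the region of £11 billion comes from.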

On Friday, X, the social media platform owned by self-proclaimed free speech advocate Elon Musk, issued a statement opposing the law, warning that it could “seriously infringe” on free speech.

Signs of public backlash are evident in the UK. A petition calling for the law’s repeal has garnered over 480,000 signatures, making it eligible for debate in Parliament, and was shared on social media by far-right activist Tommy Robinson.

Tim Bale, a professor of politics at Queen Mary University of London, is skeptical that the law will become a major voting issue.

“No petition or protest has significant traction for most people. While this resonates strongly with those online—on both the right and left—it won’t sway a large portion of the general populace,” he said.

According to a recent Ipsos Mori poll, three out of four UK parents are worried about their children’s online activities.

Beeban Kidron, a crossbench peer and prominent advocate for online child safety, told the Guardian that she is “more than willing to engage Nigel Farage and his colleagues on this issue.”


“If these companies are aiming their algorithms at children, why would Reform put itself on the side of big tech?”

The child-safety rules that prompted the latest backlash mandate age verification on adult sites to prevent underage access. They also require platforms to protect children from content that endorses suicide, self-harm, and eating disorders, and to curtail material that incites hatred or promotes harmful substances and dangerous challenges.

Some lawful content has nonetheless been caught by the rules. In an article for the Daily Telegraph, Farage alleged that footage of anti-immigrant protests had been “censored”, along with material related to the Rotherham grooming gang scandal.

Such instances were observed on X, which restricted a speech by the Conservative MP Katie Lam about the UK’s child grooming scandal. The content was labelled with a notice stating: “Local laws temporarily restrict access to this content until X verifies the user’s age.” The Guardian could not access the age verification service on X, suggesting that, until age checks are fully operational, the platform defaults many users to a child-friendly experience.

X was contacted for comment regarding its age checks.

On Reddit, forums dedicated to alcohol abuse and pet care will implement age checks before granting access. A Reddit spokesperson confirmed that the checks are enforced under the online safety law to limit content that is illegal or harmful to users under the age of 18.

Big Brother Watch, a civil liberties and privacy organization, said the examples from Reddit and X exemplify the overreach of the new legislation.

An Ofcom representative stated that the law aims to protect children from harmful and criminal content while safeguarding free speech: “There is no requirement to restrict legal content for adult users.”

Mark Jones, a partner at the London law firm Payne Hicks Beach, cautioned that social media platforms might over-censor legitimate content out of caution, given their obligations to remove illegal material and content harmful to children.

He added that wrong calls on content are likely, given the pressure on platforms to act quickly against harmful material while respecting free-speech principles.

“To effectively curb the spread of harmful or illegal content, decisions must be made promptly; however, that urgency can lead to incorrect choices. Such is the reality we face.”

The latest measures under the online safety law are only the beginning.

Source: www.theguardian.com

UK Online Safety Law Poses a Threat to Free Speech and Internet Safety

Elon Musk’s platform, X, has warned that the UK’s Online Safety Act (OSA) may “seriously infringe” on free speech due to its measures aimed at shielding children from harmful content.

The social media company said the law’s ostensibly protective aims are marred by the aggressive enforcement tactics of the communications watchdog, Ofcom.

In a statement shared on its platform, X remarked: “Many individuals are worried that initiatives designed to safeguard children could lead to significant violations of their freedom of expression.”

It further stated that the UK government was likely aware of the risks, having made “conscious decisions” to enhance censorship under the guise of “online safety.”

“It is reasonable to question if British citizens are also aware of the trade-offs being made,” the statement added.

The law, politically contentious on both sides of the Atlantic, is facing renewed scrutiny following the 25 July introduction of new restrictions on under-18s’ access to pornography and other content deemed harmful to minors.

Musk, who owns X, labelled the law a “suppression of the people” shortly after the new rules took effect. He also shared a petition advocating for the repeal of the law, which has garnered over 450,000 signatures.

X found itself compelled to impose age restrictions on certain content. Reform UK joined the outcry, pledging to repeal the act, a commitment that led the British technology secretary, Peter Kyle, to accuse Nigel Farage of siding with people like the paedophile Jimmy Savile. Farage described the comments as “below the belt” and deserving of an apology.

Regarding Ofcom, X claimed the regulator is employing “heavy-handed” tactics in implementing the act, characterized by “a rapid increase in enforcement resources” and “additional layers of bureaucratic surveillance.”

The statement warned: “The commendable intentions of this law risk being overshadowed by the expansiveness of its regulatory scope. A more balanced and collaborative approach is essential to prevent undermining free speech.”

While X says it aims to comply with the law, it warned that the threat of enforcement and penalties, potentially reaching 10% of global revenue for platforms like X, could lead to increased censorship of legitimate content to avoid repercussions.

The statement also referred to plans for a National Internet Intelligence Research Team intended to monitor social media for indications of anti-migrant sentiments. While X suggested the proposal could be framed as a safety measure, it asserted that it “clearly extends far beyond that intention.”


“This development has raised alarms among free speech advocates, who characterize it as excessively restrictive. A balanced approach is essential for safeguarding individual freedoms, fostering innovation, and protecting children.”

A representative from Ofcom stated that the OSA includes provisions to uphold free speech.

They asserted: “Technology companies must address criminal content and ensure children do not access defined types of harmful material without needing to restrict legal content for adult users.”

The UK Department of Science, Innovation and Technology has been approached for comment.

Source: www.theguardian.com

UK Carrying Out 5 Million Extra Daily Age Checks on Porn Sites Under Online Safety Law | Internet Safety

Recent statistics indicate that since the implementation of age verification for pornographic websites, the UK is conducting an additional five million online age checks daily.

The Association of Age Verification Providers (AVPA) reported a significant increase in age checks across the UK since Friday, coinciding with the enforcement of mandatory age verification under the Online Safety Act.

Iain Corby, executive director of the AVPA, confirmed the sharp rise in daily checks.

In the UK, use of virtual private networks (VPNs), which mask users’ locations and allow them to bypass such restrictions, is rising rapidly. Four of the top five free apps in the UK Apple App Store are VPNs, and the popular provider Proton has reported an 1,800% surge in downloads.

Last week, Ofcom, the UK communications regulator, indicated it may open formal investigations into services with inadequate age checks. It said it will actively monitor compliance with the age verification requirements and may investigate specific services as needed.

The AVPA, the industry association representing UK age verification companies, has been tallying the checks performed for UK porn providers, which were mandated to implement “highly effective” age assurance by 25 July.

Member companies were asked to report “the number of checks conducted each day for highly effective age assurance.”

While the AVPA said it could not provide a baseline for comparison, it noted that effective age verification is new for dedicated UK porn sites, which previously asked users only to click to confirm their age.

An Ofcom spokesperson said: “Until now, children could easily stumble upon pornographic and other online content without seeking it out. Age checks are essential to prevent that. We must ensure platforms are adhering to these requirements and anticipate enforcement actions against non-compliant companies.”

Ofcom stresses that service providers should not promote the use of VPNs to circumvent age checks.

Penalties for breaching the online safety rules, including running insufficient age verification, can reach 10% of global revenue, and in severe cases a site can be blocked entirely in the UK.

Age verification methods endorsed by Ofcom and used by AVPA members include facial age estimation, which estimates a person’s age from live photos and video; verification through credit card providers, banks, or mobile network operators; photo ID matching, where a user’s ID is compared with a selfie; and a “digital identity wallet” containing proof of age.

Prominent pornographic platforms, including Pornhub, the UK’s leading porn site, have pledged to adopt the stringent age verification measures mandated by the Act.

The law compels sites and apps to protect children from various harmful content, specifically material that encourages suicide, self-harm, and eating disorders. The largest platforms must also act to prevent the dissemination of abusive content targeting people with characteristics protected under equality law, such as age, race, and gender.

Free speech advocates argue that the child-safety restrictions have led to material on X being unnecessarily age-gated, along with several Reddit forums dedicated to discussions of alcohol abuse.

Reddit and X have been approached for comment.

Source: www.theguardian.com

Should We Preserve the Pre-AI Internet Before It’s Contaminated?


Wikipedia already shows signs of huge AI input

Serene Lee/Sopa Images/Lightrocket via Getty Images

The emergence of AI chatbots marks a significant turning point: online content can no longer be assumed to be human-made. How are people responding to this transformation? Some are urgently striving to preserve “pure” data from the pre-AI period, while others advocate documenting AI’s own contributions, enabling future historians to analyze how chatbots evolve.

Rajiv Pant, an entrepreneur and former chief technology officer at the New York Times and the Wall Street Journal, views AI as a potential risk to information integrity, particularly for news articles that constitute the historical record. “Since the launch of ChatGPT, we’ve been grappling with this issue of ‘digital archaeology’, which is becoming increasingly pressing,” Pant remarks. “Currently, there’s no dependable way to differentiate between human-created content and that generated by large AI systems. This is a concern that extends beyond academia; it affects journalism, legal clarity, and scientific discovery.”

For John Graham-Cumming of the cybersecurity company Cloudflare, data generated before ChatGPT is akin to low-background steel: steel manufactured before the Atomic Age, prized for use in sensitive scientific and medical devices because it is free of the residual radioactive contamination that disrupts measurements.

Graham-Cumming has established a website, Lowbackgroundsteel.ai, which aims to archive data sources free of AI contamination, such as the complete Wikipedia archive from August 2022; Wikipedia itself already shows the impact of AI contributions.

“There were times we handled everything manually, but eventually the process became significantly augmented by chat systems,” he explains. “You can view this as a type of pollution or, more positively, as a way for humanity to advance with assistance.”

Mark Graham, who operates the Wayback Machine on the Internet Archive—an initiative that has been documenting the public Internet since 1996—expresses skepticism regarding the effectiveness of new data archiving initiatives, especially since the Internet Archive captures up to 160 terabytes of new information daily.

Graham hopes to build a repository of AI outputs for future researchers and historians. He plans to pose 1,000 questions to chatbots each day and record their responses, even leveraging AI for this extensive task, documenting the evolving outputs of AI for future human inquiry.

“You ask a specific question, receive an answer, and the next day, you can re-ask the same question to receive a potentially different response,” Graham comments.
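
The daily-query scheme Graham describes can be sketched in a few lines of Python. This is an illustrative sketch, not the Internet Archive's actual tooling; `ask_chatbot` is a hypothetical stand-in for any real chatbot API call.

```python
import datetime
import json

def archive_responses(questions, ask_chatbot, store):
    """Record today's answer to each question as a JSON line.

    `ask_chatbot` is any callable mapping a question string to an
    answer string (hypothetical; stands in for a real chatbot API).
    `store` is any list-like sink, e.g. lines destined for a file.
    Re-running this daily builds a time series of drifting AI output."""
    today = datetime.date.today().isoformat()
    for question in questions:
        record = {"date": today,
                  "question": question,
                  "answer": ask_chatbot(question)}
        store.append(json.dumps(record))
    return len(store)
```

Asking the same fixed question set every day and diffing the stored answers is what makes yesterday's and today's model behaviour comparable.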

Graham-Cumming emphasizes he is not against AI; instead, he believes preserving human-generated content can actually enhance AI models. This is crucial since subpar AI outputs may harm the training of new models, leading to “model collapse.” Preventing this occurrence is a worthy endeavor, he asserts.

“At some point, one of these AIs is bound to contemplate concepts that humans haven’t considered. It will prove a mathematical theorem or innovate something entirely new.”


Source: www.newscientist.com

Face Scans and ID Checks: The Coming Transformation of Internet Use in Australia

As the saying goes, “On the Internet, nobody knows you’re a dog.” Yet in Australia, various platforms, from search engines to social media and app stores, may require confirmation of your age.

The Albanese government has proudly announced its law prohibiting under-16s from using social media, set to take effect in December. But a new industry code, created in collaboration with the tech industry and the eSafety commissioner, Julie Inman Grant, may significantly change how Australians navigate the internet.

Online services are implementing measures such as reviewing account history, facial age estimation, and age verification via bank cards. Identification documents, including driver licences, will also be used. The industry code has been in effect since late June and applies to search engine logins from December.

The code requires search engines to assure the age of all logged-in users. If an account holder is identified as under 18, safe search will be activated, blocking pornography and other unsuitable material from search results.



Additionally, six more draft codes under consideration by the eSafety Commissioner will enforce similar age verification measures across various services regularly used by Australians.

Platforms that host or facilitate access to content like pornography, self-harming material, simulated violence, or any highly inappropriate content for minors must implement restrictions to prevent child access.

Last month, Inman Grant addressed the National Press Club, emphasizing the necessity for regulations to ensure child safety in all online spaces.

“It is vital to adopt a layered safety strategy that assigns responsibility and accountability to key chokepoints within the technology stack, such as app stores and device levels.”

The eSafety commissioner signalled the intent behind the code during its development; recent news coverage has renewed focus on its critical elements.

Some individuals welcome these changes. Recent reports indicate that Elon Musk’s AI chatbot Grok has integrated pornographic chat features, even though the app is rated for ages 12 and up in Apple’s App Store; child safety advocates have urged Apple to reevaluate its ratings and strengthen protective measures on its platform.

Both Apple and Google have begun implementing age verification at the device level, and apps may also be utilized to assess user age.








Justin Warren, founder of the tech analysis firm PivotNine, said the code represents a significant shift in how Australians’ communications are regulated.

“It seems like a considerable overreaction following years of policy stagnation regarding the influence of major foreign tech companies,” he stated.

“It’s darkly amusing that more authority over Australians’ online experiences will be handed to those same foreign tech giants.”

Digi, the industry organization that collaborated with the eSafety commissioner to establish the code, has pushed back on suggestions it would diminish online anonymity, clarifying that the code targets specific platforms that host or grant access to certain content.

“The Code introduces proportionate safeguards for accessing pornography and materials considered inappropriate for users under 18, such as highly violent content,” remarked Dr. Jenny Duxbury, Director of Digital Policy at Digi.



“These codes offer protective measures for specific circumstances rather than blanket identity verification requirements across the Internet.”

Duxbury noted that companies could utilize inference methods like account history and usage patterns to approximate users’ ages.

“Some services might opt for inference methods because they are effective and unobtrusive.”

However, many Australians may be caught off guard by the changes, cautioned John Pane, chair of Electronic Frontiers Australia.

“Many Australians are aware of the discussion around social media, but the average person has little idea they will need to authenticate themselves to access content rated for over-18s.”

Failure to adhere to the code could result in hefty penalties, including fines of up to $49.5m. Non-compliant websites may also be delisted from search results.

Pane advocates for the federal government to reform privacy laws and introduce AI regulation that enforces risk assessments and bans AI uses deemed to pose unacceptable risks.

He stresses the importance of legislating a duty of care to users for all digital service platforms.

“We believe this strategy would be more effective than relying solely on regulatory mandates,” he asserted.

Warren expressed skepticism that age verification technologies are effective, and highlighted that the search engine code was introduced before the outcomes of the government’s own review were known.

“Ultimately, theoretical applications must align with practical implementations.”

In response to a recent media report concerning the code, the eSafety Commissioner’s Office defended the age verification requirements for search engines.

“The sector’s code represents a critical opportunity to establish important safeguards, as search engines are key gateways for children to potentially harmful content,” stated the office.

Source: www.theguardian.com

WeTransfer Assures Users Their Content Won’t Be Used for AI Training Following Backlash | Internet

The well-known file-sharing service WeTransfer has clarified that user content will not be used to train artificial intelligence, following a backlash over recent changes to its terms of service.

The company, widely utilized by creative professionals for online work transfers, had suggested in the updated terms that uploaded files might be utilized to “enhance machine learning models.”

The provision had also indicated that the service reserved the right to “reproduce, modify, distribute, and publish” user content, adding to confusion over the revised wording.

A spokesperson for WeTransfer said user content has never been used internally to test or develop AI models, and that the Netherlands-based company had only considered specific uses of AI, such as content moderation.

The company assured, “There is no change in how WeTransfer handles content.”

On Tuesday, WeTransfer updated its terms and conditions, removing references to machine learning and AI to clarify the language for users.

The spokesperson noted, “We hope that by removing the machine learning reference and refining the legal terminology, we can alleviate customer concerns regarding the updates.”

Currently, the relevant section of the terms of service states: “You hereby grant us a royalty-free license to use your content for the purposes of operating, developing, and improving the service, all in accordance with our privacy and cookie policy.”

Some service users, including a voice actor, a filmmaker, and a journalist, shared concerns about the new terms on X and threatened to terminate their subscriptions.

The use of copyrighted material by AI companies has become a contentious issue within the creative industry, which argues that using creators’ work without permission jeopardizes their income and aids in the development of competing tools.

The Writers’ Guild of Great Britain expressed relief at WeTransfer’s clarification, urging companies to “never use members’ work to train AI systems without consent.”

WeTransfer affirmed, “As a company deeply embedded in the creative community, we prioritize our customers and their work. We will continue our efforts to ensure WeTransfer remains the leading product for our users.”

Founded in 2009, the company enables users to send large files via email without needing an account. The service now caters to 80 million users a month across 190 countries.

Source: www.theguardian.com

Young Iranians Strive to Overcome Internet Blackout: It Feels Like Being Trapped by Walls

Amir* hasn’t slept in days. From his apartment in northern Tehran, the 23-year-old has spent nights searching for a fragile digital connection that can temporarily bypass the internet blackout.

For 13 days, Iran faced a nearly complete internet shutdown, severely limiting access to information from the onset of Israel’s strikes until late on Wednesday. Throughout, a group of young Iranians worked tirelessly to ensure their voices were heard beyond their borders.

“Using a VPN is no longer effective. To navigate this internet blackout, we’re relying on a special proxy link—essentially a ‘secret tunnel’ that channels messages through non-Iranian servers.

“These links are built into the app’s features […] They direct traffic from internal servers. Each link only works for a few hours before it fails. So, I’m constantly on the lookout for new ways to communicate with my people.”

The Iranian government blames external forces for restricting internet access during the conflict with Israel, claiming the network is being exploited for military purposes. A local source informed the Guardian that only correspondents from approved foreign media can access the internet.

While domestic messaging apps are operational, many young Iranians lack confidence in their security.

Amir remarked: “We have local apps, but they’re utterly unreliable. The government takes every opportunity to surveil us, particularly targeting student leaders.”

Last week, Amnesty International urged the authorities to lift the communication blackout, stating that it was “preventing individuals from finding safe routes, accessing vital resources, and sharing information.”

Another student leader, Leila*, 22, who lives in Abbasabad in northern Tehran, said she managed to reconnect during the shutdown thanks to help from abroad. “My boyfriend in Europe sent me a proxy configuration link via text. Without it, nothing works. The internet sporadically operates for a few minutes before shutting down again.”

The blackout not only severed connections with the outside world but also complicated life amid the ongoing Israeli bombardment. “It feels like being enclosed by a wall,” remarked Tehran student Arash*. “We’ve lost the ability to help each other and to share independent news, while the sound of bombs contrasts sharply with the silence of state media.”

For Amir, the most alarming aspect is how the perception of war is becoming normalized. “We’re starting to treat this as normal,” he expressed, “but war is anything but normal.” He noted their recognition that the rattling of windows signifies either an air raid or an explosion.

The blackouts intensified his fears amid the war. “That’s what erases us… it makes us invisible. Yet here we are. We still wish to connect with a free world.”

* The name has been changed

Source: www.theguardian.com

Strategies in the Iran-Israel Conflict: Internet Blackouts, Cryptocurrency Destruction, and Home Surveillance

The ongoing conflict between Israel and Iran is not only a confrontation involving combatants, drones, and explosive devices but is also intensifying in the digital domain. Both nations have a rich history of engaging in cyber warfare. A significant point of contention is Iran’s nuclear initiative, which was famously attacked by the sophisticated Stuxnet worm—one of the early forms of cyber sabotage aimed at causing real-world damage.

In response to perceived threats, Iran recently enacted a near-total internet blackout. My colleague Johana Bhuiyan provides insights:

According to the cybersecurity firm Cloudflare, Iranian internet traffic is “currently around 97% lower than levels from a week ago.”

The throttling of internet access followed claims from an anti-Iranian hacking group, possibly linked to Israel, that it had breached Iran’s state-owned Bank Sepah. An Iranian government spokesperson, Fatemeh Mohajerani, indicated on Twitter/X that officials were limiting internet access to thwart further cyber intrusions.

On Wednesday, concerns in Iran were validated. My colleague Dan Milmo reports:

Hacking groups associated with Israel are purportedly behind a $90m (£67m) heist at an Iranian cryptocurrency exchange.

The group, which calls itself Gonjeshke Darande (“Predatory Sparrow”), announced it had successfully hacked the Nobitex exchange, mere days after asserting it had destroyed data at Iran’s state-owned Bank Sepah.

Elliptic, a consultancy specializing in cryptocurrency crime, reported identifying over $90m in cryptocurrency transfers from Nobitex to hacker wallets. The hackers effectively “burned” these assets by sending them to “vanity addresses” for which no private keys exist, rendering the funds permanently inaccessible, according to Elliptic.
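
The burn mechanism can be illustrated with a short sketch. The code below builds a Bitcoin-style "burn" address whose 20-byte hash field is a chosen tag rather than the hash of any real public key; since nobody knows a key that hashes to that value, anything sent there is unspendable. This is a conceptual illustration, not the actual addresses used in the Nobitex incident.

```python
import hashlib

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check_encode(payload: bytes) -> str:
    """Base58Check-encode a versioned payload (Bitcoin-style):
    append a 4-byte double-SHA256 checksum, then base58-encode."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = BASE58[rem] + out
    # Each leading zero byte is encoded as a leading '1'.
    for b in data:
        if b != 0:
            break
        out = "1" + out
    return out

def burn_address(tag: bytes) -> str:
    """Build a mainnet P2PKH address whose 20-byte hash is a chosen
    tag (zero-padded) instead of the hash of a public key. Because
    no known key hashes to this value, coins sent here are burned."""
    assert len(tag) <= 20
    payload = b"\x00" + tag.ljust(20, b"\x00")  # version 0x00 = mainnet P2PKH
    return base58check_encode(payload)
```

The checksum makes the address syntactically valid, so wallets will happily send to it; only the missing private key makes it a one-way destination.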

Iran has attempted to retaliate; however, much like the broader conflict, Israel’s strikes appear more effective and disruptive. Israeli authorities have warned citizens that Iran is hijacking internet-connected home security cameras to gather real-time intelligence, Bloomberg reports. Cybersecurity experts say Hamas and Russian hackers have employed similar tactics. While home security cameras may represent a new front in the cyber conflict, hijacking them cannot disrupt central banking systems the way the Israel-linked attacks have.

By the end of Friday, Iran seemed to have lifted internet restrictions for some users, as reported by The New York Times. However, even those with limited access felt their connections were precarious.

City of Love? PornHub Takes a Stand Against Paris Over Children’s Age Verification Online

Photo: Nikolas Kokovlis/Nurphoto/Rex/Shutterstock

PornHub, widely regarded as the most visited adult content site globally, resumed operations in France after a three-week blackout.

The platform’s owner, Aylo, suspended access in protest against a new French regulation requiring adult websites to verify user ages using credit cards or identification. Rather than implement the age checks, Pornhub withdrew access for approximately 70 million users.

Following this, Pornhub returned online after French courts temporarily put the law on hold while reviewing its compatibility with European Union law. The dispute between Paris and Pornhub reflects a growing global debate around online age verification.

This debate sits at a challenging intersection of online regulation: protecting children while upholding privacy and freedom of expression, an area fraught with complexity even in the US.

As of now, over 20 states have enacted age verification laws affecting adult content websites, and Pornhub has blocked access in 17 of them. Texas, with a population of 31 million, is a prime example. Its legislature passed a law in September 2023 mandating ID verification for accessing adult sites, and by March Pornhub had gone dark in Texas, greeting users with a message calling the law “ineffective, haphazard, and dangerous.” In Louisiana, where Pornhub complied with a similar law, site traffic plummeted by 80%, a measure of how strong a deterrent ID requirements are. The US supreme court is considering whether such laws infringe on constitutional rights to free speech.

Research on the US laws indicates they are ineffective at achieving their stated goals: online search data suggests that people in states with age verification laws seek out non-compliant adult sites to bypass age restrictions and use VPNs to disguise their locations from internet service providers.

Other battlegrounds extending beyond age verification include restrictions on social media for underage users. Australia, which has enacted a ban on minors accessing social media, is currently testing various enforcement technologies but has found them lacking.


The UK is emerging as the next battleground. New online safety legislation mandating age verification for adult content will take effect in July. Will London mirror Paris, or follow Texas?

Dissecting the Trump Phone

Composite: Guardian/Getty Images/Trump Mobile/Trump Watch/eBay

Last week, Donald Trump introduced a mobile phone brand named “T1,” a gold handset engraved with his name and an American flag. It is marketed especially in Alabama, California, and Florida, with a monthly service plan priced at $47.45.

However, the T1 phones face significant challenges in delivering on their promises. The manufacturer will be subject to the same market pressures as other phone makers: both inexpensive labor and specialized electronics expertise are largely concentrated in China, not the US. This partly explains why Apple products are labeled “Designed in California” rather than made there.

Looking ahead, analysts predict that Trump’s proposed tariffs could cause smartphone prices to soar by double- or even triple-digit percentages. The US currently lacks an electronics supply chain capable of fully assembling mobile phones domestically. In April, analysts at UBS cautioned that the price of an iPhone 16 Pro Max with 256GB could rise by 79%, from $1,199 to approximately $2,150, if a total tariff of 145% were implemented. Apple appeared to anticipate this, expediting the shipment of nearly $2 billion worth of iPhones to the US before tariffs on China took effect.

One phone that is assembled in the US, the Liberty Phone, is functional but not entirely manufactured there, and it costs around $2,000, roughly four times the T1’s advertised price. The Liberty Phone sources certain components domestically but still relies on screens, batteries, and cameras manufactured overseas. According to the Wall Street Journal, the CEO of Purism, the company that makes the device, said its operating system can run only basic applications like calculators and web browsers.

Although the Liberty Phone’s specs fall short of those promised for the Trump T1, its price is steeper, and the likelihood of the T1 reaching the market as promised appears slim: the technical features Trump has advertised would ordinarily command a price nearly double what he has claimed. An analysis compiled by The Verge suggests that a Chinese firm may end up manufacturing phones under Trump’s brand label.

Eric Trump, who manages the venture alongside his brother Donald Jr., admitted that the initial batch of T1 phones was not made in the U.S. “Eventually, every phone will be produced in the United States,” he said last week.

Read more: Why can’t mobile phones be repaired in the U.S. to avoid Trump’s tariffs?

Wider Technology

Source: www.theguardian.com

Can AI Agents Regain Control of the Internet from Big Tech?

Autonomous AI agents may soon communicate across the Internet

Outflow Designs/Istockphoto/Getty Images

What does the future of the internet hold? As AI companies evolve, previously open web spaces are being overtaken by digital silos controlled by commercial AI models, sidelining enthusiasts and small businesses. In response, a coalition of grassroots researchers is determined to champion an open approach to AI.

Central to this effort is the notion of AI “agents.” These are software programs that navigate the web and interact with online platforms based on human directions, such as planning holidays and making bookings. Many perceive these agents as the next stage of evolution following services like ChatGPT, yet they face significant challenges in functionality. This is largely due to the web’s design, which favors human interaction; thus, developers are recognizing that AI agents require specialized protocols to effectively engage with online content, services, and each other.

“The objective is to establish infrastructure that lets bots communicate with each other, much as software already does,” explains Catherine Flick at Staffordshire University, UK.

Several competing solutions to this challenge have emerged. For instance, Anthropic, the creators of the Claude chatbot, have introduced the Model Context Protocol (MCP), which standardizes the way AI models connect to various data sources and tools. In April, Google announced its own take with the Agent2Agent (A2A) protocol, offering a distinct approach to this concept.

While these methods share similarities, they have important differences. MCP focuses on standardizing AI models’ connections to external data repositories and tools, creating a secure universal channel for two-way communication—akin to having a phone number or email for messaging. In contrast, A2A expands on this by enabling autonomous agents to discover one another, exchange information, and collaborate on tasks.

For instance, you can link your AI chatbot to the code-sharing platform GitHub via MCP, yet Google asserts that A2A could enable agents to manage job interviews, conduct calls, perform background checks—all in one streamlined process, with the agent team operating simultaneously.
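To make the MCP side of this concrete: MCP messages follow the JSON-RPC 2.0 format, with methods such as `tools/list` and `tools/call` for discovering and invoking a server’s tools. The sketch below only builds the message envelopes; the tool name `search_repos` and its arguments are hypothetical, and the real protocol also includes an initialize/capabilities handshake not shown here.

```python
import json

# Minimal sketch of the JSON-RPC 2.0 message shapes MCP uses (simplified;
# the real protocol begins with an initialize/capabilities handshake).

def make_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Ask the server what tools it exposes...
list_tools = make_request(1, "tools/list", {})

# ...then invoke one. The tool name and arguments here are hypothetical.
call_tool = make_request(2, "tools/call", {
    "name": "search_repos",
    "arguments": {"query": "quantum networking"},
})

print(json.dumps(call_tool, indent=2))
```

A2A messages cover a broader surface (agent discovery, task delegation), but the same envelope-building idea applies.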

However, because these protocols originate from major tech companies, there are concerns that whichever protocol wins out could be leveraged by its creator for commercial gain. MCP requires connections to run through a central server, while A2A carries assumptions of its own, chiefly that authorized agents will cooperate.

“We want to prevent an ‘agent internet’ from evolving into yet another alliance of silos,” warns Gaowei Chang, who chairs the AI Agent Protocol Group, founded in May under the World Wide Web Consortium (W3C) standards organization. Chang emphasizes the importance of inclusivity in developing this new layer of the internet. “If we genuinely believe that AI is a transformative technology for human society, we need an open, neutral community to guide protocol development, ensuring that its future is shared by all companies, not just a select few,” he states.

In pursuit of this goal, Chang has initiated an open-source alternative to the big tech agent protocols: the Agent Network Protocol (ANP), which predates both MCP and A2A. ANP lets AI agents discover each other and establish identities across the web, reminiscent of the early internet, when individuals created personal websites and email accounts without large tech intermediaries. ANP-driven agents can therefore function without a central authority; two AI models on the same device can even communicate directly without internet verification.

Flick supports the emergence of open-source, non-commercial alternatives for Agent AI. “Essentially, our aim is to restore the fundamental principle of democratization to the Internet, which is how it all began,” she remarks. Without such alternatives, she warns that tech giants could create “walled gardens” reminiscent of the challenges seen in app stores and social media platforms. “If we rely on major corporations for this, they will execute it primarily to maximize profits,” she cautions.

Google claims that its protocols are designed for universal benefit. “We will continue to enhance [A2A] to tackle real-world challenges businesses face in deploying agent frameworks. At its core, it’s structured for the future’s demands,” says Rao Surapaneni of Google Cloud.

“We have always believed in ensuring that advancements in AI serve everyone,” adds Theo Chu of Anthropic. “When I developed MCP, I recognized that one key strategy to avoid fragmentation and vendor lock-in—which hampers the advancement of other technologies—was to make it open-source.”

Chu asserts that MCP is integrated across major platforms, including Microsoft, OpenAI, and Google. “The success of MCP will stem from its ability to expand choices rather than restrict them,” she notes. “The collective value of the ecosystem is increased for everyone.”

The W3C Group is eager to collaborate with all stakeholders to establish technical standards industry-wide, but no specific timeline has been set. “Ultimately, our focus isn’t on the triumph or failure of any one protocol but rather on the holistic growth of the agent ecosystem.”


Source: www.newscientist.com

My Sister’s Death Led Me to Uncover Her Search History and Online Life

Adele Zeynep Walton sensed something was off when she emerged from a caravan in the New Forest at 8am, camping with her boyfriend. Initially frustrated by the early start, she quickly realized something was wrong when she saw her mother’s car; approaching it, she found her mother “hysterical.” “Right away,” she recalls, “I thought, ‘That’s Amy.’”

Amy, Walton’s younger sister, was 21 and had been struggling with mental health issues for several months. She had a passion for music technology and art, with her stunning self-portraits adorning the family home in Southampton. A big fan of Pharrell Williams, she was once invited to join him on stage at a concert. However, as her mental health declined, she became increasingly unreachable. “For two months, I had no idea where she was or what she was doing,” Walton says.

That October morning in 2022, Walton uncovered a devastating truth. Amy was found dead in a hotel room in Slough, Berkshire, presumed to have taken her own life. In the following days, Walton and her family would begin to understand Amy’s path—a journey facilitated by a complex web of online connections.




She loved music and art… some of Amy’s self-portraits in her family home. Photo: Peter Fluid/Guardian

Walton, a 25-year-old journalist, pieced together that Amy had engaged with a suicide-promotion forum that the Guardian has opted not to name. The site has been linked to at least 50 deaths in the UK and is currently under investigation by Ofcom, the regulator under the online safety law. Police investigating Amy’s death revealed that on this forum Amy learned how to obtain the substance that ended her life and met the man who flew to Heathrow to accompany her at the end. (He was initially charged with assisting suicide, but no further action was taken.)

Sitting in the garden of her parents’ house in Southampton, Walton describes how she came to write about the events that transpired. Her book, Logoff: Human Costs in the Digital World, is partly a tribute to her sister and partly an exploration of everyday web browsing and a digital world that can perpetuate harm.

“I thought: I need to dedicate myself to uncovering this. Why is the public unaware of these ongoing harms? Because they are constant.” She references Vlad Nikolin-Caisley from Southampton; earlier this month, a woman was arrested on suspicion of aiding his suicide.

With an inquest into Amy’s death due in June, Walton hopes that online factors will be included in the investigation and that “online harm” will be acknowledged as a cause or contributing factor in her sister’s death.

This phrase has become familiar to her. “Until I lost Amy, I didn’t understand what ‘online harm’ meant,” she reflects. She first heard the term from Ian Russell, Molly Russell’s father and a campaigner for online safety. Molly was 14 when she took her own life after being exposed to images and videos of self-harm; in a landmark finding, the coroner stated that online activity had contributed to her death “in a more than minimal way.” Walton hopes a similar perspective will be taken in her sister’s case, believing that calling it “suicide” alone fails to account for the influence of the digital world, places unfair blame on Amy, and leaves that world unregulated.




“We can become vulnerable at any time in our lives”… Amy’s photo. Photo: Peter Fluid/Guardian

Walton initially called her sister’s death a “suicide,” but she now feels the term no longer fits Amy’s situation. If suicide is understood as a voluntary act, how much choice does a person really have when influenced by an online community intent on encouraging it? And if individuals are genuinely free to choose, Walton asks, what of the algorithm that kept presenting Amy with self-harm content? “That’s where it becomes hard for me to label it a suicide,” Walton asserts. “My intuition tells me Amy was groomed and that her decision was not entirely hers.”

Her deep dive into these issues has turned Walton into an activist. She collaborates with Bereaved Families for Online Safety and serves as a young people’s ambassador for People vs Big Tech. “We must address these issues head-on,” she emphasizes. “If we don’t, it fosters the belief that online safety is solely a personal responsibility.”

Walton recounts how police indicated that the man who accompanied Amy had shared the hotel room with her for 11 days before her death. The room contained Amy’s notes, though Walton says they were so filled with pain as to be unreadable. The man later told police that he was “working.” She reveals that he called 999 after Amy ingested the toxic substance but declined to administer CPR. The substance has since been linked to 88 deaths in the UK and was purportedly sourced from Kenneth Law, a Canadian under investigation by the National Crime Agency.

A New York Times investigation revealed the forum was established by two men. Walton visited the forum herself, wanting to trace her sister’s final interactions. “Many posts essentially say, ‘Your family doesn’t care about you; you should do this.’ They phrase it, ‘When are you getting on the bus?'”

Walton views this forum as a form of radicalization towards extreme behaviors that individuals may never have contemplated. She is alarmed by the thought that the man with Amy may have been “living a twisted fantasy as an incel, where a vulnerable young woman seeks to end her life.”

Prior to Amy’s death, Walton held a neutral stance on technology. Now, she says, “The digital world is a distorted reflection of our offline world, amplifying its dangers.” In her book, her consideration of online harm victims spans a range of experiences, from Archie Battersbee, who had been on TikTok on the day he suffered a life-changing brain injury, to Meareg Amare Abrha, a university professor in Ethiopia who was killed after inflammatory posts about him circulated on Facebook. She also contemplates Amazon workers striving for better pay and conditions, alongside “Tony,” a 90-year-old neighbor who faced digital exclusion yet taught Walton how to use smartphones.

“For too long, the facade of technology has been equated with progress and innovation, which is a notion I challenge in my book,” she asserts. She names public figures like Zuckerberg, Cook, Pichai, Bezos, and Musk, asking “Where are the engineers?” and stressing how interconnected these networks of power are.




“The campaign allows survivors to regain control”… Amy’s bedroom in her family home. Photo: Peter Fluid/Guardian

Yet, Walton sometimes describes her experience as akin to being the digital equivalent of climate scientists from the 1970s. She acknowledges that her relationship with technology is complex, much like Amy’s. Her cherished memories of playing together revolve around their family computer in their parents’ bedroom.

“Chadwick and the Despicable Egg Thief – there’s video of us playing at 3 years old. We’ve played Color Games repeatedly. I’ve been taking photos with a ‘Digicam’ since I was 8, not to mention Xbox, Nintendo, computers—all just for fun!”

In a way, Walton describes her existence as a “double life.” Her book critically examines her own habits. While writing it, she lived in tracksuits, yet none of her Instagram posts reveal this; she uses the app to limit her screen time and shares TikToks about “logoff.” Video calls have also allowed her family, many of whom live in Türkiye, to “grieve together” after her sister’s passing.

Promoting her book has made it tough to detach from screens. “I feel like a hypocrite!” she admits. “My screen time this week is nine and a half hours.” A day? “I don’t like it,” she replies. “I typically average six hours.”

Ultimately, she doesn’t aim for perfection, stating, “I’m in control of it all, guys.”


In her book, Walton writes, “The campaign allows survivors to reclaim the control that was taken from them,” a sentiment that sits oddly with how exhausting the process sounds. “Did I say that?” she asks, surprised. “But if I hadn’t engaged in this, where would that anger go? It would consume me and make me unwell.”

She has also engaged local MPs (first Royston Smith, then Darren Paffey) and Secretary of State Peter Kyle to seek answers about what happened to Amy. “When we discuss online safety, it’s often framed in terms of protecting children. While that’s crucial, I also represent Amy; it’s about all of us. We can become vulnerable at any stage in our lives. If we focus solely on children’s safety, we turn 18 and still don’t know how to navigate a healthy digital life,” she explains.

“I feel it’s my duty to Amy since I wish I could have shielded her.” Her eyes glisten with unshed tears.

Balancing her grief with activism has proven challenging. “Some days I genuinely can’t handle it, or I just need a day in bed, as my body struggles to keep pace with all the emotional weight.”

“But this is my mission. Those in power only act if they feel the weight of this pain. If Mark Zuckerberg experienced the loss of a child due to online harm, perhaps he would finally understand, ‘Oh my God, I need to pay attention.'”


Logoff: Human Costs in the Digital World by Adele Zeynep Walton is published by Trapeze on June 5th (£20). To support the Guardian, consider ordering a copy at Guardianbookshop.com. Shipping fees may apply.


In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or by email at jo@samaritans.org or jo@samaritans.ie. In the US, call or text the National Suicide Prevention Lifeline at 988, chat at 988lifeline.org, or text HOME to 741741 to reach a crisis counselor. In Australia, the crisis support service Lifeline can be reached at 13 11 14. Additional international helplines are available at befrienders.org.


Source: www.theguardian.com

China’s Unexpected Surge in Regional Internet Censorship: A Research Overview

Authorities in China seem to be rolling out a more stringent version of the internet censorship system in Henan province, imposing tighter controls over information access for its tens of millions of residents compared to others in the country.

A research paper published by the Great Firewall Report this month indicates that internet users in Henan—one of China’s most densely populated provinces—were blocked from accessing five times as many websites from November 2023 to March 2025 compared to the national average.

“Our findings highlight striking instances of censorship emerging in the region,” stated the researchers, including authors from the University of Massachusetts Amherst and Stanford University.

China has established the most advanced and extensive internet censorship system globally. Users are barred from accessing a majority of Western news sites and social media platforms, which includes popular services provided by Google, Wikipedia, and Meta.

Under the “Great Firewall,” online content is scrutinized and censored by a combination of governmental bodies and private companies that adhere to regulations requiring removal of content deemed “sensitive.” This often involves topics regarding historical or current events that conflict with the official narrative of the Chinese Communist Party.

Researchers began their investigation after residents in Henan reported that many sites accessible elsewhere in China were unavailable in their province. They discovered millions of domains that were not blocked by the national firewall yet were inaccessible to Henan users.

By renting a server from a cloud provider inside Henan, the authors could monitor internet reachability from within the province. They tested the top 1 million domains daily from November 2023 to March 2025, revealing a significant rise in blocking during 2024. The results indicated that Henan’s firewall blocked around 4.2 million domains over the survey period, more than five times the roughly 741,500 domains blocked by China’s national censorship apparatus.
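The core measurement idea, comparing reachability from a regional vantage point against a national baseline, can be illustrated with a toy sketch. The domains and results below are purely illustrative, and the actual study used a far more careful methodology (repeated daily tests, control vantage points, filtering of transient failures).

```python
import socket

# Toy sketch of a regional-censorship reachability comparison. A probe from
# inside the region is compared against a national-baseline probe; domains
# unreachable only regionally are candidates for province-level blocking.

def probe(domain, port=443, timeout=3.0):
    """Return True if a TCP connection to the domain succeeds."""
    try:
        with socket.create_connection((domain, port), timeout=timeout):
            return True
    except OSError:
        return False

def blocked_only_regionally(regional_results, national_results):
    """Domains unreachable from the regional vantage but fine nationally."""
    return sorted(d for d in regional_results
                  if not regional_results[d] and national_results.get(d, False))

# Illustrative measurements standing in for two vantage points' probe results.
henan = {"example.com": False, "example.org": True, "example.net": False}
national = {"example.com": True, "example.org": True, "example.net": False}

print(blocked_only_regionally(henan, national))
```

Here `example.net` is excluded because it fails from both vantage points, mirroring how the researchers separated Henan-specific blocks from national-firewall blocks.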

The domains blocked specifically in Henan were predominantly business-related websites. Given recent financial protests in the province, the researchers theorize that the increased information control may stem from official concern over economic unrest.

In 2022, thousands in Henan participated in protests after being denied access to their bank accounts. The situation escalated when demonstrators found their mobile health codes—essential for pandemic management—turned red, restricting their movement. Subsequent to this, five staff members faced penalties for misusing health regulations to quash the protests.

Other regions of China have also seen heightened internet restrictions. For example, after a deadly ethnic riot in July 2009, the government imposed a ten-month internet blackout in Xinjiang, a Uyghur minority region in Western China. Thereafter, internet usage in Xinjiang has been monitored much more rigorously than in other areas, with Tibet also facing strict online controls.

The rise of a regional censorship regime in Henan is notable as it is not typically identified as a hotspot for such measures by Chinese authorities.

Researchers have not been able to ascertain whether the intensified controls were imposed by the local Henan government or the central government in Beijing.


The swift advancements in Chinese AI technologies have proven beneficial for both censorship enforcement and evasion efforts. Recently, China’s Ministry of Public Security (MPS) announced new monitoring tools enabling surveillance of users on virtual private networks (VPNs), designed to bypass internet restrictions. The MPS Institute has also introduced tools claiming to monitor accounts on Telegram, reportedly processing over 30 billion messages.

Minshu Wu, the lead author of the Henan study, publishes under a pseudonym to safeguard their identity. Conversely, AI technologies can also be used to develop more sophisticated and adaptive censorship and monitoring tools.

The Henan Cyberspace Issues Committee has not responded to requests for comment.

Additional contributions by Lilian Yang

Source: www.theguardian.com

Amazon Challenges Musk’s Starlink with Launch of First Internet Satellite

Amazon launched the first 27 satellites of its Kuiper broadband internet constellation from Florida on Monday, marking the beginning of a significant rollout of a space-based internet network to rival SpaceX’s Starlink.

These satellites are the first of a planned 3,236 to be deployed in low Earth orbit under Project Kuiper. Unveiled in 2019, the billion-dollar initiative aims to beam broadband internet globally to consumers, businesses, and government entities, a market in which SpaceX’s well-established Starlink is the dominant competitor.

The batch of 27 satellites lifted off at 7 PM EDT from Cape Canaveral Space Force Station aboard an Atlas V rocket from United Launch Alliance, the joint venture of Boeing and Lockheed Martin. An initial launch attempt on April 9th was postponed due to bad weather.

Project Kuiper represents Amazon’s largest venture into the broadband sector, entering the fray against Starlink and established telecom providers like AT&T and T-Mobile. The company aims to enhance connectivity in rural areas where access is limited or absent.

The deployment of the first operational satellites faced delays exceeding a year, with Amazon initially targeting early 2024 for its first batch. The Federal Communications Commission requires Amazon to launch 1,618 satellites by mid-2026, a deadline the company will likely need to have extended.

Following the launch, Amazon anticipates publicly confirming initial contact with the satellites from its Mission Operations Center in Redmond, Washington, within hours or days. If successful, the company expects to commence customer service later this year.

According to ULA CEO Tory Bruno, five more Kuiper missions could launch this year. Amazon indicated in its 2020 FCC filing that it could start service once 578 satellites are in orbit, with coverage beginning at northern and southern latitudes and gradually extending toward the equator as more satellites are deployed.

As an ambitious initiative in a market primarily dominated by SpaceX, Project Kuiper reflects Amazon’s extensive experience in consumer products and established cloud computing services, positioning itself as a competitor to Starlink.

In 2023, Amazon successfully launched two prototype satellites, paving the way for further developments. The program had maintained a lower profile until unveiling its initial Kuiper launch plans earlier this month.

SpaceX enjoys a unique advantage, serving as both a satellite operator and launch provider with its reusable Falcon 9 rockets, having placed over 8,000 Starlink satellites into orbit since 2019. Monday marked the 250th dedicated Starlink launch, with a rapid deployment schedule of at least one mission per week to enhance network bandwidth and replace outdated satellites.

This accelerated pace has led to SpaceX acquiring over 5 million internet users across 125 countries, boosting the global satellite communications market while supporting military and intelligence operations through Starlink’s advanced capabilities.

Amazon’s executive chair, Jeff Bezos, expressed optimism regarding Kuiper’s competitive potential against Starlink, noting to Reuters in a January interview that there is “an insatiable demand.”


“There’s a lot of room for winners there. Starlink expects it will continue to succeed, and Kuiper expects it will succeed,” Bezos stated.

“It will be primarily a commercial system, but these LEO constellations have defensive applications as well,” he added, referring to low Earth orbit.

In 2023, Amazon unveiled its Kuiper consumer terminals: a compact antenna the size of an LP record that connects with Kuiper satellites overhead, plus a smaller terminal comparable in size to a Kindle e-reader. The company aims to produce tens of millions of the devices at a cost below $400 each.

In 2022, Amazon secured 83 rocket launches from ULA, France’s Arianespace, and Bezos’s Blue Origin.

Source: www.theguardian.com

Amazon Unveils Kuiper Internet Satellites: Key Insights You Need to Know

The competition in space between billionaires Jeff Bezos and Elon Musk is poised to expand into satellite internet.

Originally launched as an online bookstore three decades ago, Amazon has evolved into a merchandising powerhouse, owning the James Bond franchise and retailing electronics like the Echo smart speaker, along with being a leading provider of cloud computing services.

Thus, it’s no surprise that Amazon is rolling out the first batch of thousands of Project Kuiper satellites, designed to beam internet connectivity around the world. The market for high-speed internet from space is largely dominated by Elon Musk’s SpaceX, whose similar Starlink service boasts a vast fleet of satellites, launches regularly, and serves millions globally.

The initial attempt to launch a satellite on April 9 was postponed due to unfavorable weather conditions at the launch site. The company is set to make another attempt this coming Monday.

The first 27 Project Kuiper satellites are scheduled for launch on Monday from Cape Canaveral Space Force Station in Florida, between 7 PM and 9 PM Eastern Time. They will be lifted aboard the Atlas V rocket, developed by the United Launch Alliance—a collaboration between Boeing and Lockheed Martin.

ULA plans to provide live coverage starting at 6:35 PM; the company reports a 70% chance of an on-time launch.

The rocket will place the Kuiper satellites into a circular orbit approximately 280 miles above Earth. The satellites’ propulsion systems will gradually elevate them to an orbit of 393 miles.

Project Kuiper comprises a network of internet satellites designed to deliver high-speed data connections to nearly every location on Earth. To achieve this, thousands of satellites are necessary, with Amazon aiming to deploy over 3,200 within the next few years.

The project competes with SpaceX’s Starlink, which primarily caters to residential customers.

Kuiper aims to target remote areas while also integrating with Amazon Web Services, the cloud computing arm prized by large enterprises and governments worldwide. That could make it particularly appealing to businesses that process data such as satellite imagery and weather forecasts and need to move large volumes of it over the internet.

Ground stations will link the Kuiper satellites to the service infrastructure, allowing businesses to interact with their own remote devices. For instance, Amazon indicates that energy firms could leverage Kuiper to monitor and manage remote wind farms and offshore drilling operations.

In October 2023, two prototype Kuiper satellites were launched for technology testing. Amazon stated that the tests were successful, but these prototypes were not intended for long-term operational constellations; after seven months, they re-entered the atmosphere. The company noted that they have since refined the design of all systems and subsystems.

“There’s a significant difference between launching two satellites and launching 3,000 satellites,” remarked Rajeev Badyal, an Amazon executive overseeing Kuiper, in a promotional video ahead of the launch.

Amazon informed the Federal Communications Commission in 2020 that the service would commence after the deployment of the initial 578 satellites. The company anticipates that customers will be able to access the internet later this year.

While a fully operational constellation requires thousands of satellites, it is feasible for the company to serve certain areas with fewer satellites initially, expanding to broader global coverage later.

The FCC’s approval for the constellation stipulates that at least half of the satellites must be launched by July 30, 2026. Industry experts suggest that if significant progress is shown by that deadline, the company could be granted an extension.

Launching a satellite also relies on the timely availability of rockets, which can present challenges if there aren’t enough launches lined up. Additionally, Amazon must construct numerous ground stations to relay signals to users.

Source: www.nytimes.com

Critics accuse Ofcom of putting high-tech companies’ interests ahead of online child safety

The communications watchdog has been accused of siding with big tech over the safety of under-18s after England’s children’s commissioner criticized its new measures to address online harm. Rachel de Souza warned Ofcom last year that its proposals to protect children under online safety laws were inadequate. She expressed disappointment that the new code of practice published by the watchdog ignored her concerns, prioritizing the business interests of technology companies over child safety.

De Souza, who advocates for children’s rights, noted that more than a million young people had shared their concerns with her, describing the online world as a significant worry. She emphasized the need for stronger protection measures and criticized the current code of practice for failing to strengthen them.

Some of the measures proposed by Ofcom include implementing effective age checks for social media platforms, filtering harmful content through algorithms, swiftly removing dangerous material, and providing children with an easy way to report inappropriate content. Sites and apps covered by the code must adhere to these changes by July 25th or face fines for non-compliance.

Critics, including the Molly Rose Foundation and online safety campaigner Beeban Kidron, argue that the measures are too cautious and lack specific harm-reduction targets. However, Ofcom defended its stance, stating that the rules aim to create a safer online environment for children in the UK.

The Duke and Duchess of Sussex have also advocated for stricter online protections for children, calling for measures to reduce harmful content on social media platforms. Technology Secretary Peter Kyle is considering implementing a social media curfew for children to address the negative impacts of excessive screen time.

Overall, the new code of practice aims to protect children from harmful online content, with stringent measures in place for platforms to ensure a safer online experience. Failure to comply with these regulations could result in significant fines or even legal action against high-tech companies and their executives.

Source: www.theguardian.com

Ofcom introduces new regulations for tech companies to ensure online safety for children

As of July, social media and other online platforms must block harmful content for children or face severe fines. The Online Safety Act requires tech companies to implement these measures by July 25th or, in extreme cases, risk being shut down.

The Communications Watchdog has issued over 40 measures covering various websites and apps used by children, from social media to games. Services deemed “high-risk” must implement effective age checks and algorithms to protect users under 18 from harmful content. Platforms also need to promptly remove dangerous content and provide children with an easy way to report inappropriate material.

Ofcom CEO Melanie Dawes described these changes as a “reset” for children online, warning that businesses failing to comply risk consequences. The new Ofcom code aims to create a safer online environment, with stricter controls on harmful content and age verification measures.

Additionally, there is discussion about implementing a social media curfew for children, following concerns about the impact of online platforms on young users. Efforts are being made to safeguard children from exposure to harmful content, including violence, hate speech, and online bullying.


Online safety advocate Ian Russell, who tragically lost his daughter to online harm, believes that the new code places too much emphasis on tech companies’ interests rather than safeguarding children. His charity, the Molly Rose Foundation, argues that more needs to be done to protect young people from harmful online content and challenges.

Source: www.theguardian.com

Quantum data sent securely through conventional internet cables

A secure quantum internet could be on the horizon

vs148/shutterstock

Another step toward the quantum internet has been taken, and no special communications equipment was required. Two German data centres have exchanged quantum-secured information at room temperature using existing telecommunications fibres. That stands in contrast to most quantum communication, which often requires cooling to very low temperatures to protect quantum particles from environmental disturbances.

The quantum internet, which allows extremely secure exchange of information encoded into quantum particles of light known as photons, is rapidly expanding out of the lab. In March, a microsatellite enabled quantum links between ground stations in China and South Africa. A few weeks ago, the first operating system for quantum communication networks was announced.

Now, Mirko Pittaluga of Toshiba Europe Limited and his colleagues have sent quantum information through optical fibres between two facilities approximately 250 kilometres apart, in Kehl and Frankfurt, Germany. The information passed through a third station between them, just over 150 kilometres from Frankfurt.

Photons can be lost or corrupted when crossing long distances through fibre-optic cables, so large-scale versions of the quantum internet will require “quantum repeaters” to reduce these losses. In this setup, the midway station played a similar role, allowing the network to outperform the simpler point-to-point connections tested previously.
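
The scale of those losses is what makes a midway station so valuable. As a rough back-of-the-envelope sketch, assuming a standard telecom-fibre attenuation of about 0.2 dB per kilometre (a typical textbook figure, not one reported for this experiment), halving the distance each photon must travel improves its survival odds by orders of magnitude:

```python
# Rough sketch: why a midpoint station helps over long fibre links.
# Assumes ~0.2 dB/km attenuation at 1550 nm, an illustrative figure
# rather than one taken from the experiment described above.

ATTENUATION_DB_PER_KM = 0.2

def photon_survival(distance_km: float) -> float:
    """Probability that a single photon survives a fibre span."""
    loss_db = ATTENUATION_DB_PER_KM * distance_km
    return 10 ** (-loss_db / 10)

# Direct 250 km link: a photon must cross the full span.
direct = photon_survival(250)

# With a midpoint station, each endpoint's photon crosses only ~125 km.
per_half = photon_survival(125)

print(f"direct 250 km survival: {direct:.2e}")    # ~1e-5
print(f"per 125 km half:        {per_half:.2e}")  # ~3.2e-3
```

The exponential dependence on distance is why connections mediated by a middle node, and eventually true quantum repeaters, are considered essential for a continent-scale quantum internet.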

In a notable improvement on previous quantum networks, the team used existing fibres and devices that could be easily slotted into the racks that already house conventional communication equipment. This strengthens the case that the quantum internet could ultimately become a plug-and-play operation.

The researchers also used photon detectors that cost much less than those used in previous experiments, some of which spanned hundreds of kilometres. Using these detectors reduces both the cost and the energy requirements of the new network, says Raja Yehea at the Institute of Photonic Sciences in Spain.

Prem Kumar at Northwestern University in Illinois says that running these kinds of quantum communication protocols on commercial equipment highlights how quantum networks are approaching practicality. “Systems engineers can look at this and see that it works,” Kumar says. However, he says that to be completely practical, networks will need to exchange information faster.

Medi Namaji of quantum communication start-up Qunnect in New York says this approach could be beneficial for future networks of quantum computers or quantum sensors, but it is not as efficient as one involving true quantum repeaters.


Source: www.newscientist.com

Parent warns about internet safety after son groomed on Roblox: ‘A man approached him’

David, a 46-year-old father from Calgary, Canada, saw no problems at first when his 10-year-old son started playing on Roblox, the user-generated gaming and virtual environment platform that has exploded in popularity in recent years, especially among younger gamers.

“We thought it was a way for him to maintain a level of social interaction during the community lockdown,” David said. He assumed his son would use the platform’s chat feature to speak to friends he personally knew.

After a while, the boy’s parents found him talking to someone in his room in the middle of the night.

“We discovered that a man from India had approached him on Roblox and coached him to bypass our internet security controls,” David said. “This person persuaded our son to take compromising nude pictures and videos and send them via Google Mini.

“It was tough to get to the root of why my son did it. I think he was lonely. He thought this was a real friend. I think he was given gifts on Roblox that made him feel special. It was truly every parent’s worst nightmare.”

David was among parents from all over the world who shared with the Guardian that their primary-school-aged children had been seriously affected or harmed by experiences on Roblox. Many corroborated reports from last year alleging that Roblox exposed children to grooming, pornography, violent content and abusive speech.

Some parents said Roblox was a creative outlet for their children that brought them joy or improved skills such as communication and spelling, but the majority of parents who got in touch expressed serious concern. Their concerns were primarily about the extraordinary levels of addiction they observed in their children, but also about weak parental controls, grooming, emotionally disturbing messages, bullying and avatars in Nazi uniforms, as well as examples of traumatic content in games that children could access and adults talking inappropriately to children on the platform.

“Deeply disturbing” research by digital behaviour experts revealed that a 5-year-old was able to communicate with adults while playing games on the platform.

In response, Roblox acknowledged that children playing on the platform could be exposed to harmful content and “bad actors”. This is an issue the company says it is working hard to fix, but one that requires industry-wide collaboration and government intervention. The company said it had “deep sympathy”.

The newly announced additional safety tools, aimed at giving parents greater flexibility to manage their children’s activity on the site, have failed to convince many of the parents the Guardian spoke to.

“I don’t think the changes will address my concerns,” said Emily, a mother from Hemel Hempstead.

“The new features are useful, but they don’t stop children from accessing inappropriate or scary content. Developers are allowed to choose an age rating for the games they create, and these may not always be appropriate or accurate.”

She said her 7-year-old daughter had struggled to sleep after a Roblox game took her to a room where an avatar introduced as “your dad” shot her.

Despite Roblox claiming to have introduced new, easy-to-use “remote management” parental controls, parents found the parental control settings extremely difficult to navigate and said it took several hours to regularly review their child’s activity. It was also impossible to tell who many of the people behind the usernames really were.

“Roblox monitors the type of language used, such as profanity, but there is no real way of policing players’ ages.”

The company highlighted that last year it made it the default that under-13s could no longer send direct messages to others on Roblox outside of games or experiences.

However, Roblox admitted that it struggles to verify users’ ages, saying that “age verification for users under 13 is still a challenge for the wider industry”.

Nelly*, a Dublin mother in her 40s, said she had just finished a course of play therapy to help her 9-year-old daughter process sexual content she had seen on Roblox, which had caused a panic attack.

“I thought it was okay for her to play,” she said. “I didn’t allow her to be friends with strangers, and I thought this would be enough, but it wasn’t.

“There was an area she went to where people were wearing underwear, and someone went in and lay on top of her.”

Many parents felt that Roblox was exploiting their children’s “underdeveloped impulse control”. As one father put it, the platform constantly nudged them to gamble and to stay online, leading many children to lose interest in other activities in the real world.

Jenna, from Birmingham, said that two months after her children began playing Roblox, their “whole life” had been taken over by the platform, echoing the accounts of scores of other parents.

“I feel like I’m living with two addicts,” she said. “If they’re not playing, they want to watch videos about it… When they’re told to get off, it’s like you’ve cut them off from their final fix: screaming, arguments, sometimes pure rage.”

Peter, 51, a London artist and father of three boys, said his 14-year-old son became so engrossed in Roblox and his devices that he turned violent, breaking windows with his fist when the game was switched off.

“The people who run Roblox don’t give a shit that parents can’t control the game. There’s nothing we didn’t try. We’re in therapy now,” he said.

Roblox’s CEO has advised parents to keep children away from the platform if they feel worried. Maria, a mother of three from Berkshire, felt her children would be socially excluded if they were offline, which makes that difficult for parents. She was among many who pointed out that the platform’s paid-for extras, such as higher game levels and personalisation features, have become status symbols among children.

In a statement, Roblox said: “We deeply sympathise with parents who described their children’s negative experiences on Roblox, which is not what we strive for and does not reflect the online civic space we want to build for everyone.

“Tens of millions of people have positive, enriching and safe experiences on Roblox every day, in a supportive environment that encourages connection with friends, learning and the development of important STEM skills.”

*Name changed

Source: www.theguardian.com

Introducing Amazon’s Groundbreaking Project: Kuiper Internet Satellites

The billionaire battle in space between Jeff Bezos and Elon Musk has entered a new arena, the satellite internet.

Amazon, which started as an online bookstore 30 years ago, is now a merchandising behemoth, the owner of the James Bond franchise, a seller of electronic gadgets like the Echo smart speaker and one of the most powerful providers of cloud computing.

So it is not surprising that Amazon is launching the first few of thousands of satellites, known as Project Kuiper, to offer people another option for staying connected in the modern world. The market for bringing high-speed internet from orbit to the ground is currently dominated by Elon Musk’s rocket company SpaceX, which operates a similar service, Starlink. Starlink has thousands of satellites in orbit, with more launching almost every week, and already serves millions of customers around the world.

The first 27 Project Kuiper satellites are scheduled to lift off from Cape Canaveral Space Force Station in Florida at 7 p.m. Eastern time on Wednesday.

They will fly on an Atlas V, a rocket built by United Launch Alliance, a joint venture of Boeing and Lockheed Martin. ULA plans to provide live coverage starting at 6:35 p.m.

The forecast predicts only a 20 percent chance that winds and showers from coastal storms will cause problems, and there is a two-hour window, beginning when propellant is loaded onto the rocket, during which the launch can occur.

The rocket will deploy the Kuiper satellites into a circular orbit 280 miles above the surface. The satellites’ propulsion systems will then gradually raise their orbits to an altitude of 393 miles.
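
For a sense of scale, that orbit raise is a modest manoeuvre. A rough estimate, using standard textbook values for Earth’s gravitational parameter and radius rather than any figures from Amazon, can be made with a two-burn Hohmann transfer:

```python
import math

# Rough Hohmann-transfer estimate of the delta-v needed to raise a
# circular orbit from ~280 miles to ~393 miles altitude.
# Constants below are standard illustrative values, not Amazon's figures.
MU = 398_600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0    # mean Earth radius, km
KM_PER_MILE = 1.609344

def circular_speed(r_km: float) -> float:
    """Orbital speed on a circular orbit of radius r_km."""
    return math.sqrt(MU / r_km)

def hohmann_delta_v(alt1_miles: float, alt2_miles: float) -> float:
    """Total delta-v (km/s) to move between two circular orbits."""
    r1 = R_EARTH + alt1_miles * KM_PER_MILE
    r2 = R_EARTH + alt2_miles * KM_PER_MILE
    a = (r1 + r2) / 2  # semi-major axis of the transfer ellipse
    # vis-viva equation at each end of the transfer ellipse
    v_transfer_1 = math.sqrt(MU * (2 / r1 - 1 / a))
    v_transfer_2 = math.sqrt(MU * (2 / r2 - 1 / a))
    dv1 = v_transfer_1 - circular_speed(r1)  # burn to enter the transfer
    dv2 = circular_speed(r2) - v_transfer_2  # burn to circularize
    return dv1 + dv2

print(f"delta-v: {hohmann_delta_v(280, 393) * 1000:.0f} m/s")
```

The result is on the order of 100 metres per second, which is why a small onboard propulsion system is enough for the climb.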

Project Kuiper is a constellation of internet satellites intended to provide high-speed data connections to almost every point on Earth. To succeed, it will need thousands of satellites; Amazon’s goal is to operate more than 3,200 over the next few years.

The company will compete with SpaceX’s Starlink, which was originally sold primarily to residential customers.

Kuiper aims to serve that same market, especially remote locations, but will also be integrated with Amazon Web Services, the company’s cloud computing product popular with large companies and governments around the world. That could make it more attractive to businesses that need to move large amounts of data across the internet or run computations on data such as satellite imagery and weather forecasts.

Ground stations connect the Kuiper satellites to Amazon’s web services infrastructure in a way that lets businesses communicate with their own remote devices. For example, Amazon suggests that energy companies could use Kuiper to monitor and control remote wind farms or offshore drilling platforms.

In October 2023, Amazon launched two prototype Kuiper satellites to test the technology, and said the tests were successful. The prototypes were not intended to be part of the operational constellation, and after seven months they were deorbited to burn up in the atmosphere. The company said it has since updated the design of every system and subsystem.

“There’s a huge difference between launching two satellites and launching 3,000 satellites,” said Rajeev Badyal, the Amazon executive who leads Kuiper, in a promotional video released before the launch.

Amazon told the Federal Communications Commission in 2020 that the service would begin once the first 578 satellites were deployed. The company says it expects customers to begin connecting to the internet later this year.

A fully functional constellation requires thousands of satellites, but the company can begin serving certain regions with far fewer in orbit before expanding toward global coverage.

The FCC’s approval of the constellation requires that at least half of the satellites be deployed by July 30, 2026. Industry analysts say that if Amazon shows significant progress by then, it is likely to get an extension.

Putting satellites into orbit also depends on rocket launches happening on schedule, which could be a problem if enough rockets are not available. Amazon also needs to build hundreds of ground stations to relay signals to users.

Source: www.nytimes.com

First trip to Casablanca without a phone or internet

According to my pathetic map, I should have been near the Royal Palace. But in Casablanca’s bustling Mers Sultan quarter, streetcars clanged past shoe stores and cafes, and nothing looked remotely royal. I tried one street, then another. Finally, I approached a pair of teenage girls in jeans and headscarves, downing Diet Cokes outside a snack bar.

“I’m looking for the palace,” I said in rudimentary French, pointing to my map. “It says it should be near here.”

One of the girls glanced at the wrinkled paper and, in a voice loaded with teenage incredulity, asked: “You don’t have a phone?”

No, I didn’t have a phone. Or rather, I did, but I wasn’t using it.

Except for buying a plane ticket, my plan was to explore Casablanca, a Moroccan city I’d never visited, without using the internet. That meant no online research, no GPS, no Uber, no Airbnb, no virtual dictionaries, and no mindless scrolling to avoid social awkwardness.

At a time when many of us feel an increasing need for a digital detox, I am deeply aware of how the internet, for all its benefits, has degraded travel. It has not only played an important role in overtourism; it has flattened the sense of discovery. By letting you peruse restaurant menus, visualize the sights and compile must-see lists, the internet tells you what you will experience before you arrive.

I could have used a guidebook, but that seemed against the spirit of the endeavor. After all, my main goal was to see if I could recover the thrill of exploration. And along the way I learned some retro travel lessons.

After landing at Casablanca’s Mohammed V Airport, my first order of business was to find a map. I approached a woman sitting at what I took to be an information desk. “Of course I don’t have a map,” she replied. “I have a phone.”

But she directed me toward the train to the city centre. When I arrived at the airy station, I realized how difficult being unplugged would be here: there was no “you are here” sign, and no place to stash my luggage while I got my bearings and found the way to the city centre.

Still without a map, I picked a direction and started walking. A palm-lined boulevard looked like a good bet, and soon I was amid shops and restaurants. Over a gate to what turned out to be the old medina, I saw a hand-drawn sign: Ryad 91.

I knew from previous trips to other Moroccan cities that “riad” or “ryad” means “inn.” Soon, Mohammed, a tall, glasses-wearing man, welcomed me into the cushion-bedecked lobby and didn’t seem offended when I asked to see the only remaining room, at 360 dirhams, or about $37. It was simple and clean, but claustrophobic, with its only window opening onto an interior courtyard. I took the room, deciding to look for something more spacious the next day.

In the meantime, I asked Mohammed for a map. “One minute,” he said, sitting down at his computer and printing one out from Google. About a dozen streets were named on it; the rest were a tangle of lines.

The good thing about ignorance is that it can turn anything into a discovery. And there was much that fascinated me along Casablanca’s winding alleys: an elegant minaret; a bakery pulling hot flatbread from an outdoor oven; splashes of vibrant street art on whitewashed walls.

My wanderings began outside the inn’s door. Keeping the harbor to my right, I meandered west through a noisy food market where vendors sold fat walnuts from carts. As I walked along the fortress built when Portugal ruled the harbor, I saw a huge structure. I asked some boys who were jumping into the sea from a rocky beach what it was. “C’est la plus grande mosquée du monde,” came the reply.

Had I really stumbled upon the largest mosque in the world? Alas, my informants were not entirely reliable. The Hassan II Mosque may have one of the world’s tallest minarets, but it is not itself the biggest. And, as the tour buses around the corner proved, it is Casablanca’s main attraction.

I understood why the boy had exaggerated. With a capacity of 25,000 people, the mosque is designed to inspire awe, and not only through its size. Every centimeter is covered in intricate craftsmanship, from plasterwork to mosaics to fretwork. At the attached museum, I learned that it took 12,000 artisans to complete it.

My walks brought more discoveries: downtown streets lined with Art Deco buildings; elegant modern Moroccan art at the Villa des Arts; and the Abderrahman Slaoui Museum, with its Berber jewels and colonial-era travel posters.

Traveling without expectations also lets you dwell more on ordinary life. I loved coming across a man in a square selling coffee from a small pot, then watching women scramble for an air fryer that had just gone on sale.

Casablanca wasn’t working hard for tourists. It was busy living its own life.

I found my second hotel on a street of villas draped in bougainvillea. The Hotel Doge (rooms from about 2,200 dirhams), once a private home, leans hard into its Jazz Age origins, featuring velvet-lined walls and at least one photo of Josephine Baker. Settling in amid the inlaid furniture and orange-blossom-scented soap, I tried not to wonder whether there was an even more exquisite Casablanca hotel I hadn’t found.

Unplugged travel means letting go of the fear of missing out. The internet can convince us that its best-of lists are objective truth, and that the places fewer travelers pass through aren’t worth settling into.

I had to fight that urge in the central market, where dozens of seafood stalls served fresh oysters and fish tagines. How to choose? Thanks to a tip from a local businessman, I settled on Nadia’s. The juicy grilled sardines, drizzled with charmoula sauce? They were the best I’d ever had.

The same went for the perfectly spiced chicken shawarma I sampled in the upscale Racine district, and the delicate gazelle-horn pastries at a bakery in the Gautier quarter.

That strategy did not work, however, in my quest for a sit-down restaurant serving traditional Moroccan food, since local diners tend to order dishes different from what they eat at home. So when I walked into Le Quistot and heard Castilian Spanish, British English and New Jersey accents across the tiled dining room, I didn’t have high hopes.

However, my couscous tfaya was fluffy, the vegetables were flavorful, and the caramelized onions and almonds added just the right amount of sweetness and crunch. When the chef and owner, Aziz Berada, said his couscous was the best in Casablanca, I believed him.

If so, it was just one of his talents. Before Aziz became a chef, he told me, he was a photographer for King Hassan II, the same monarch who ordered the construction of the imposing mosque. When the king died, Aziz decided it was time for a career change.

My conversation with Aziz, which would never have happened had I been buried in my phone while eating, made me want to see the palace where he had worked. On my last day, the Doge receptionist printed out yet another Google map.

That’s when I got lost. After getting no help from the soda-drinking teenagers, I wandered the blocks until I finally asked directions from an older man, who pointed to a far-off red flag: the palace.

The palace, it turned out, was not open to the public.

The internet would have made this clear. But as I grappled with the realization that I had spent hours reaching those inscrutable walls, I spied a street lined with bookstores. At least, I thought, I might find a decent map.

And I did. But the street also held shops selling hand-woven rugs and copper tea sets, courtyards filled with olive barrels, and a warren of whitewashed alleys that reminded me of Andalusia, where I eventually came across a small museum of Andalusian instruments.

Designed by the French in the 1920s and ’30s, the Habous neighborhood looked like a Moroccan stage set.

I learned this from a woman who introduced herself as Iman when I stopped for mint tea at the Imperial Cafe. As she sat near me, passersby greeted her so frequently that she seemed to be either a celebrity or the mayor. I asked if I could talk to her about the neighborhood.

“Of course, lover,” she said in perfect English. “I love Americans. You’re very spontaneous.”

Iman suggested moving our conversation to a nearby spot. I decided to overcome my skepticism, thinking I might get some local recommendations.

As we walked, Iman’s rapid-fire monologue left little space to ask about her favorite restaurants. I did learn, however, that she had once lived in the US, sold real estate, worked for a jewelry company and driven an Uber.

Finally, we arrived at a wall slightly less imposing than the palace’s. A guard led us through a door into a gorgeous building with green and blue geometric tiles, intricately plastered walls and courtyards dotted with orange trees. I still didn’t know where I was (I later learned it was the pasha’s former court and residence, now used for cultural events). And I was evidently a mystery to the staff, who included a stern-faced bureaucrat and a cleaning lady who greeted Iman effusively.

So who was Iman? A politician?

Source: www.nytimes.com

Investigation Launched into Online Suicide Forum Under UK Online Safety Act

The UK communications regulator has announced its first investigation under the new Online Safety Act, examining an online suicide forum.

Ofcom is investigating whether the site has violated the Online Safety Act by failing to take appropriate measures to protect users from illegal content.

The law requires tech platforms to tackle illegal material, such as content promoting suicide, or face fines of up to £18 million or 10% of global revenue. In extreme cases, Ofcom also has the power to block access to sites or apps in the UK.

Ofcom did not name the forum under investigation. It said it is focusing on whether the site has taken appropriate steps to protect users in the UK, whether it failed to complete the risk assessments required under the law, and whether it responded appropriately to requests for information.

“This is the first investigation open to individual online service providers under these new laws,” Ofcom said.

The BBC reported in 2023 that the forum, easily accessible to anyone on the open web, had been linked to at least 50 deaths in the UK, and that its tens of thousands of members held discussions that included methods of suicide.

Last month, duties came into effect under the law requiring the roughly 100,000 services within its scope, from small sites to large platforms such as X, Facebook and Google, to tackle 130 “priority offences” covering illegal content. Platforms must address such material as a priority, including by ensuring moderation systems are set up to deal with it.

“We have been clear that failure to comply with the new online safety duties, or to respond adequately to requests for information, may lead to enforcement action, and we will not hesitate to take prompt action where we suspect there is a serious breach,” Ofcom said.


In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or by emailing jo@samaritans.org or jo@samaritans.ie. In the US, you can reach crisis counsellors by calling or texting the 988 Suicide and Crisis Lifeline, or by chatting at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14.

Source: www.theguardian.com
