The evolving experience of young people on the internet. Photo: Linda Raymond/Getty Images
In 2025, numerous countries will implement new internet access restrictions aimed at protecting children from harmful content, with more expected to follow in 2026. However, do these initiatives genuinely safeguard children, or do they merely inconvenience adults?
The UK’s Online Safety Act (OSA), which took effect on July 25, mandates that websites prevent children from accessing pornography or content that promotes self-harm, violence, or dangerous activities. While intended to protect, the law has faced backlash due to its broad definition of “harmful content,” which resulted in many small websites closing down as they struggled to meet the regulatory requirements.
In Australia, a new policy prohibits those under 16 from using social media, even with parental consent, as part of the Online Safety Amendment (Social Media Minimum Age) Act 2024. This legislation, effective immediately, grants regulators the authority to impose fines up to A$50 million on companies that fail to prevent minors from accessing their platforms. The European Union is considering similar bans. Meanwhile, France has instituted a law requiring age verification for websites with pornographic material, facing protests from adult website operators.
Concerns surrounding the technology used for age verification are growing, with some sites relying on facial recognition tools that can be tricked by screenshots of video game characters. Moreover, VPNs allow users to masquerade as browsing from regions without strict age verification requirements. Since the OSA came into force, searches for VPNs have surged, with reports of as much as an 1,800% increase in daily sign-ups following the law’s implementation. The most prominent adult site saw a 77% decline in UK visitors in the aftermath of the OSA, as users changed their settings to appear to be located in countries where age verification isn’t enforced.
The Children’s Commissioner for England emphasized that these loopholes need to be addressed and has made recommendations for age verification measures to prevent children from using VPNs. Despite this, many argue that such responses address symptoms rather than the root of the problem. So, what is the appropriate course of action?
Andrew Coun, a former member of Meta and TikTok’s safety and moderation teams, opines that harmful content isn’t deliberately targeted at children. Instead, he argues that algorithms aim to maximize engagement, subsequently boosting ad revenue. This creates skepticism regarding the genuine willingness of tech companies to protect kids, as tighter restrictions could harm their profits.
“It’s exceedingly unlikely that they will prioritize compliance,” he remarked, noting the inherent conflict between their interests and public welfare. “Ultimately, profits are a primary concern, and they will likely fulfill only the minimum requirements to comply.”
Graham Murdoch, a researcher at Loughborough University, believes the surge in online safety regulations will likely yield disappointment, as policymaking typically lags behind the rapid advancements of technology firms. He advocates for the establishment of a national internet service complete with its own search engine and social platforms, guided by a public charter akin to that of the BBC.
“The Internet should be regarded as a public service because of the immense value it offers to everyday life,” Murdoch stated. “We stand at a pivotal moment; if decisive action isn’t taken soon, returning to our current trajectory will be impossible.”
India’s telecom ministry has ordered smartphone manufacturers to pre-install a state-owned cybersecurity application on all new devices and to ensure it cannot be removed, according to a government order, a directive likely to draw criticism from Apple and privacy advocates.
Citing rising incidents of cybercrime and hacking, India joins countries such as Russia in imposing rules intended to stop stolen mobile phones from being reused for fraud and to promote its government services application.
Apple has historically been at odds with telecom regulators regarding the development of government anti-spam mobile applications; however, manufacturers such as Samsung, Vivo, Oppo, and Xiaomi are obliged to comply with the recent mandate.
According to the order issued on November 28, established smartphone brands have 90 days to ensure that the government’s Sanchar Saathi application is pre-installed on new devices, with users unable to disable the app.
For phones already present in the supply chain, manufacturers are required to roll out app updates to the devices, as stated in an unpublished order sent privately to certain companies.
However, a technology law expert expressed concerns regarding this development.
“The government has effectively stripped user consent of its significance,” stated Mishi Chaudhary, an advocate for internet rights.
Privacy advocates have criticized a similar request made by Russia in August, which mandates the pre-installation of the state-backed Max messaging app on mobile devices.
With over 1.2 billion subscribers, India stands as one of the largest smartphone markets. Since its launch in January, the app has reportedly helped recover more than 700,000 lost phones, including 50,000 in October alone, according to government data.
The government asserts that the app is vital in addressing “serious risks” to communication cybersecurity posed by duplicate or spoofed IMEI numbers, which facilitate fraud and network exploitation.
Counterpoint Research anticipates that by mid-2025, 4.5% of the expected 735 million smartphones in India will operate on Apple’s iOS, while the remaining devices will run Android.
Although Apple preinstalls its own applications, its internal policies bar the installation of government or third-party applications prior to sale, according to a source familiar with the situation.
“Apple has a history of denying such governmental requests,” remarked Tarun Pathak, a research director at Counterpoint.
“A compromise is probable. Instead of mandating pre-installation, the two sides may negotiate and encourage users to install the application voluntarily.”
Apple, Google, Samsung, and Xiaomi did not respond to inquiries for comment. Likewise, India’s Ministry of Telecommunications has not issued a response.
The International Mobile Equipment Identity (IMEI), a unique identifier consisting of 14 to 17 digits for each mobile device, is predominantly used to revoke network access for phones reported as stolen.
The Sanchar Saathi application is principally developed to assist users in blocking and tracking lost or stolen smartphones across various networks via a centralized registry. It also aids in identifying and disconnecting unauthorized mobile connections.
Since its launch, the app has achieved over 5 million downloads, successfully blocked more than 3.7 million stolen or lost phones, and prevented over 30 million unauthorized connections.
The government claims that the software will contribute to mitigating cyber threats, facilitate the tracking and blocking of lost or stolen mobile phones, assist law enforcement in device tracking, and help curtail the entry of counterfeit products into illicit markets.
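For context, the standard 15-digit form of the IMEI is 14 identity digits plus a final Luhn check digit, which lets a registry like the one described above reject mistyped or crudely forged numbers before any database lookup. Below is a minimal sketch of that checksum in Python; it is a generic illustration of the IMEI format, not Sanchar Saathi’s actual implementation:

```python
def is_valid_imei(imei: str) -> bool:
    """Luhn check for a standard 15-digit IMEI: the 15th digit
    is a checksum computed over the first 14."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:      # double every second digit...
            d *= 2
            if d > 9:       # ...and fold two-digit results back down
                d -= 9
        total += d
    return total % 10 == 0

# A commonly cited example IMEI that passes the check:
print(is_valid_imei("490154203237518"))  # True
print(is_valid_imei("490154203237519"))  # False (check digit broken)
```

As the government’s complaint about duplicate or spoofed IMEIs suggests, a checksum alone proves nothing about a device’s legitimacy, which is why the app leans on a centralized registry rather than local validation.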
Meanwhile, TikTok’s business is thriving. Accounts filed with Companies House reveal that revenue from its combined UK and European operations reached $6.3 billion (£4.7 billion) in 2024, a 38% increase on the year before. Its operating loss narrowed from $1.4 billion in 2023 to $485 million.
A TikTok spokesperson said the company is “continuing the reorganization initiated last year” as it evolves its global operating model for trust and safety, concentrating the work in fewer locations globally to increase the effectiveness and speed of this critical function as technology advances.
Driverless taxis, which have disrupted industries in various US and Chinese cities, are now on their way to London.
As a cyclist, Londoner, and journalist who has closely observed AI developments, I find myself somewhat anxious. Yet, considering the frequent encounters with careless human drivers in London, part of me feels cautiously hopeful.
Ultimately, the question arises: is it preferable to navigate the roads among tired, distracted, and irate humans, or to coexist with potentially erratic AI?
The UK government has confirmed plans for companies like Uber to launch pilot programs featuring self-driving “taxi and bus-like” services in 2026. Following that, in the latter half of 2027, automated vehicle legislation is expected to take effect, establishing a formal legal framework for the industry. Crucially, this law places accountability for accidents on the companies behind the vehicles rather than the humans riding in them.
Officials argue that driverless vehicles could enhance road safety, given that human error is a factor in 88% of all traffic accidents. The statistics are staggering: London’s roads saw 130 fatalities last year, including 66 pedestrians and 10 cyclists. Globally, 1.2 million people die in traffic incidents annually.
As someone who cycles often in London, I have firsthand experience with the challenges posed by reckless driving. I’ve witnessed drivers engaging in a variety of distractions, from eating breakfast to watching movies. I have been rear-ended at red lights at least four times. While it is commonly said that AI lacks creativity, humans have certainly mastered the art of poor driving.
In contrast, AI isn’t swayed by distractions such as text messages, alcohol, or fatigue. With numerous sensors, machines lack blind spots and always check their surroundings before making a turn.
Admittedly, there have been alarming reports of autonomous vehicles failing to stop and causing harm to pedestrians. These incidents garner significant media attention. However, considering the numerous fatalities attributed to human drivers, the statistics of road deaths paint a less sensational picture. In the UK, more than four people die daily in traffic accidents.
The safety concerns surrounding autonomous vehicles are complex. While I believe that every road fatality is unacceptable, there exists a compelling argument that if AI can travel the same distance with fewer casualties, it shouldn’t be demonized in the pursuit of progress.
“I have doubts about whether self-driving cars can differentiate pedestrians from shadows”
Research indicates that driverless cars often outperform human-driven vehicles in terms of safety, although this advantage may not consistently hold in urban environments, particularly under poor lighting or during complex maneuvers.
These vehicles depend on technology companies to ensure their safety, raising questions about potential conflicts of interest relating to profit versus safety. We have already seen concerning suggestions to equip pedestrians with electronic sensors to enhance their visibility to these machines.
When it comes to cyclists, can tech companies ensure their vehicles maintain a 1.5-meter buffer when a robocar passes, or will they simply prioritize not hitting cyclists? The latter might streamline urban travel times, but could pose risks to vulnerable cyclists. Furthermore, to what extent will autonomous vehicles pause to allow pedestrians to fully cross the street, or will they encourage hurried crossings? These parameters can be tuned, and there are inevitable tensions between safety and travel efficiency.
Even if a company aims to act benevolently, AI systems are inherently unpredictable. Just as chatbots can suggest erroneous ingredients, self-driving cars cannot guarantee they won’t misinterpret a pedestrian as a shadow. It’s an unsettling truth.
Personally, I harbor reservations about AI operating vehicles in my vicinity, just as I do about human drivers. However, while human capabilities can improve with time and effort, AI has the potential for rapid advancement. The roll-out of automated taxis in London could provide invaluable data that enhances the safety of our roads. Ultimately, if given a choice, I would prefer an AI driver.
Nevertheless, the stark reality remains: a few tons of steel on four wheels—combined with high-tech systems—will never constitute a wholly safe or efficient urban transport solution. Self-driving taxis may mirror today’s human-operated models, ultimately not resolving London’s transport challenges.
Electric bikes and dedicated cycle lanes are environmentally friendly and often more efficient for city travel, while buses can accommodate multiple passengers, utilizing the space of two SUVs. However, such solutions may not yield substantial profits for big tech companies, will they?
Matt’s week
What I’m reading
How Music Works by Talking Heads frontman David Byrne.
What I’m seeing
The horror film Bring Her Back (watched, admittedly, partly through the fingers covering my eyes).
What I’m working on
Next spring, I plan to plant various cuttings in my garden to fill empty spaces.
Matt Sparkes is a technology reporter for New Scientist
Research among English children has revealed a rise in exposure to pornography following the implementation of UK regulations intended to safeguard them online, with children as young as six encountering it inadvertently.
Dame Rachel de Souza, the Children’s Commissioner for England, reported that the findings indicated an uptick in the number of young people encountering pornographic content before turning 18, even after the Online Safety Act came into effect.
Over a quarter (27%) admitted to having viewed porn online by the age of 11.
These results build on a similar survey carried out by the Children’s Commissioner in 2023, highlighting minimal progress despite newly instituted laws and commitments from government officials and tech companies.
She stated: “Violent pornography is readily accessible to children, often encountered accidentally via popular social media platforms, and has a profound impact on their behaviors and views.
“This report should signal a clear turning point. The fresh protections introduced in July by Ofcom, part of the Online Safety Act, present a genuine opportunity to prioritize child safety unequivocally in the online space.”
The findings stem from a nationally representative survey conducted in May with 1,010 children and young people aged 16-21, just prior to the implementation of Ofcom’s child safety codes in July.
The regulations set forth by Ofcom have brought significant changes designed to restrict access to pornographic websites for those under 18. Utilizing the same methodology and questions as in the 2023 survey ensures consistency:
A higher percentage of young people reported seeing porn before age 18 (70%) in 2025 compared to 2023 (64%).
More than a quarter (27%) acknowledged viewing porn online at age 11, with the average age of first exposure remaining at 13.
Vulnerable children, including those receiving free school lunches, children in social care, and those with special educational needs or disabilities, reported higher rates of exposure to online porn by age 11 compared to their peers.
Some 44% of respondents agreed with the statement: “Girls might say no at first, but then they could be persuaded to have sex.” Further analysis showed that 54% of girls and 41% of boys who had viewed porn online agreed with this sentiment, compared with 46% of girls and 30% of boys who hadn’t.
More respondents said they had encountered porn online accidentally than said they had actively sought it out (35%). The rate of accidental exposure rose by 21 percentage points compared with 2023 (59% vs 38%).
Social media and networking platforms accounted for 80% of the sources through which children primarily accessed porn, with X (formerly Twitter) the most common portal, ahead of dedicated porn sites.
The disparity between the number of children viewing porn on X versus dedicated porn sites has widened (45% vs. 35% in 2025 compared to 41% vs. 37% in 2023).
Most respondents reported witnessing portrayals of acts that are illegal under existing pornography legislation or could become illegal under the forthcoming Crime and Policing Bill.
Over half (58%) encountered pornographic content that depicted strangulation, with 44% observing sexual activity while individuals were asleep, and 36% witnessing instances where consent was not given or had been ignored.
Further scrutiny revealed that only a minority of children said they wanted to see violent or extreme content, indicating that it is being served to them rather than sought out.
The report highlights concerns that, even under current regulations, children may circumvent restrictions by utilizing virtual private networks (VPNs), which remain legal in the UK.
The report advocates for online porn to be held to the same standards as offline porn, prohibiting depictions of non-fatal strangulation. It also calls for the Department for Education to equip schools to effectively deliver new curricula on relationships, health, and sex education.
Recently, it was announced that traffic to the UK’s leading porn sites has drastically decreased following the strengthening of age verification measures. According to data analytics firm Similarweb, the popular adult site Pornhub lost more than 1 million visitors within just two weeks.
Pornhub and other major adult platforms introduced enhanced age verification checks on July 25, in line with online safety laws requiring them to make access to explicit material harder for under-18s.
Similarweb compared porn sites’ average daily user statistics from August 1 to 9 against their July average, revealing that Pornhub, the UK’s top adult content site, saw domestic traffic fall 47% from its level on July 24, the day before the new regulations came into effect.
A government spokesperson remarked, “Children are growing up immersed in a digital landscape bombarded with pornography and harmful content, which can have damaging effects on their lives. Online safety laws are addressing this issue.”
“To be clear: VPNs are legitimate tools for adults, and there are no intentions to ban them. However, platforms promoting loopholes like VPNs to children could face stringent enforcement and hefty fines. We mustn’t prioritize business interests over child safety.”
Social media platforms continue to disseminate content related to depression, suicide, and self-harm among teenagers, despite the introduction of new online safety regulations designed to safeguard children.
The Molly Rose Foundation created a fake account pretending to be a 15-year-old girl and interacted with posts concerning suicide, self-harm, and depression. This led to the algorithm promoting accounts filled with a “tsunami of harmful content on Instagram reels and TikTok pages,” as detailed in the charity’s analysis.
An alarming 97% of recommended videos viewed on Instagram reels and 96% on TikTok were found to be harmful. Furthermore, over half (55%) of TikTok’s harmful recommended posts included references to suicide and self-harm, while 16% contained protective references to users.
These harmful posts garnered substantial viewership. One particularly damaging video was liked over 1 million times on TikTok’s For You Page, and on Instagram reels, one in five harmful recommended videos received over 250,000 likes.
Andy Burrows, CEO of The Molly Rose Foundation, stated: “Persistent algorithms continue to bombard teenagers with dangerous levels of harmful content. This is occurring on a massive scale on the most popular platforms among young users.”
“In the two years since our last study, it is shocking that the magnitude of harm has not been adequately addressed, and that risks have been actively exacerbated on TikTok.
“The measures instituted by Ofcom to mitigate algorithmic harms are, at best, temporary solutions and are insufficient to prevent preventable damage. It is crucial for governments and regulators to take decisive action to implement stronger regulations that platforms cannot overlook.”
Researchers examining platform content from November 2024 to March 2025 found that while both platforms let teenagers give negative feedback on content, as required by Ofcom under the online safety law, the same function also allowed them to give positive feedback on the material.
The Foundation’s report, produced in conjunction with Bright Data, indicates that while the platforms have made it harder to find hazardous content by searching hashtags, they still amplify harmful material through personalized AI recommendation systems once a user engages with it. The report further observed that platforms often utilize overly broad definitions of harm.
This study provided evidence linking exposure to harmful online content with increased risks of suicide and self-harm.
Additionally, it was found that social media platforms profited from advertisements placed next to numerous harmful posts, including those from fashion and fast food brands popular among teenagers as well as UK universities.
Ofcom has begun implementing child safety codes under the online safety laws, aimed at “taming toxic algorithms.” The Molly Rose Foundation, which receives funding from Meta, expresses concern that the regulator’s proposed algorithmic fixes have been costed at a mere £80,000.
A spokesperson for Ofcom stated, “Changes are underway. Since this study was conducted, new measures have been introduced to enhance online safety for children. These will make a significant difference, helping to prevent exposure to the most harmful content, including materials related to suicide and self-harm.”
Technology Secretary Peter Kyle mentioned that 45 sites have been under investigation since the enactment of the online safety law. “Ofcom is also exploring ways to strengthen existing measures, such as employing proactive technologies to protect children from self-harm and recommending that platforms enhance their algorithmic safety,” he added.
A TikTok spokesperson commented: “TikTok accounts for teenagers come equipped with over 50 safety features and settings that allow for self-expression, discovery, and learning while ensuring safety. Parents can further customize content and privacy settings for their teens through family pairing.”
A Meta spokesperson stated: “We dispute the claims made in this report and its limited methodology.
“Millions of teenagers currently use Instagram’s teenage accounts, which offer built-in protections that limit who can contact them, the content they can see, and their time spent on Instagram. Our efforts to utilize automated technology continue in order to remove content that promotes suicide and self-harm.”
Angela Rayner has stated that Nigel Farage has “failed a generation of young women” with his plan to abolish online safety laws, claiming it could lead to an increase in “revenge porn.”
The Deputy Prime Minister’s remarks are the latest in a series of government criticisms of Farage, as Labour launches a barrage of attack ads targeting the Reform UK leader, including one featuring Farage alongside influencer Andrew Tate.
At a press conference last month, Reform UK leaders pledged to scrap the act, arguing that it pushes social media companies to restrict legitimate as well as harmful content, that it amounts to censorship, and that it leaves the UK resembling a “borderline dystopian state.”
In response, Science and Technology Secretary Peter Kyle accused Farage of siding with child abusers like Jimmy Savile, prompting a strong backlash from Reform UK.
In comments made to the Sunday Telegraph, Rayner underscored the risks associated with abolishing the act, which addresses what is officially known as intimate image abuse.
“We recognize that the abuse of intimate images is an atrocity, fostering a misogynistic culture on social media, which also spills over into real life,” Rayner articulated in the article.
“Nigel Farage poses a threat to a generation of young women with his dangerous and reckless plans to eliminate online safety laws. Pledging to abolish safety measures without any viable alternative to combat the flood of abuse that would follow reveals a severe neglect of responsibility.”
“It’s time for Farage to explain to British women and girls how he intends to ensure their safety online.”
Labour has rolled out a series of interconnected online ads targeting Farage. An ad launched on Sunday morning linked directly to Rayner’s remarks, asserting, “Nigel Farage wants to make it easier to share revenge porn online,” accompanied by a laughing image of Farage.
According to the Sunday Times, another ad draws attention to Farage’s comments regarding Tate, an influencer facing serious allegations in the UK, including rape and human trafficking, alongside his brother Tristan.
Both the American-British brothers are currently under investigation in Romania and assert their innocence against numerous allegations.
Labour’s ads depict Farage alongside Andrew Tate with the caption “Nigel Farage calls Andrew Tate an ‘important voice’ for men,” referencing remarks made in an interview on the Strike It Big podcast last year.
Lila Cunningham, a former magistrate and Reform UK figure, wrote an article for the Telegraph on Saturday labeling the online safety law a “censorship law” and pointing out that existing legislation already addresses “revenge porn.”
“This law serves as a guise for censorship, providing a pretext to empower unchecked regulators and to silence dissenting views,” Cunningham claimed.
Cunningham also criticized the government’s focus on accommodating asylum seekers in hotels, arguing that it puts women at risk and diverts attention from more pressing concerns.
Since the implementation of stringent age verification measures last month, visits to popular adult websites in the UK have seen a significant decline, according to recent data.
Daily traffic to Pornhub, the most visited porn site in the UK, dropped by 47%, from 3.6 million visitors on July 24 to 1.9 million on August 8.
Data from digital market intelligence firm Similarweb indicates that the next most popular platforms, Xvideos and Xhamster, experienced declines of 47% and 39% respectively over the same period.
As reported initially by the Financial Times, this downturn seems to reflect the enforcement of strict age verification rules commencing on July 25 under the Online Safety Act. However, social media platforms implementing similar age checks for age-restricted materials, like X and Reddit, did not experience similar traffic declines.
A representative from Pornhub remarked, “As we have observed in various regions globally, compliant sites often see a decrease in traffic, while non-compliant ones may see an increase.”
The Online Safety Act aims to shield children from harmful online content, mandating that any site or app providing pornographic material must prevent access by minors.
Ofcom, the overseeing body for this law in the UK, endorses age verification methods such as: verifying age via credit card providers, banks, or mobile network operators; matching photo ID with a live selfie; or using a “digital identity wallet” for age verification.
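In implementation terms, a site supporting several of these routes needs a thin dispatch layer: the user picks a method, the check is delegated to the relevant provider, and only a pass/fail outcome is retained rather than the underlying documents. The following is a hypothetical sketch of that flow in Python; the route names and provider stubs are invented placeholders, not an Ofcom specification or any vendor’s real API:

```python
from typing import Callable

# Placeholder provider checks: each returns True only if the user is
# confirmed as 18+. Real deployments would call out to a card issuer,
# bank, mobile operator, ID-matching vendor, or identity wallet.
def via_credit_card(token: str) -> bool:
    return token == "card-ok"        # stands in for an issuer lookup

def via_photo_id_selfie(token: str) -> bool:
    return token == "id-match"       # stands in for ID/selfie matching

def via_identity_wallet(token: str) -> bool:
    return token == "wallet-proof"   # stands in for a wallet attestation

ROUTES: dict[str, Callable[[str], bool]] = {
    "credit_card": via_credit_card,
    "photo_id": via_photo_id_selfie,
    "wallet": via_identity_wallet,
}

def verify_age(method: str, token: str) -> bool:
    """Dispatch to the user's chosen route; retain only pass/fail."""
    route = ROUTES.get(method)
    return bool(route and route(token))

print(verify_age("photo_id", "id-match"))   # True
print(verify_age("credit_card", "wrong"))   # False
```

The design point is data minimization: only the boolean result needs to be stored, not the card number, ID document, or selfie used to obtain it.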
Additionally, the law requires platforms to block access to content that could be harmful to children, including materials that incite self-harm or promote dangerous behaviors, which has sparked tension over concerns of excessive regulation.
Ofcom contends that the law does not infringe upon freedom of expression, highlighting clauses intended to protect free speech. Non-compliance can lead to penalties ranging from formal warnings to fines amounting to 10% of global revenue, with serious violations potentially resulting in websites being blocked in the UK.
Nigel Farage’s Reform UK party has vowed to repeal the act over the age verification requirement, igniting a heated exchange in which Farage accused the technology secretary, Peter Kyle, of making inappropriate comments about him.
The implementation of age checks has accordingly led to a surge in virtual private network (VPN) downloads, as users seek to circumvent national restrictions on certain websites. VPN applications frequently dominate the top five spots in Apple’s App Store.
The operator of Wikipedia has lost a High Court challenge to the online safety legislation’s categorization rules, under which it could be deemed a high-risk platform subject to the law’s most stringent requirements.
The Wikimedia Foundation warns that if Ofcom classifies it as a Category 1 provider later this summer, it will be compelled to limit access to the site in order to meet regulatory standards.
As a nonprofit entity, the organization stated it “faces significant challenges in addressing the substantial technical and staffing demands” required to adhere to its obligations, which include user verification, stringent user protection measures, and regular reporting responsibilities to mitigate the spread of harmful content.
The Wikimedia Foundation estimates that to avoid being categorized as a Category 1 service, the number of UK users accessing Wikipedia would need to decrease by approximately three-quarters.
Wikipedia asserts it is unlike other platforms expected to be classified as Category 1 providers, such as Facebook and Instagram, due to its charitable nature and the fact that users typically interact only with content that interests them.
Mr Justice Johnson dismissed Wikipedia’s challenge on several grounds but emphasized that the site “offers tremendous value for freedom of speech and expression,” noting that the verdict did not give Ofcom or the government a green light to impose regulations that would severely limit Wikipedia’s operations.
He stated that classifying Wikipedia as a Category 1 provider “would have to be justified as proportionate if it were not to infringe the right to freedom of expression,” but added that it was “premature” to rule on such a classification as Ofcom had not yet determined it to be a Category 1 service.
Should Ofcom deem Wikipedia a Category 1 service in a way that would jeopardize its operations, Johnson suggested that technology secretary Peter Kyle “should consider altering the regulations or exempting this category of services from the law,” and noted that Wikipedia could mount a fresh challenge if the issue were not addressed.
“While the ruling does not provide the immediate legal protection for Wikipedia that we had sought, it underscores the responsibilities facing Ofcom and the UK government in implementing the Online Safety Act,” said Phil Bradley-Schmieg, lead counsel for the Wikimedia Foundation.
He added that the judge had recognized the problems caused by the misalignment of the OSA’s classifications and obligations with Wikipedia’s “significant value, user safety, and the human rights of Wikipedia volunteer contributors.”
Cecilia Aibimee KC, for the government, stated that the minister had taken Ofcom’s guidance into account, specifically considering whether Wikipedia should be exempt from the regulations, but ultimately decided against it. She said Wikipedia was deemed “in principle an appropriate service necessitating Category 1 obligations,” and that the reasoning behind this decision was “neither unreasonable nor without justification.”
A government representative commented: “We are pleased with today’s High Court ruling. This will assist us in our ongoing efforts to implement online safety laws and foster a safer online environment for all.”
The UK’s new online safety laws are generating considerable attention. As worries intensify about the accessibility of harmful online content, regulations have been instituted to hold social media platforms accountable.
However, just days after their implementation, novel strategies for ensuring children’s safety online have sparked discussions in both the UK and the US.
Recently, Nigel Farage, leader of the populist Reform UK party, found himself in a heated exchange with a Labour government minister after announcing his intent to repeal the law.
In parallel, US Republicans convened with British lawmakers and the communications regulator Ofcom. The ramifications of the new law are also being watched keenly in Australia, where plans are afoot to prohibit social media usage for those under 16.
Experts note that the law embodies a tension between swiftly eliminating harmful content and preserving freedom of speech.
Senior Reform UK figure Zia Yusuf stated:
Responding to criticism of the UK legislation, technology secretary Peter Kyle remarked, “If individuals like Jimmy Savile were alive today, they would be committing crimes online, and Nigel Farage claims to be on their side.”
Kyle was referring to measures in the law designed to shield children from grooming via messaging apps. Farage condemned the technology secretary’s comments as “unpleasant” and demanded an apology, which is unlikely to be forthcoming.
“It’s below the belt to suggest we would do anything to assist individuals like Jimmy Savile in causing harm,” Farage added.
The UK is not the only place where concerns have been raised about the law. US Vice President JD Vance claimed that freedom of speech in the UK is “retreating.” Last week, Republican Rep. Jim Jordan, a critic of the legislation, led a group of US lawmakers in discussions with Kyle and Ofcom regarding the law.
Jordan labeled the law as “UK online censorship legislation” and criticized Ofcom for imposing regulations that “target” and “harass” American companies. A bipartisan delegation also visited Brussels to explore the Digital Services Act, the EU’s counterpart to the online safety law.
Scott Fitzgerald, a Republican member of the delegation, noted the White House would be keen to hear the group’s findings.
The Trump administration’s worries have even extended to threats of visa restrictions against Ofcom and EU personnel. In May, the State Department announced it would block entry to the US for “foreigners censoring Americans.” Ofcom has asked for “clarity” regarding the planned visa restrictions.
The intersection of free speech concerns with economic interests is notable. Major tech platforms including Google, YouTube, Facebook, Instagram, WhatsApp, Snapchat, and X are all based in the US and may face fines of up to £18 million or 10% of global revenue for violations. For Meta, the parent company of Instagram, Facebook, and WhatsApp, this could result in fines reaching $16 billion (£11 billion).
On Friday, X, the social media platform owned by self-proclaimed free speech advocate Elon Musk, issued a statement opposing the law, warning that it could “seriously infringe” on free speech.
Signs of public backlash are evident in the UK. A petition calling for the law’s repeal has garnered over 480,000 signatures, making it eligible for debate in Parliament, and was shared on social media by far-right activist Tommy Robinson.
Tim Bale, a political professor at Queen Mary University in London, is skeptical about the law being a major voting issue.
“No petition or protest has significant traction for most people. While this resonates strongly with those online—on both the right and left—it won’t sway a large portion of the general populace,” he said.
According to a recent Ipsos Mori poll, three out of four UK parents are worried about their children’s online activities.
Beeban Kidron, a crossbench peer and prominent advocate for online child safety, told the Guardian that she is “more than willing to engage Nigel Farage and his colleagues on this issue.”
“If the concern is companies targeting algorithms at children, why would Reform put children back in the hands of big tech?”
The new under-18 protections that prompted the latest row mandate age verification on adult sites to prevent underage access. But there are also measures to protect children from content that endorses suicide, self-harm, and eating disorders, and to curtail the circulation of material that incites hatred or promotes harmful substances and dangerous challenges.
Questions have arisen over whether some content is being age-gated unnecessarily. Writing in the Daily Telegraph, Farage alleged that footage of anti-immigrant protests had been “censored,” along with content related to the Rotherham grooming gang scandal.
Such instances were observed on X, which restricted a speech by Conservative MP Katie Lam about the UK’s child grooming scandal. The content was labeled with a notice stating: “local laws temporarily restrict access to this content until X verifies the user’s age.” The Guardian was unable to access X’s age verification service, suggesting that until age checks are fully operational, the platform defaults many users to a child-appropriate experience.
X was contacted for comment about its age checks.
On Reddit, forums dedicated to alcohol abuse and pet care have implemented age checks before granting access. A Reddit spokesperson confirmed that these checks are required under the online safety law to limit content that is illegal or harmful to users under the age of 18.
Big Brother Watch, an organization focused on civil liberties and privacy, said the examples from Reddit and X exemplify the overreach of the new legislation.
An Ofcom representative stated that the law aims to protect children from harmful and criminal content while safeguarding free speech: “There is no necessity to limit legal content accessible to adult users.”
Mark Jones, a partner at London-based law firm Payne Hicks Beach, cautioned that social media platforms might overly censor legitimate content due to compliance concerns, jeopardizing their obligations to remove illegal material or content detrimental to children.
He added that the tension between the pressure to remove harmful content quickly and the principle of free speech is likely to shape how Ofcom’s content rules are applied and enforced.
“To effectively curb the spread of harmful or illegal content, decisions must be made promptly; however, that urgency can lead to incorrect choices. Such is the reality we face.”
The latest initiatives from the online safety law are only the beginning.
Elon Musk’s platform, X, has warned that the UK’s Online Safety Act (OSA) may “seriously infringe” on free speech due to its measures aimed at shielding children from harmful content.
The social media company said that the law’s ostensibly protective aims are undermined by the aggressive enforcement tactics of the communications watchdog, Ofcom.
In a statement shared on its platform, X remarked: “Many individuals are worried that initiatives designed to safeguard children could lead to significant violations of their freedom of expression.”
It further stated that the UK government was likely aware of the risks, having made “conscious decisions” to enhance censorship under the guise of “online safety.”
“It is reasonable to question if British citizens are also aware of the trade-offs being made,” the statement added.
The law, a point of contention politically on both sides of the Atlantic, is facing renewed scrutiny following the implementation of new restrictions on July 25th regarding access to pornography for those under 18 and content deemed harmful to minors.
Musk, who owns X, labeled the law as an “oppression of people” shortly after the enactment of the new rules. He also retweeted a petition advocating for the repeal of the law, which has garnered over 450,000 signatures.
X itself has been compelled to impose age restrictions on certain content. Meanwhile, Reform UK joined the outcry, pledging to abolish the act. That commitment led British technology secretary Peter Kyle to accuse Nigel Farage of aligning himself with the child abuser Jimmy Savile, prompting Farage to describe the comments as “below the belt” and deserving of an apology.
Regarding Ofcom, X claimed that the regulators are employing “heavy-handed” tactics in implementing the act, characterized by “a rapid increase in enforcement resources” and “additional layers of bureaucratic surveillance.”
The statement warned: “The commendable intentions of this law risk being overshadowed by the expansiveness of its regulatory scope. A more balanced and collaborative approach is essential to prevent undermining free speech.”
While X aims to comply with the law, the threat of enforcement and penalties—potentially reaching 10% of global sales for social media platforms like X—could lead to increased censorship of legitimate content to avoid repercussions.
The statement also referred to plans for a National Internet Intelligence Research Team intended to monitor social media for indications of anti-migrant sentiments. While X suggested the proposal could be framed as a safety measure, it asserted that it “clearly extends far beyond that intention.”
“This development has raised alarms among free speech advocates, who characterize it as excessively restrictive. A balanced approach is essential for safeguarding individual freedoms, fostering innovation, and protecting children.”
A representative from Ofcom stated that the OSA includes provisions to uphold free speech.
They asserted: “Technology companies must address criminal content and ensure children do not access defined types of harmful material without needing to restrict legal content for adult users.”
The UK Department of Science, Innovation and Technology has been approached for comment.
Recent statistics indicate that since the implementation of age verification for pornographic websites, the UK is conducting an additional five million online age checks daily.
The Association of Age Verification Providers (AVPA) reported a significant increase in age checks across the UK since Friday, coinciding with the enforcement of mandatory age verification under the Online Safety Act.
Iain Corby, executive director of the AVPA, said its members were now conducting roughly five million additional age checks a day in the UK.
In the UK, use of virtual private networks (VPNs), which mask users’ actual locations and allow them to bypass restrictions on blocked sites, is rapidly increasing. Four of the top five free applications in the UK Apple App Store are VPNs, with popular provider Proton reporting an astonishing 1,800% surge in downloads.
Last week, Ofcom, the UK communications regulator, indicated it may open a formal inquiry into reports of inadequate age checks. It said it will actively monitor compliance with age verification requirements and may investigate specific services as needed.
AVPA, the industry association representing UK age verification companies, has been tallying the checks performed for UK porn providers, which were required to implement “highly effective” age assurance by July 25th.
Member companies were asked to report the number of checks conducted each day for highly effective age assurance.
While the AVPA said it could not provide a baseline for comparison, it noted that effective age verification is new to dedicated UK porn sites, which previously asked users only to confirm that they were over 18.
An Ofcom spokesperson said: “Until now, children could easily stumble upon pornographic and other online content without seeking it out. Age checks are essential to prevent that. We must ensure platforms are adhering to these requirements and anticipate enforcement actions against non-compliant companies.”
Ofcom stresses that service providers must not promote the use of VPNs to circumvent age checks.
Penalties for breaching online safety regulations, including inadequate age verification, can reach 10% of global revenue, and in severe cases sites can be blocked in the UK altogether.
Age verification methods endorsed by Ofcom and utilized by AVPA members include facial age estimation, which analyses a person’s age via live photos and videos; verification through credit card providers, banks, or mobile network operators; photo ID matching, where a user’s ID is compared to a selfie; and a “digital identity wallet” containing proof of age.
Prominent pornographic platforms, including Pornhub, the UK’s leading porn site, have pledged to adopt the stringent age verification measures mandated by the Act.
The law compels sites and applications to protect children from various harmful content, specifically material that encourages suicide, self-harm, and eating disorders. The largest platforms must also take action to prevent the dissemination of abusive content targeting individuals with characteristics protected under equality laws, such as age, race, and gender.
Free speech advocates argue that the child protection rules have led to material on X being unnecessarily age-gated, along with several Reddit forums dedicated to discussions of alcohol abuse.
Reddit and X have been approached for their feedback.
Online safety for children in the UK is reaching a pivotal moment. Starting this Friday, social media and other internet platforms must take action to safeguard children or face substantial fines for non-compliance.
This marks a critical evaluation of the online safety law, a revolutionary regulation that encompasses platforms like Facebook, Instagram, TikTok, YouTube, Google, and more. Here’s an overview of the new regulations.
What will happen on July 25th?
Companies subject to the law are required to implement safety measures that shield children from harmful content. Specifically, all pornography sites must establish stringent age verification protocols. According to Ofcom, the UK communications regulator, 8% of children aged 8 to 14 accessed online pornographic sites or apps within a month.
Furthermore, social media platforms and major search engines must block access for children to pornography and content that promotes or encourages suicide, self-harm, and eating disorders. This may involve completely removing certain feeds for younger users. Hundreds of businesses will be impacted by these regulations.
Platforms must also minimize the distribution of other potentially harmful content, such as promoting dangerous challenges, substance abuse, or instances of bullying.
What are the suggested safety measures?
Recommended measures include: Algorithms that suggest content to users must exclude harmful materials. All sites and applications must implement procedures to rapidly eliminate dangerous content. Additionally, children should have a straightforward method to report concerns. Compliance is flexible if businesses believe they have effective alternatives to meet their child safety responsibilities.
Services deemed “high risk”, like major social media platforms, must utilize “highly effective” age verification methods to identify users under 18. If a social media platform hosts harmful content and does not use age checks, it must instead ensure children have an age-appropriate experience.
X states that if it cannot determine a user’s age as 18 or older, it defaults to sensitive content settings, thereby restricting adult material. They are also integrating age estimation technology and ID verification to ensure users are not underage. Meta, the parent company of Instagram and Facebook, claims to have a comprehensive approach to age verification that includes a teen account feature set by default for users under 18.
Mark Jones, a partner at the law firm Payne Hicks Beach, noted that where the Online Safety Act’s requirements are unclear, firms like his work to clarify them for companies.
The Molly Rose Foundation, set up by the family of British teenager Molly Russell, who tragically lost her life in 2017 due to harmful online content, is advocating for further changes, including the prohibition of perilous online challenges and requiring platforms to proactively mitigate depressive and body image-related content.
How will age verification be implemented?
Some age verification methods for pornographic providers endorsed by Ofcom include: assessing a person’s age through live photos and videos (facial age estimation); verifying age via credit card, bank, or mobile network operator; matching photo ID against a selfie; and utilizing a “digital identity wallet” that contains proof of age.
Ria Moody, a lawyer at Linklaters, noted that Ofcom requires age checks to be highly effective, and that simply asking users to confirm they are over 18 does not ensure a user’s age, so platforms cannot rely on it.
What does this mean in practice?
Pornhub, the UK’s most frequented online porn site, has stated it will implement a “regulatory approved age verification method” by Friday, though specific methods have yet to be disclosed. Another adult site, OnlyFans, is already using facial age verification software, which estimates users’ ages without saving their facial images, relying instead on data from millions of other images. A company called Yoti provides this software and has also made it available on Instagram.
Last week, Reddit began verifying the ages of users accessing forums and threads containing adult content. The platform utilizes technology from a company named Persona, which verifies age using uploaded selfies or government-issued ID photos. Reddit does not retain the photos, instead storing validation statuses to streamline the process for returning users.
How accurate is facial age verification?
The software allows websites or apps to set a “challenge” age (e.g., 20 or 25) to minimize the number of underage users accidentally accessing content. When Yoti set a challenge age of 20, less than 1% of 13-17-year-olds were mistakenly verified.
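In outline, the challenge-age approach adds a buffer above the legal threshold: the estimator must judge a face to be at least the challenge age, not merely 18, before granting access, and borderline estimates fall back to a stronger check such as photo ID. Here is a minimal sketch of that decision rule in Python; the thresholds and the fallback step are illustrative assumptions, not Yoti’s actual pipeline:

```python
LEGAL_AGE = 18
CHALLENGE_AGE = 20   # buffer above the legal threshold; some sites use 25

def gate_by_estimated_age(estimated_age: float) -> str:
    """Turn a facial age estimate into an access decision.

    Passing requires clearing the challenge age rather than the legal
    age, so estimation error rarely lets under-18s straight through.
    """
    if estimated_age >= CHALLENGE_AGE:
        return "allow"              # comfortably above the threshold
    if estimated_age >= LEGAL_AGE:
        return "fallback-check"     # near the line: require photo ID instead
    return "deny"

print(gate_by_estimated_age(23.4))  # allow
print(gate_by_estimated_age(18.9))  # fallback-check
print(gate_by_estimated_age(16.2))  # deny
```

The wider the buffer, the fewer minors slip through on estimation error alone, at the cost of routing more young-looking adults to the fallback check.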
What other methods are available?
Another direct approach entails requiring users to present formal identification, like a passport or driver’s license. Importantly, the ID details need not be stored and can be used solely to verify access.
Will all pornographic sites conduct age checks?
They are expected to, but many smaller sites might try to circumvent the regulations, fearing it will deter demand for their services. Industry representatives suggest that those who disregard the rules may await Ofcom’s response to violations before determining their course of action.
How will child protection measures be enforced?
Ofcom has a broad spectrum of penalties it can impose under the law. Companies can face fines of up to £18 million or 10% of their global revenue for violations—potentially amounting to $16 billion for Meta. Additionally, sites or apps can receive formal warnings. For severe violations, Ofcom may seek a court order to restrict the availability of the site or app in the UK.
Moreover, senior managers at technology firms could face up to two years in prison if they are found criminally liable for repeated breaches of their obligations to protect children and for ignoring enforcement notices from Ofcom.
The UK’s primary media regulator has vowed to deliver a “significant milestone” in the pursuit of online safety for children, although campaigners caution that the age verification measures must be enforced strictly against major tech firms.
Ofcom’s chief executive, Melanie Dawes, spoke on Sunday about the new framework, to be introduced later this month, which marks a pivotal change in how the world’s largest online platforms are regulated.
However, she faces mounting pressure from advocates, many of whom are parents who assert that social media contributed to the deaths of their children, claiming that the forthcoming rules could still permit minors to access harmful content.
Dawes stated to the BBC on Sunday: “This is a considerable moment because the law takes effect at the end of the month.”
“At that point, we expect broader safeguards for children to become operational. We expect platforms hosting material inappropriate for under-18s, such as pornography and content related to suicide and self-harm, either to remove it or to implement robust age checks for those materials.”
She continued: “This is a significant moment for the industry and a critical juncture.”
Melanie Dawes (left) remarked that age checks are “a significant milestone for the industry.” Photo: Jeff Overs/BBC/PA
The regulations set to take effect on July 25th are the latest steps under the online safety law enacted in 2023 by the Conservative government.
The legislation was partially influenced by advocates like Ian Russell, whose 14-year-old daughter, Molly, tragically took her own life in 2017 after being exposed to numerous online resources concerning depression, self-harm, and suicide.
Tory ministers were criticized in 2022 for removing sections of the bill that would have regulated “legal but harmful” content.
Russell, who previously called the act “timid,” expressed concerns on Sunday about its enforcement by Ofcom. He noted that while the regulator lets tech companies determine their own age checks, it will evaluate the effectiveness of those measures.
Russell commented: “Ofcom’s public relations often portray a narrative where everything will improve soon. It’s clear that Ofcom must not only prioritize PR but must act decisively.”
“They are caught between families who have suffered losses like mine and the influence of powerful tech platforms.”
Ian Russell, a father currently advocating for child internet safety, expressed concerns about the enforcement of the law. Photo: Joshua Bratt/PA
Russell pressed Dawes to leverage her influence to urge the government for more stringent actions against tech companies.
Some critics have charged the minister with leaving substantial regulatory loopholes, including a lack of action against misinformation.
A committee of lawmakers recently asserted that social media platforms facilitated the spread of misinformation following a murder in Southport last year, contributing to the unrest that ensued. Labour MP Chi Onwurah, chair of the Science and Technology Committee, remarked that the online safety law “is unraveling.”
Dawes has not sought authority to address misinformation, but stated, “If the government chooses to broaden the scope to include misinformation or child addiction, Ofcom would be prepared to implement it.”
Separately, she criticized the BBC over its handling of its Glastonbury coverage, questioning the decision to keep broadcasting Bob Vylan’s performance as the singer led anti-Israel chants.
“The BBC needs to act more swiftly. We need to investigate these incidents thoroughly. Otherwise, there’s a genuine risk of losing public trust in the BBC,” she stated.
Adele Zeynep Walton sensed something was off when she emerged from a caravan in the New Forest at 8am, camping with her boyfriend. Initially frustrated by the early start, she quickly realized something was wrong when a car arrived unexpectedly; approaching it, she found her mother “hysterical.” “Right away,” she recalls, “I thought, ‘That’s Amy.’”
Amy, Walton’s younger sister, was 21 and had been struggling with her mental health for several months. She had a passion for music technology and art, and her striking self-portraits adorn the family home in Southampton. A big fan of Pharrell Williams, she was once called up to join him on stage at a concert. But as her mental health declined, she became increasingly unreachable. “For two months, I had no idea where she was or what she was doing,” Walton says.
That October morning in 2022, Walton uncovered a devastating truth. Amy was found dead in a hotel room in Slough, Berkshire, presumed to have taken her own life. In the following days, Walton and her family would begin to understand Amy’s path—a journey facilitated by a complex web of online connections.
She loved music and art… some of Amy’s self-portraits in her family home. Photo: Peter Flude/Guardian
Walton, a 25-year-old journalist, pieced together that Amy had engaged with a suicide promotion forum that the Guardian has opted not to name. The site has been linked to at least 50 deaths in the UK and is currently under investigation by Ofcom under the online safety law. Police investigating Amy’s death revealed that it was through this forum that Amy learned how to obtain the substance that ended her life and met the man who flew to Heathrow to accompany her at the end. (He was initially charged with assisting suicide, but no further action was taken.)
Sitting in the garden of her parents' house in Southampton, Walton describes how she came to write about what happened. Her book, Logging Off: The Human Cost of Our Digital World, is partly a tribute to her sister and partly an exploration of how the digital world we all casually browse can perpetuate harm.
"I thought: I need to dedicate myself to uncovering this. Why doesn't the public know about these harms? Because they are ongoing." She points to the case of Vlad Nikolin-Caisley, from Southampton: earlier this month, a woman was arrested on suspicion of assisting his suicide.
With the inquest into Aimee's death due in June, Walton hopes that online factors will be examined and that "online harm" will be recorded as a cause or contributing factor in her sister's death.
The phrase has become painfully familiar to her. "Until I lost Aimee, I didn't understand what 'online harm' meant," she reflects. She first heard the term from Ian Russell, the online safety campaigner whose daughter Molly was 14 when she took her own life after being exposed to images and videos of self-harm. In a landmark ruling, the coroner found that online content had contributed to Molly's death "in a more than minimal way." Walton hopes a similar view will be taken in her sister's case, believing that recording the death simply as "suicide" fails to account for the influence of the digital world, placing unfair blame on Aimee while leaving that world unregulated.
"We can become vulnerable at any time in our lives"… a photograph of Aimee. Photo: Peter Fluid/Guardian
Walton initially described her sister's death as a "suicide," but the word no longer feels adequate to her. If suicide is a voluntary act, how much choice does a person really have when they are being coached by an online community built to encourage it? And if individuals are genuinely free to choose, Walton asks, what does it mean that algorithms kept presenting Aimee with self-harm content? "That's where it becomes hard for me to call it a suicide," she says. "My intuition tells me Aimee was groomed, and that her decision was not entirely hers."
Her deep dive into these issues has turned Walton into an activist. She works with Bereaved Families for Online Safety and serves as a young people's ambassador for People vs Big Tech. "We must address these issues head-on," she emphasizes. "If we don't, it feeds the belief that online safety is solely a personal responsibility."
Walton recounts how police told her that the man had shared the hotel room with Aimee for the 11 days before her death. The room contained Aimee's notes, which Walton says were too painful to read. He later told police that he had been "working." She says the man called 999 after Aimee ingested the toxic substance but declined to administer CPR. The substance has since been linked to 88 deaths in the UK and was allegedly sourced from Kenneth Law, a Canadian man under investigation by the National Crime Agency.
A New York Times investigation revealed that the forum was established by two men. Walton visited it herself, wanting to trace her sister's final interactions. "Many posts essentially say: your family doesn't care about you; you should do this. They phrase it as, 'When are you getting on the bus?'"
Walton sees the forum as a form of radicalization, pushing people towards extreme acts they might never otherwise have contemplated. She is haunted by the thought that the man with Aimee may have been "living out a twisted incel fantasy, in which a vulnerable young woman seeks to end her life."
Before Aimee's death, Walton was largely neutral about technology. Now, she says, "the digital world is a distorted reflection of our offline world, amplifying its dangers." In her book, her survey of the victims of online harm spans a range of experiences: Archie Battersbee, who had been on TikTok on the day he suffered a catastrophic brain injury; Meareg Amare Abrha, a university professor in Ethiopia who was killed after inflammatory posts about him spread on Facebook; Amazon workers fighting for better pay and conditions; and "Tony," a 90-year-old neighbour who faced digital exclusion and whom Walton taught to use a smartphone.
"For too long, the facade of technology has been equated with progress and innovation, and that's a notion I challenge in my book," she asserts. She reels off the familiar names – Zuckerberg, Cook, Pichai, Bezos, Musk – asking, "Where are the engineers?", and stressing how interconnected these networks of power are.
"The campaign allows survivors to regain control"… Aimee's bedroom in her family home. Photo: Peter Fluid/Guardian
Yet Walton sometimes feels like the digital equivalent of a 1970s climate scientist. She acknowledges that her own relationship with technology is complicated, much as Aimee's was. Her most cherished memories of playing with her sister revolve around the family computer in their parents' bedroom.
"Chadwick and the Despicable Egg Thief – there's video of us playing it at three years old. We played colour games over and over. I'd been taking photos on a digicam since I was eight – not to mention Xbox, Nintendo, computers. All just for fun!"
In a way, Walton says, she lives a "double life." Her book casts a critical eye over her own habits: while writing it she lived in tracksuits, yet none of her Instagram posts reveal that side of the journey. She uses an app to limit her screen time and shares TikToks about "logging off." Video calls have also allowed her family, many of whom live in Türkiye, to "grieve together" since her sister's death.
Promoting her book has made it tough to detach from screens. “I feel like a hypocrite!” she admits. “My screen time this week is nine and a half hours.” A day? “I don’t like it,” she replies, “I typically average six hours.”
Ultimately, she doesn’t aim for perfection, stating, “I’m in control of it all, guys.”
In her book, Walton writes that campaigning "allows survivors to reclaim the control that was taken from them" – a line that gives her pause when it is quoted back to her, because the process sounds exhausting. "Did I say that?" she asks, surprised. "But if I hadn't engaged in all this, where would that anger go? It would consume me and make me unwell."
She has also lobbied her local MPs (first Royston Smith, then Darren Paffey) and the technology secretary, Peter Kyle, seeking answers about what happened to Aimee. "When we discuss online safety, it's often framed in terms of protecting children. That's crucial, but I also represent Aimee; it's about all of us. We can become vulnerable at any stage in our lives. If we focus solely on children's safety, we turn 18 still not knowing how to navigate a healthy digital life," she explains.
"I feel it's my duty to Aimee, since I wish I could have shielded her." Her eyes glisten with unshed tears.
Balancing her grief with activism has proven challenging. “Some days I genuinely can’t handle it, or I just need a day in bed, as my body struggles to keep pace with all the emotional weight.”
“But this is my mission. Those in power only act if they feel the weight of this pain. If Mark Zuckerberg experienced the loss of a child due to online harm, perhaps he would finally understand, ‘Oh my God, I need to pay attention.'”
In the UK and Ireland, contact Samaritans on freephone 116 123 or email jo@samaritans.org or jo@samaritans.ie. In the US, call or text the National Suicide Prevention Lifeline on 988, chat at 988lifeline.org, or text HOME to 741741 to reach a crisis counselor. In Australia, the crisis support service Lifeline is on 13 11 14. Other international helplines can be found at befrienders.org.
European officials have initiated an investigation into four adult websites suspected of inadequately preventing minors from viewing adult content.
Following a review of the companies’ policies, the European Commission criticized PornHub, StripChat, XNXX, and XVideos for not implementing adequate age verification procedures to block minors from accessing their sites.
This inquiry has been launched in accordance with the EU’s Digital Services Act (DSA), a comprehensive set of regulations aimed at curbing online harm such as disinformation, cyber threats, hate speech, and counterfeit merchandise. The DSA also enforces stringent measures to safeguard children online, including preventing mental health repercussions from exposure to adult materials.
The commission noted that all four platforms relied on a simple one-click self-certification for age verification.
Announcing the investigation, the commission said: "Today marks a significant step toward child protection online in the EU, as the enforcement action we are initiating… clearly indicates our commitment to hold four major adult content platforms accountable for effectively safeguarding minors under the DSA."
While no specific deadline has been set for concluding the investigation, officials stressed that they aim to act swiftly on potential next steps based on the platforms’ responses.
The platforms can resolve the investigation by implementing an age verification system recognized as effective by EU regulators. Failure to comply could result in fines of up to 6% of their global annual revenue.
The DSA regulates platforms with over 45 million users, including Google, Meta, and X, while national authorities in each of the 27 member states are responsible for those that fall beneath this threshold.
On Tuesday, the commission also announced that StripChat no longer qualifies as a "very large online platform." Following an appeal by its parent company, Techinius Ltd, oversight of the site will pass from Brussels to the national regulator in Cyprus.
However, this new designation will not take effect until September, meaning that the investigation into age verification remains active.
The child protection responsibilities of StripChat will continue unchanged.
Aylo FreeSites, the parent company of Pornhub, is aware of the ongoing investigation and has stated its “full commitment” to ensuring the online safety of minors.
“We are in full compliance with the law,” the company remarked. “We believe the effective way to protect both minors and adults is to verify user age at the point of access through their device, ensuring that websites provide or restrict access to age-sensitive content based on that verification.”
Techinius has been approached for comment, as has a Brussels-based attorney who has recently represented Web Group Czech Republic, the parent company of XVideos, and NKL Associates, the parent company of XNXX, in EU legal matters.
In a controversial new development, the UK government has introduced an AI-driven crime prediction tool that flags individuals as "high risk" for potential violence based on personal histories such as mental illness and addiction.
Meanwhile, in Argentina, authorities are launching an Artificial Intelligence Unit for Security aimed at using machine learning for crime prediction and real-time monitoring. In Canada, police in cities such as Toronto and Vancouver have used Clearview AI's facial recognition technology alongside predictive policing systems. And in several U.S. cities, AI facial recognition is integrated with street surveillance to identify suspects.
The notion of predicting violence mimics the vision presented in Minority Report, which is compelling; however, …
Officials at federal health agencies have decided to reverse the dismissals of numerous scientists in the Food and Drug Administration's food safety labs, and are reviewing whether other critical positions were also wrongly cut.
A representative from the Department of Health and Human Services confirmed the reinstatement of these employees and mentioned that several individuals will also be returned to the office responsible for handling freedom of information requests.
In recent months, approximately 20% of FDA positions have been cut, marking one of the most significant workforce reductions among all agencies impacted by the Trump administration.
An HHS spokesperson said the dismissals had been made in error because of incorrect employment codes.
The decision to rehire the scientists – who research foodborne illness and product safety, including infant formula – came shortly after contradictory statements by the FDA commissioner, Dr. Marty Makary, in a recent media interview.
"You could argue that no cuts were made to scientists and inspectors," Dr. Makary said during Wednesday's CNN broadcast.
In fact, many scientists were laid off from food and drug safety labs nationwide, including in Puerto Rico, and from the veterinary unit working on avian flu safety. Employees indicated that scientists in the tobacco division who were let go in February, including those examining the health effects of vaping, have not been put on paid leave or considered for reinstatement.
It remains uncertain how many dismissed employees will be permitted to return.
According to a department spokesperson, about 40 employees from the Moffett lab in Chicago and a lab in the San Francisco area are being offered their positions back. Researchers at these facilities investigate many facets of food safety, including how chemicals and bacteria permeate food packaging and how to keep infant formula safe. Some scientists in Chicago have also checked other labs' findings to ensure the safety of milk and seafood.
Dr. Robert Califf, who served as FDA commissioner under President Biden, described the abrupt loss of agency expertise as a gutting of the institution. He noted that the FDA is already falling behind on meetings designed to help businesses develop safe products.
“Much of that involves routine daily tasks that significantly affect overall safety, though they’re not particularly controversial,” he commented. “It just requires effort, and they need personnel present to carry out their duties.”
Dr. Makary also said the layoffs did not affect product reviewers or inspectors. In practice, though, their work is being hampered by voluntary departures, the loss of support staff, and widespread disruption across the agency as many employees look for the exit, according to former employees.
Hundreds of drug and medical device reviewers, representing about a quarter of the agency's workforce, have had to step back from major projects, as reported by CNBC. Under FDA ethics rules, staff interviewing for outside jobs are barred from conducting agency reviews of products made by the firms they are applying to.
Dr. Scott Gottlieb, another former FDA commissioner, characterized the job cuts as "deep," impairing the policy office that works out which drugs can be offered as low-cost generics. Approvals of generic drugs can save consumers billions.
The loss of support staff who oversaw inspections of food and drug facilities abroad has also raised safety concerns. Many of those who lost their positions were responsible for surveillance and for ensuring inspectors' safety, especially in hostile regions.
The communications watchdog has been accused of siding with big tech over the safety of under-18s, after England's children's commissioner criticized its new measures for tackling online harm. Rachel de Souza warned Ofcom last year that its proposals for protecting children under the online safety laws were inadequate. She said she was disappointed that the watchdog's newly published code of practice ignored her concerns, prioritizing the business interests of technology companies over children's safety.
De Souza, whose role is to advocate for children's rights, said that more than a million young people had shared their views with her, and that the online world was one of their biggest worries. She called for stronger protections and criticized the lack of ambition in the current code of practice.
Measures in Ofcom's code include effective age checks on social media platforms, algorithms that filter out harmful content, the swift removal of dangerous material, and an easy way for children to report inappropriate content. Sites and apps covered by the code must make these changes by July 25 or face fines for non-compliance.
Critics, including the Molly Rose Foundation and the online safety campaigner Beeban Kidron, argue that the measures are too cautious and lack specific harm-reduction targets. Ofcom defended its stance, saying the rules aim to create a safer online environment for children in the UK.
The Duke and Duchess of Sussex have also advocated for stricter online protections for children, calling for measures to reduce harmful content on social media platforms. Technology Secretary Peter Kyle is considering implementing a social media curfew for children to address the negative impacts of excessive screen time.
Overall, the new code of practice aims to protect children from harmful online content, with stringent requirements on platforms to ensure a safer online experience. Failure to comply could result in significant fines or even legal action against tech companies and their executives.
As of July, social media and other online platforms must block harmful content for children or face severe fines. The Online Safety Act requires tech companies to implement these measures by July 25 or, in extreme cases, risk being blocked.
Ofcom has issued more than 40 measures covering websites and apps used by children, from social media to gaming. Services deemed "high-risk" must implement effective age checks and tune their algorithms to protect under-18s from harmful content. Platforms must also promptly remove dangerous content and give children an easy way to report inappropriate material.
Ofcom's chief executive, Melanie Dawes, described the changes as a "reset" for children online, warning that businesses failing to comply risk consequences. The new code aims to create a safer online environment, with stricter controls on harmful content and age verification measures.
Additionally, there is discussion about implementing a social media curfew for children, following concerns about the impact of online platforms on young users. Efforts are being made to safeguard children from exposure to harmful content, including violence, hate speech, and online bullying.
The online safety advocate Ian Russell, who lost his daughter to online harm, believes the new code leans too far towards tech companies' interests rather than safeguarding children. His charity, the Molly Rose Foundation, argues that more must be done to protect young people from harmful online content and dangerous challenges.
David, a 46-year-old father from Calgary, Canada, saw no problem at first when his 10-year-old son started playing on Roblox, a user-generated gaming platform and virtual environment that has exploded in popularity in recent years, especially among younger gamers.
"We thought it was a way for him to maintain a level of social interaction during lockdown," David said, assuming his son would use the platform's chat feature to speak to friends he knew personally.
After a while, his parents found him talking to someone in his room in the middle of the night.
"We discovered that a man from India had approached him on Roblox and coached him to bypass our internet security controls," David said. "This person persuaded our son to take compromising nude pictures and videos and send them via Google Mini."
"It was tough to get to the root of why my son did it. I think he was lonely and thought this was a real friend. I think he was given gifts on Roblox that made him feel special. It was truly every parent's worst nightmare."
David was among parents from around the world who shared with the Guardian accounts of their primary school-age children being badly affected or seriously harmed by experiences on Roblox. Many corroborated reports from last year alleging that Roblox exposed children to grooming, pornography, violent content and abusive speech.
Some parents said Roblox was a creative outlet for their children that brought them joy or improved skills such as communication and spelling, but the majority who got in touch expressed serious concern. Their worries were primarily about the extraordinary levels of addiction they observed in their children, but also about weak parental controls, grooming, emotionally disturbing messages, bullying, avatars in Nazi uniforms, adults inappropriately talking to children on the platform, and traumatic content in games that children could nevertheless access.
In response, Roblox acknowledged that children playing on the platform could be exposed to harmful content and "bad actors" – an issue the company says it is working hard to fix, but one that requires industry-wide collaboration and government intervention. The company said it had "deep sympathy" for the families affected.
The newly announced additional safety tools, aimed at giving parents greater ability to manage their children's activity on the site, have failed to convince many of the parents the Guardian spoke to.
"I don't think the changes will address my concerns," said Emily, a mother from Hemel Hempstead.
"The new features are useful, but they don't stop children from accessing inappropriate or frightening content. Creators are allowed to choose the age rating for the games they make, and those ratings aren't always appropriate or accurate."
She said her seven-year-old daughter had trouble sleeping after she was shot in a Roblox game that took her to a room with an avatar introduced as "your dad."
Although Roblox says it has introduced new, easy-to-use "remote management" parental controls, parents found the settings extremely difficult to navigate and said regularly reviewing their child's activity takes several hours. It was also impossible to tell who was really behind many usernames.
"Roblox monitors the type of language used, such as profanity, but there is no real way of policing players' ages."
The company pointed out that last year it changed its default settings so that users under 13 can no longer send direct messages to others on Roblox outside of games and experiences.
However, Roblox admitted that it struggles to verify users' ages, saying that "age verification for users under 13 is still a challenge for the wider industry."
Nelly*, a Dublin mother in her 40s, said her nine-year-old daughter had just finished a course of play therapy to process sexual content she saw on Roblox, which had caused a panic attack.
“I thought it was okay to play,” she said. “I didn’t allow her to be friends with strangers either, and I thought this would be enough, but it wasn’t.
"There was an area she went into where people were wearing underwear, and someone came in and lay on top of her."
Many parents felt that Roblox exploited children's "underdeveloped impulse control," as one father put it, constantly nudging them to gamble and to stay on the platform, and leading many children to lose interest in other, real-world activities.
Jenna, from Birmingham, said that within two months of her children starting to play Roblox, their "whole life [had] been taken over" by the platform – an account echoed by scores of other parents.
"I feel like I'm living with two addicts," she said. "If they're not playing, they want to watch videos about it… When they're told to get off, it's like you've cut them off from their final fix – screaming, arguments, sometimes pure rage."
Peter, 51, a London artist and father of three boys, said his 14-year-old son became so engrossed in Roblox and his devices that he turned violent, breaking a window with his fist when the game was switched off.
"The people who run Roblox don't give a shit that parents can't control the game. There's nothing we haven't tried. We're in therapy now," he said.
The Roblox CEO has advised parents to keep their children off the platform if they are worried. Maria, a mother of three from Berkshire, said that is difficult because children feel socially excluded when they are offline. She was among many parents who pointed out that the platform's monetized elements – unlocking higher game levels and personalization features – have become status symbols among children.
In a statement, Roblox said: "We deeply sympathize with the parents who described their children's negative experiences on Roblox; this is not what we strive for and does not reflect the online civic space we want to build for everyone.
"Tens of millions of people have positive, enriching and safe experiences on Roblox every day, in a supportive environment that encourages connection with friends, learning and developing important STEM skills."
The UK communications regulator has announced its first investigation under the new online safety law: an inquiry into an online suicide forum.
Ofcom is investigating whether the site has violated the Online Safety Act by failing to take appropriate measures to protect users from illegal content.
The law requires tech platforms to tackle illegal material, such as promoting suicide, or face the threat of fines up to £18 million or 10% of global revenue. In extreme cases, Ofcom also has the power to block access to UK sites or apps.
Ofcom did not name the forum under investigation. It is examining whether the site has taken appropriate steps to protect its UK users, whether it failed to complete a risk assessment of harms required under the law, and whether it responded appropriately to requests for information.
"This is the first investigation opened into an individual online service provider under these new laws," Ofcom said.
The BBC reported in 2023 that the forum – easy for anyone to access on the open web – had been linked to at least 50 deaths in the UK, and that its tens of thousands of members discuss topics including methods of suicide.
Last month, duties came into effect under the law covering an estimated 100,000 services in its scope, from small sites to large platforms such as X, Facebook and Google. The act lists 130 "priority offences" – categories of illegal content that services must tackle proactively by putting moderation systems in place.
"We have been clear… that failure to comply with the new online safety duties, or to respond adequately to information requests, can lead to enforcement action, and we will not hesitate to take prompt action where we suspect there may be serious breaches," Ofcom said.
A large donation was reportedly made to the Molly Rose Foundation by Meta and Pinterest, two major companies in the online sphere. The foundation was established as part of the Internet Safety Campaign and is named after Molly Russell, a 14-year-old who tragically took her own life in 2017 after being exposed to harmful content related to suicide and self-harm on social media platforms.
The foundation's latest annual report mentions grants received from anonymous donors, with details of the donations kept private at their request.
According to reports from the BBC, Meta and Pinterest are believed to have made these donations starting from 2024 and are expected to continue for the foreseeable future. The exact amount of the donations has not been disclosed, but it is known that the Russell family has not received any financial compensation from the contributions.
In a statement, the Russell family expressed their commitment to utilizing the funds for the shared purpose of promoting a positive online experience for young people, as a response to Molly’s tragic passing. They clarified that they will never accept any compensation related to Molly’s death.
These donations come at a time when social media companies are facing heightened scrutiny for the impact of their platforms on the mental health of children. Meta announced significant policy changes, including the removal of fact checkers to enhance freedom of speech and reduce censorship, relying on users to report objectionable content instead.
The Molly Rose Foundation has raised concerns about the heightened risk of young people being exposed to harmful content online due to these changes. They have launched campaigns advocating for stronger online safety regulations and increased accountability for content driven by algorithms.
The charity has recently expanded its team, recruiting a CEO, two public policy managers, a communications manager, and a fundraiser in the past nine months. Molly’s father, Ian Russell, serves as the foundation’s unpaid trustee and continues to be a prominent figure in internet safety advocacy.
Both Meta and Pinterest were contacted for comments by The Guardian but have not responded at the time of reporting.
Campaigners for child safety have cautioned the government against including significant online regulations in the UK-US trade deal, labelling any potential compromise as a “disturbing betrayal” that goes against public sentiment.
As reported on Thursday, the preliminary transatlantic trade agreement contains provisions to uphold the UK's online safety regulations, despite objections from the White House, which argues the rules could endanger freedom of speech.
The Molly Rose Foundation, established by the relatives of Molly Russell, a British teenager who tragically ended her life after encountering harmful online content, expressed disappointment and dismay at the prospect of these regulations being used as bargaining chips in a trade agreement.
In a statement to business secretary Jonathan Reynolds, the MRF urged against continuing the troubling trend of compromising child safety.
Reports in the online newsletter Playbook revealed a commitment to enforce the Online Safety Act (OSA) alongside another law, the Digital Markets, Competition and Consumers Act, with a focus on big tech platforms.
Concerns were heightened this week after the US state department contacted the UK communications regulator, Ofcom, about the OSA's potential impact on freedom of expression.
The Online Safety Act is geared towards safeguarding children, mandating that individuals under 18 are shielded from harmful material like content related to self-harm and suicide. Companies found in violation of the Act can face hefty fines or service suspension in the UK.
Beeban Kidron, a crossbench peer and internet safety advocate, criticized Labour for potentially trading child safety protections for economic benefits. The NSPCC urged the government not to backtrack on its commitments to children's online safety.
When questioned in parliament about the inclusion of the online safety and digital competition laws and the digital services tax in trade discussions, the business secretary acknowledged differing opinions on issues like VAT but declined to go into specifics. Sources close to Reynolds did not dispute Playbook's reporting.
Peter Kyle, the Technology Secretary, affirmed the government’s stance on online security, asserting that protections for children and vulnerable individuals are non-negotiable.
A spokesperson for the prime minister reiterated the government’s steadfast position on online safety, emphasizing the importance of safeguarding children online and ensuring that illegal activities offline remain prohibited on the internet.
The health secretary, Robert F. Kennedy Jr., announced widespread cuts at federal health agencies, including the Food and Drug Administration, saying they would eliminate duplicative services and paper pushers.
However, interviews with more than a dozen current and former FDA staff paint a different picture of the widespread impact of the layoffs, which ultimately cut the agency's workforce by 20%. Among those dismissed were experts who navigated a maze of laws to determine whether expensive drugs can be sold as low-cost generics; lab scientists who tested food and drugs for contaminants or deadly bacteria; veterinary division experts investigating avian flu infections; and researchers who monitored broadcast advertising for false claims about prescription drugs.
In many areas of the FDA, no employees remain to process pay for overseas inspectors, handle retirement or layoff paperwork, or manage agency credit cards. Even the agency's library, whose medical journal subscriptions researchers and experts relied on, has been closed and the subscriptions cancelled.
The FDA's new commissioner, Dr. Marty Makary, made a much-anticipated appearance at the agency's Maryland headquarters on Wednesday, giving a speech that outlined broad problems in the healthcare system, including the rise of chronic disease. Employees were given no formal opportunity to ask questions.
Approximately 3,500 FDA employees are expected to lose their jobs under the cuts. A spokesman for Health and Human Services did not respond to questions.
When the Trump administration carried out its first round of cuts at the FDA in February, it gutted teams of scientists doing the nuanced work of ensuring the safety of surgical robots and of devices that inject insulin in children with diabetes. Some of those layoffs, described by former FDA officials as arbitrary, were quickly reversed.
Dr. David Kessler, a former FDA commissioner who advised the Biden White House on the pandemic response, said the latest round of layoffs has stripped the agency of decades of vital experience and knowledge.
"I think it's devastating, haphazard, thoughtless and chaotic," he said. "I think they need to be reversed."
It remains unclear whether any of the lost jobs will be restored by the administration. The 15 current and former staff members interviewed described the layoffs and their expected impact on food, drugs and medical devices; some spoke on condition of anonymity, fearing job loss or retaliation.
A bill seeking to ban addictive smartphone algorithms aimed at young teenagers has been watered down after opposition from the technology secretary, Peter Kyle, and the education secretary, Bridget Phillipson.
The Safer Phones Bill, introduced by the Labour MP Josh MacAlister, is due to be debated in the Commons on Friday. Despite support from MPs and child protection charities, the government has opted to study the issue further rather than make immediate changes.
Government sources indicate that the new, weaker proposal will be accepted, as MacAlister's original bill did not have ministerial support.
The government believes more time is needed to assess the impact of mobile phones on teenagers and to evaluate emerging technologies that can control the content produced by phone companies.
Kyle opposed the stronger bill, which some campaigners had hoped would become, in effect, a second online safety act.
A source close to Kyle said he is not fundamentally against government intervention on the issue, but that the work is still at an early stage.
The original proposal included requirements for social media companies to exclude young teens from their algorithms and limit addictive content for those under 16. However, these measures were removed from the final bill.
Another measure, banning mobile phones in schools, was dropped after objections from Bridget Phillipson, who believes schools should set their own policies. Uncertainties remain over potential penalties for violations.
The health secretary, Wes Streeting, has been vocal about the problem of addictive smartphones and publicly supported MacAlister's bill.
The revised private member's bill instructs the chief medical officer, Prof Chris Whitty, to investigate the health impacts of smartphone use.
MacAlister hopes the bill will prompt the government to take addictive smartphone use among children more seriously, rather than focusing only on harmful or illegal content.
If ministers commit to adopting the new measures as anticipated, MacAlister will not push the bill to a vote.
The government has pledged to “publish a research plan on the impact of social media use on children” and seek advice from the UK’s chief medical officer on parents’ management of their children’s smartphone and social media usage.
Polls indicate strong public support for measures restricting young people’s use of social media, with a majority favoring a ban on social media for those under 16.
A PhD scientist who issues tsunami alerts. A flight director for hurricane hunter aircraft. Researchers who study which communities are prone to flooding during storms.
They were among the more than 600 workers laid off last week by the Trump administration, roughly a 5% reduction in the combined workforce of the National Weather Service and its parent agency, the National Oceanic and Atmospheric Administration (NOAA).
Kayla Besong, a physical scientist at a tsunami warning center, was one of those affected. A key member of the center's monitoring team, which her dismissal reduced from 12 members to 11, she programmed systems that assessed risks to the U.S. coastline and issued alerts accordingly.
The layoffs have raised concerns about public safety programs and the capacity to deal with weather disasters, which are growing more frequent with climate change. Last year alone, NOAA recorded 27 weather disasters in the U.S. that each caused at least $1bn in damage, adjusted for inflation; together they killed 568 people, the second-highest death toll since records began in 1980.
Meteorologists are facing challenges and criticism, despite their improving accuracy in predicting weather events. The Trump administration’s decision to cut jobs at NOAA has been met with protests and legal challenges. Experts warn that these cuts threaten progress and could hinder crucial scientific advancements.
NOAA has declined to comment on the layoffs, emphasizing its commitment to providing timely information and resources to the public. Former agency officials argue that the cuts jeopardize public safety, especially during weather emergencies.
Congressional Democrats have also opposed the layoffs, citing the impact on public safety and the ability to provide accurate weather forecasts. The cuts have affected essential roles, such as hurricane modeling specialists and flight directors, who play a vital role in predicting and responding to severe weather events.
The reduction in NOAA’s workforce has sparked concerns about the agency’s ability to effectively respond to upcoming weather seasons, potentially putting lives at risk and undermining public safety efforts.
WASHINGTON – Patrick Montague, a federal investigator of firefighter deaths, was unexpectedly fired by the Trump administration on Saturday night, along with thousands of other Department of Health and Human Services employees. Montague, 46, from Kentucky, had 26 years of experience in firefighting and prevention programs, along with academic training and technical expertise. Despite repeated praise from his supervisors, he was let go before completing his two-year probationary period, with the notice citing allegedly inadequate performance.
Montague was part of a program aimed at reducing the risks firefighters face on duty. Three of the program's five members were fired in the same manner. The sudden layoffs were attributed to billionaire Elon Musk's push to cut federal programs and shrink the government workforce.
The terminations, including Montague's, have raised concerns about the impact on important public safety efforts such as the firefighter fatality investigation and prevention program, created to improve the safety and wellbeing of firefighters across the country.
Edward Kelly, general president of the International Association of Firefighters, emphasized the importance of investing in firefighter safety programs and expressed hope that the Trump administration would prioritize such initiatives.
Layoffs within the National Institute for Occupational Safety and Health have also hit workers responsible for maintaining the national firefighters' cancer registry. The registry, established by a law Trump signed in July 2018, tracks cancer among firefighters in an effort to reduce cancer deaths in the profession.
The disconnect between Trump’s public praise for firefighters and the sudden layoffs of those working on critical firefighter safety programs has left many scratching their heads. Union officials and advocates for fire safety are puzzled by the contradictory actions taken by the administration.
Despite the termination notices citing performance issues, many affected employees, like Patrick Montague, believe that their performance was satisfactory and are baffled by the decision to let them go.
When Congress adjourned for the holidays in December, it left for dead a groundbreaking bill aimed at overhauling how technology companies protect their youngest users. The Kids Online Safety Act (KOSA), introduced in 2022, was meant to be a major reckoning for big tech. Instead, the bill waned and died in the House, despite sailing through the Senate in July on a 91-3 vote.
KOSA was passionately championed by families who say their children fell victim to the harmful policies of social media platforms, and by advocates who argue that a law curbing the unchecked power of big tech is long overdue. They are deeply disappointed that a strong chance to rein in big tech failed through congressional inaction. Human rights groups, however, argued that the law could have had unintended consequences for freedom of speech online.
What is the Kids Online Safety Act?
KOSA was introduced nearly three years ago, in the aftermath of bombshell revelations by the former Facebook employee Frances Haugen about the extent of social media platforms' impact on younger users. It would have required platforms like Instagram and TikTok to mitigate harms to children through design changes and to allow younger users to opt out of algorithmic recommendations.
"This is a basic product liability bill," said Alix Fraser, director of the Council for Responsible Social Media at the advocacy group Issue One. "It's complicated because the internet is complex and social media is complex, but essentially it's just an effort to create basic product safety standards for these companies."
The central and most controversial element of the bill was its "duty of care" clause, which declared that companies have an obligation to act in the best interests of the minors using their platforms – language that would have been open to interpretation by regulators. The bill would also have required platforms to implement measures to reduce harm by establishing "safeguards for minors."
Critics argued that the lack of clear guidance on what constitutes harmful content would push companies to filter content more aggressively, with unintended consequences for free speech. Delicate but important topics such as gun violence and racial justice could be deemed potentially harmful and screened out by the companies themselves. These censorship concerns were especially prominent in the LGBTQ+ community: opponents of KOSA said it could be applied disproportionately by conservative regulators, cutting off access to critical resources.
"With KOSA, we see a well-intentioned but ultimately vague bill that requires online services to take unspecified actions to keep kids safe," said Aliya Bhatia, a policy analyst at the Center for Democracy & Technology, which opposes the law and receives funding from tech donors including Amazon, Google and Microsoft.
The complex history of KOSA
When the bill was first introduced, more than 90 human rights groups signed a letter opposing it, highlighting these and other concerns. In response, the bill's authors published revisions in February 2024, most notably shifting enforcement of the "duty of care" provision from state attorneys general to the Federal Trade Commission. Following these changes, several organizations, including GLAAD, the Human Rights Campaign and the Trevor Project, withdrew their opposition, saying the amendments "significantly mitigate the risk of [KOSA] being misused to suppress LGBTQ+ resources or stifle young people's access to online communities."
Other civil rights groups maintained their opposition, however, including the Electronic Frontier Foundation (EFF), the ACLU and Fight for the Future, calling KOSA a "censorship bill" that would harm vulnerable users and freedom of speech. They argued that the duty-of-care provision could easily be weaponized against LGBTQ+ youth by a conservative FTC chair, as well as by state attorneys general. Those concerns were reinforced by Trump's appointment of the Republican Andrew Ferguson as FTC chair, who said in a leaked statement that he planned to use his role to "fight the trans agenda."
Concerns about how Ferguson will police online content are "exactly what LGBTQ+ youth have been writing and calling Congress about hundreds of times over the past few years of this fight," said Sarah Philips of Fight for the Future. "The situation they feared has come to pass. Anyone who ignores it is really just putting their head in the sand."
Opponents say that even without passing, KOSA has already had a chilling effect on the content available on certain platforms. A recent report by the newsletter User Mag found that hashtags for LGBTQ+-related topics were being classified as "sensitive content" and restricted from search. Bhatia, of the Center for Democracy & Technology, said laws like KOSA fail to account for the complexity of the online landscape and are likely to drive platforms toward preemptive censorship to avoid litigation.
“Children's safety holds an interesting and paradoxical position in technology policy, where children benefit greatly from the internet, as well as vulnerable actors,” she said. . “Using policy blunt instruments to protect them can often lead to consequences that don't really take this into consideration.”
Supporters attribute the backlash against KOSA to aggressive lobbying by the tech industry, though Fight for the Future and the EFF – two of its top opponents – are not backed by big tech donors. The big tech companies themselves were split on KOSA: X, Snap, Microsoft and Pinterest quietly supported the bill, while Meta and Google opposed it.
"KOSA was a very robust bill, but what's more robust is the power of big tech," said Fraser, of Issue One. "They hired all the lobbyists in town to take it down, and they succeeded."
Fraser added that supporters are disappointed KOSA didn't pass but "will not take a break until federal law is passed to protect children online."
The potential revival of KOSA
Beyond Ferguson's appointment as FTC chair, it is unclear what the new composition of the Trump administration and Congress will mean for KOSA's future. Trump has not directly expressed his views on KOSA, but some in his inner circle revealed their support after last-minute amendments to the 2024 bill promoted by Elon Musk's X.
KOSA's death in Congress may seem like the end of a winding and controversial path, but advocates on both sides of the fight say it is too early to write the bill's obituary.
“We shouldn't expect the Kosa to go quietly,” said Prem Trivedi, policy director at the Institute for Open Technology, which opposes Kosa. “Whether it's being reintroduced or seeing if a different incarnation is introduced, it will continue to focus more broadly on online safety for children.”
Senator Richard Blumenthal, who co-authored the bill with Senator Marsha Blackburn, has promised to reintroduce it in a future legislative session, and other defenders of the bill say they won't give up.
"I've worked with a lot of parents who are willing to talk about the worst days of their lives over and over again – in front of lawmakers, in front of staff, in front of the press – because they want something to change," Fraser said. "They don't intend to stop."
A groundbreaking report by AI experts warns that the risk of artificial intelligence systems being used for malicious purposes is rising, and researchers are concerned that competition spurred by DeepSeek and similar startups could escalate safety risks.
Yoshua Bengio, a prominent figure in the AI field, views the progress of China’s DeepSeek startup with apprehension as it challenges the dominance of the United States in the industry.
“This leads to a tighter competition, which is concerning from a safety standpoint,” voiced Bengio.
He cautioned that the race by American companies to overtake DeepSeek and maintain their lead risks squeezing out safety considerations. OpenAI, known for ChatGPT, has already responded by hastening the release of a new virtual assistant to keep up with DeepSeek's advances.
In a wide-ranging discussion of AI safety, Bengio stressed the importance of understanding the implications of the latest safety report on AI. The report, prepared by a group of 96 experts and endorsed by renowned figures such as Geoffrey Hinton, sheds light on the potential misuse of general-purpose AI systems for malicious ends.
One highlighted risk is the development of AI models capable of producing instructions for hazardous substances that go beyond the expertise of human specialists. While such capabilities have potential benefits in medicine, there is also concern about their misuse.
Although AI systems have become more adept at identifying software vulnerabilities independently, the report emphasizes the need for caution in the face of escalating cyber threats orchestrated by hackers.
Additionally, the report discusses the risks associated with technologies such as deepfakes, which can be exploited for fraud, misinformation, and the creation of explicit content.
Furthermore, the report flags the vulnerability of closed-source AI models to security breaches, highlighting the potential for malicious use if not regulated effectively.
In light of recent advances such as OpenAI's o3 model, Bengio underscores the need for thorough risk assessment to understand the evolving landscape of AI capabilities and the risks that accompany them.
While AI innovations hold promise for transforming various industries, there is a looming concern about their potential misuse, particularly by malicious actors seeking to exploit autonomous AI for nefarious purposes.
It is essential to address these risks proactively to mitigate the threats posed by AI developments and ensure that the technology is harnessed for beneficial purposes.
As society navigates the uncertainties surrounding AI advancements, there is a collective responsibility to shape the future trajectory of this transformative technology.