Transatlantic Social Media Clash: Impact of UK Online Safety Laws on Internet Safety

The UK’s new online safety laws are generating considerable attention. As worries intensify about the accessibility of harmful online content, regulations have been instituted to hold social media platforms accountable.

However, just days after their implementation, novel strategies for ensuring children’s safety online have sparked discussions in both the UK and the US.

Recently, Nigel Farage, leader of the populist Reform UK party, found himself in a heated exchange with a Labour government minister after announcing his intent to repeal the law.

In parallel, Republicans convened with British lawmakers and the communications regulator Ofcom. The ramifications of the new law are also keenly observed in Australia, where plans are afoot to prohibit social media usage for those under 16.

Experts note that the law embodies a tension between swiftly eliminating harmful content and preserving freedom of speech.

Senior Reform UK figure Zia Yusuf also weighed in on the dispute.

Responding to criticism of the UK legislation, technology secretary Peter Kyle remarked, “If individuals like Jimmy Savile were alive today, they would still commit crimes online, and Nigel Farage claims to be on their side.”

Kyle was referring to measures in the law intended to help shield children from grooming via messaging apps. Farage condemned the technology secretary’s comments as “unpleasant” and demanded an apology, which is unlikely to be forthcoming.

“It’s below the belt to suggest we would do anything to assist individuals like Jimmy Savile in causing harm,” Farage added.

Farage’s are not the only concerns raised about the law. US Vice President JD Vance claimed that freedom of speech in the UK is “in retreat.” Last week, Republican Rep. Jim Jordan, a critic of the legislation, led a group of US lawmakers in discussions with Kyle and Ofcom regarding the law.

Jordan labeled the law as “UK online censorship legislation” and criticized Ofcom for imposing regulations that “target” and “harass” American companies. A bipartisan delegation also visited Brussels to explore the Digital Services Act, the EU’s counterpart to the online safety law.

Scott Fitzgerald, a Republican member of the delegation, noted the White House would be keen to hear the group’s findings.

Worries from the Trump administration have even extended to threats of visa restrictions against Ofcom and EU personnel. In May, the State Department announced it would block entry to the US for “foreigners censoring Americans.” Ofcom has expressed a desire for “clarity” regarding the planned visa restrictions.

The intersection of free speech concerns with economic interests is notable. Major tech platforms including Google, YouTube, Facebook, Instagram, WhatsApp, Snapchat, and X are all based in the US and may face fines of up to £18 million or 10% of global revenue for violations. For Meta, the parent company of Instagram, Facebook, and WhatsApp, this could result in fines reaching $16 billion (£11 billion).
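The fine arithmetic above can be sketched in a few lines. This is an illustration only, assuming the Act’s formula is “the greater of £18m or 10% of qualifying worldwide revenue”; the round £110bn revenue figure (roughly Meta’s scale, consistent with the ~£11bn figure quoted) is an assumption for scale, not a reported number:

```python
# Illustrative sketch of the Online Safety Act's maximum penalty:
# the greater of GBP 18m or 10% of qualifying worldwide revenue.

def max_osa_fine_gbp(global_revenue_gbp: float) -> float:
    """Return the statutory maximum fine in GBP:
    the greater of GBP 18m or 10% of worldwide revenue."""
    return max(18_000_000, 0.10 * global_revenue_gbp)

# Assumed revenue of roughly GBP 110bn for a company of Meta's scale,
# consistent with the ~GBP 11bn maximum quoted above:
print(max_osa_fine_gbp(110e9))   # 10% of revenue dominates -> GBP 11bn
print(max_osa_fine_gbp(50e6))    # small firm: the GBP 18m floor applies
```

For a small platform, 10% of revenue falls below £18m, so the fixed floor becomes the binding maximum.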

On Friday, X, the social media platform owned by self-proclaimed free speech advocate Elon Musk, issued a statement opposing the law, warning that it could “seriously infringe” on free speech.

Signs of public backlash are evident in the UK. A petition calling for the law’s repeal has garnered over 480,000 signatures, making it eligible for consideration for a debate in Parliament, and it has been shared on social media by far-right activist Tommy Robinson.

Tim Bale, professor of politics at Queen Mary University of London, is skeptical that the law will become a major voting issue.

“No petition or protest has significant traction for most people. While this resonates strongly with those online—on both the right and left—it won’t sway a large portion of the general populace,” he said.

According to a recent Ipsos Mori poll, three out of four UK parents are worried about their children’s online activities.

Beeban Kidron, a British peer and prominent advocate for online child safety, told the Guardian that she is “more than willing to engage Nigel Farage and his colleagues on this issue.”


“If companies are targeting their algorithms at children, why would Reform want to leave children in the hands of big tech?”

The UK’s new under-18 protections, which prompted the latest controversy, mandate age verification on adult sites to prevent underage access. There are also measures to protect children from content that endorses suicide, self-harm, and eating disorders, and to curtail the circulation of material that incites hatred or promotes harmful substances and dangerous challenges.

Some content appears to have been flagged as violating these regulations despite being age-appropriate. In an article for the Daily Telegraph, Farage alleged that footage of anti-immigrant protests had been “censored,” along with content related to the Rotherham grooming gang scandal.

Such instances were observed on X, which flagged a speech by Conservative MP Katie Lam regarding the UK’s child grooming scandal. The content was labeled with a notice stating that “local laws temporarily restrict access to this content until X verifies the user’s age.” The Guardian could not access the age verification service on X, suggesting that, until age checks are fully operational, the platform defaults many users to a child-friendly experience.

X was contacted for comment regarding age checks.

On Reddit, forums dedicated to alcohol abuse support and pet care will implement age checks before granting access. A Reddit spokesperson confirmed that the checks are required under the online safety law to limit content that is illegal or harmful to users under 18.

Big Brother Watch, an organization focused on civil liberties and privacy, said the examples from Reddit and X exemplify the overreach of the new legislation.

An Ofcom representative stated that the law aims to protect children from harmful and criminal content while safeguarding free speech: “There is no requirement to limit legal content accessible to adult users.”

Mark Jones, a partner at London law firm Payne Hicks Beach, cautioned that social media platforms, anxious to comply with their obligations to remove illegal material or content harmful to children, might over-censor legitimate content.

He added that disputes over Ofcom’s handling of content are likely to prove contentious, given the pressure to address harmful content quickly while respecting freedom of speech.

“To effectively curb the spread of harmful or illegal content, decisions must be made promptly; however, that urgency can lead to incorrect choices. Such is the reality we face.”

The latest measures under the online safety law are only the beginning.

Source: www.theguardian.com

Astronomers witness the split of dark and regular matter in the clash of two galaxy clusters

The two galaxy clusters, known as MACS J0018.5+1626, contain thousands of galaxies each and are located billions of light-years away from Earth. As the clusters hurtled towards each other, dark matter traveled faster than normal matter.

This artist's conceptual illustration shows what happened when two massive clusters of galaxies, collectively known as MACS J0018.5+1626, collided. The dark matter (blue) in the clusters moves ahead of the associated hot gas clouds, or regular matter (orange). Both dark matter and regular matter feel the pull of gravity, but only the regular matter experiences additional effects like shocks and turbulence that slow it down during the collision. Image courtesy of W. M. Keck Observatory/Adam Makarenko.

Galaxy cluster mergers are a rich source of information for testing the astrophysics and cosmology of galaxy clusters.

However, cluster mergers produce complex, projected signals that are difficult to interpret physically from any single observational probe.

“Imagine dark matter as sand carried in a series of colliding dump trucks: in the collision, the sand flies forward ahead of the trucks,” says astronomer Emily Silich of the California Institute of Technology and the Harvard-Smithsonian Center for Astrophysics.

This separation of dark matter and normal matter has been observed before, most famously in the Bullet Cluster.

In this collision, hot gas can be clearly seen lagging behind dark matter after the two galaxy clusters push through each other.

The situation that occurred in MACS J0018.5+1626 is similar, but the direction of the merger is rotated about 90 degrees relative to the direction of the Bullet Cluster.

In other words, one of the giant galaxy clusters in MACS J0018.5+1626 is flying almost straight towards Earth, while the other is moving away.

This orientation gave the researchers a unique perspective to map the speeds of both dark and normal matter for the first time, and unravel how they separate during galaxy cluster collisions.

“The Bullet Cluster is like sitting in the stands watching a car race, taking beautiful snapshots of cars moving from left to right on a straight stretch of road,” said Jack Sayers, a professor at the California Institute of Technology.

“For us, it's like standing in front of an oncoming car on a straight stretch of road with a radar gun and measuring its speed.”

To measure the velocity of ordinary matter, or gas, in galaxy clusters, the astronomers used an observational technique known as the kinetic Sunyaev-Zel'dovich (SZ) effect.

In 2013, they made the first observational detection of the kinetic SZ effect on an individual cosmic object, a galaxy cluster named MACS J0717.

The kinetic SZ effect occurs when photons from the early universe, or the cosmic microwave background radiation (CMB), are scattered by electrons in hot gas on their way to Earth.

Photons undergo a shift called the Doppler shift due to the movement of electrons in the gas cloud along the line of sight.

By measuring the change in brightness of the CMB due to this shift, astronomers can determine the speed of the gas clouds within the cluster.
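The relation described above can be sketched numerically. This is a minimal illustration assuming the standard first-order kinetic SZ formula, ΔT/T = -τ(v/c), where τ is the electron-scattering optical depth; the τ = 0.01 value below is a hypothetical round number for scale, not a figure from the study:

```python
# Sketch of the kinetic SZ effect: the fractional change in CMB temperature
# caused by hot cluster gas moving along the line of sight.
C_KM_S = 299_792.458  # speed of light in km/s

def ksz_fractional_shift(v_los_km_s: float, tau: float) -> float:
    """Fractional CMB temperature change, dT/T = -tau * (v_los / c).
    v_los > 0 means gas receding from the observer (a temperature decrement)."""
    return -tau * (v_los_km_s / C_KM_S)

# Gas receding at 3,000 km/s (about 1% of light speed) with an assumed
# optical depth of 0.01 gives a decrement of roughly one part in 10,000:
shift = ksz_fractional_shift(3000.0, 0.01)
print(f"{shift:.2e}")
```

Against the CMB’s ~2.7 K, a shift of that size is tens of microkelvin, which is why such measurements demand very sensitive millimeter-wave instruments.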

By 2019, the study authors had made these kinetic SZ measurements in several galaxy clusters to determine the velocity of the gas, or ordinary matter.

They also measured the speeds of galaxies within the clusters, which gave them an indirect measure of the speed of the dark matter.

However, at this stage of the study, the researchers’ understanding of the clusters’ orientations was limited.

All they knew was that one of them, MACS J0018.5+1626, was showing signs of something strange going on: hot gas, or regular matter, moving in the opposite direction to dark matter.

“We saw a totally strange phenomenon where the velocities were in opposite directions, which initially made us think there might be a problem with the data,” Prof Sayers said.

“Even our colleagues simulating galaxy clusters had no idea what was going on.”

The scientists then used data from NASA's Chandra X-ray Observatory to determine the temperature and location of the gas in the cluster, as well as the extent to which it had been shock-heated.

“These cluster collisions are the most energetic events since the Big Bang,” Silich said.

“Chandra measures the extreme temperatures of the gas, which tells us the age of the merger and how recently the galaxy cluster collision took place.”

The authors found that before the collision, the clusters were moving towards each other at about 3,000 kilometers per second, roughly 1 percent of the speed of light.

With a more complete picture of what's going on, they were able to work out why dark matter and normal matter appear to be moving in opposite directions.

They say it's hard to visualize, but the direction of the collision, combined with the fact that dark matter and normal matter separated from each other, explains the strange speed measurements.

It is hoped that more studies like this one will be conducted in the future, providing new clues about the mysterious properties of dark matter.

“This work is a starting point for more detailed studies into the nature of dark matter,” Silich said.

“We now have a new type of direct probe that shows us how dark matter behaves differently from ordinary matter.”

The results were published in the Astrophysical Journal.

_____

Emily M. Silich et al. 2024. ICM-SHOX. I. Methodology overview and discovery of gas-dark matter velocity separation in the MACS J0018.5+1626 merger. ApJ 968, 74; doi: 10.3847/1538-4357/ad3fb5

This article is a version of a press release provided by Caltech.

Source: www.sci.news

U.S. states and big tech companies clash over online child safety bills: Battle lines drawn

On April 6, Maryland passed the first “Kids Code” bill in the US. The bill is designed to protect children from predatory data collection and harmful design features by tech companies. Vermont’s final public hearing on the Kids Code bill took place on April 11th. This bill is part of a series of proposals to address the lack of federal regulations protecting minors online, making state legislatures a battleground. Some Silicon Valley tech companies are concerned that these restrictions could impact business and free speech.

These measures, known as the Age-Appropriate Design Code or Kids Code bill, require enhanced data protection for underage online users and a complete ban on social media for certain age groups. The bill unanimously passed both the Maryland House and Senate.

Nine states, including Maryland, Vermont, Minnesota, Hawaii, Illinois, South Carolina, New Mexico, and Nevada, have introduced bills to improve online safety for children. Minnesota’s bill advanced through a House committee in February.

During public hearings, lawmakers in various states accused tech company lobbyists of deception. Maryland’s bill faced opposition from tech companies who spent $250,000 lobbying against it without success.

Carl Szabo, from the tech industry group NetChoice, testified before the Maryland state Senate as a concerned parent. Lawmakers questioned his ties to the industry during the hearing.

Tech giants have been lobbying in multiple states to pass online safety laws. In Maryland, these companies spent over $243,000 in lobbying fees in 2023. Google, Amazon, and Apple were among the top spenders according to state disclosures.

The bill mandates tech companies to implement measures safeguarding children’s online experiences and assess the privacy implications of their data practices. Companies must also provide clear privacy settings and tools to help children and parents navigate online privacy rights and concerns.

Critics are concerned that the methods used by tech companies to determine children’s ages could lead to privacy violations.

Supporters argue that social media companies should not require identification uploads from users who already have their age information. NetChoice suggests digital literacy education and safety measures as alternatives.

During a discussion on child safety legislation, a NetChoice director emphasized parental control over regulation, citing low adoption rates of parental monitoring tools on platforms like Snapchat and Discord.

NetChoice has proposed bipartisan legislation to enhance child safety online, emphasizing police resources for combating child exploitation. Critics argue that tech companies should be more proactive in ensuring child safety instead of relying solely on parents and children.

Opposition from tech companies has been significant in all state bills, with representatives accused of hiding their affiliations during public hearings on child safety legislation.

State bills are being revised based on lessons learned from California, where similar legislation faced legal challenges and opposition from companies like NetChoice. While some tech companies emphasize parental control and education, critics argue for more accountability from these companies in ensuring child safety online.

Recent scrutiny of Meta products for their negative impact on children’s well-being has raised concerns about the company’s role in online safety. Some industry experts believe that tech companies like Meta should be more transparent and proactive in protecting children online.

Source: www.theguardian.com

US Legislators Clash Over Strategies to Enhance Online Child Safety

As historic legislation secured enough votes to pass in the U.S. Senate, divisions among online child safety advocates have emerged. Some former opponents of the bill have been swayed by amendments and now lend their support. However, its staunchest critics are demanding further changes.

The Kids Online Safety Act (Kosa), introduced over two years ago, garnered 60 supporters in the Senate by mid-February. Despite this, numerous human rights groups continue to vehemently oppose the bill, highlighting the ongoing discord among experts, legislators, and activists over how to ensure the safety of young people in the digital realm.


“The Kids Online Safety Act presents our best chance to tackle the harmful business model of social media, which has resulted in the loss of far too many young lives and contributed to a mental health crisis,” stated Josh Golin, executive director of Fairplay, a children’s online safety organization.

Critics argue that the amendments made to the bill do not sufficiently address their concerns. Aliya Bhatia, a policy analyst at the Center for Democracy and Technology, said: “A one-size-fits-all approach to child safety is insufficient in protecting children. This bill operates on the assumption of a consensus regarding harmful content types and designs, which does not exist. Such a belief hampers the ability of young people to freely engage online, impeding their access to the communities they need.”

What is the Kids Online Safety Act?

The Kosa bill, spearheaded by Connecticut Democrat Richard Blumenthal and Tennessee Republican Marsha Blackburn, represents a monumental shift in U.S. tech legislation. It would mandate that platforms like Instagram and TikTok mitigate online risks through changes to their designs and by letting users opt out of algorithm-based recommendations. Enforcement would necessitate more profound changes to social networks than current regulations require.

Initially introduced in 2022, the bill elicited an open letter signed by over 90 human rights organizations vehemently opposing it. The coalition argued that the bill could enable conservative state attorneys general, who would decide what content counts as harmful, to restrict online resources and information for LGBTQ+ youth and people seeking reproductive health care. They cautioned that the bill could be exploited for censorship.

Source: www.theguardian.com

Twitter War Erupts as Bot Armies Clash over China’s Balloon Incident

A Chinese reconnaissance balloon flies over Canada and the United States shortly before being shot down over South Carolina in 2023

Joe Granita/ZUMA Presswire/Shutterstock

When Chinese reconnaissance balloons flew over the United States and Canada last year, tens of thousands of bots competed to shape the discussion on Twitter, according to an analysis of social media posts.

Kathleen Carley and Lynette Hui Xiang Ng, researchers at Carnegie Mellon University in Pennsylvania, tracked approximately 1.2 million tweets posted by more than 120,000 users on Twitter (later renamed X) between January 31 and February 22, 2023. All of the tweets included the hashtags #chineseballoon or #weatherballoon and discussed the controversial craft, which the US claimed China used for espionage.

The accounts were then geolocated using Twitter's location features and checked with an algorithm called BotHunter, which looks for signs that an account is not managed by a human.

“There are many things that go into [identifying a bot]. Examples are whether messages are being sent so fast that a human could literally not type that quickly, or whether an account is geotagged in London one minute and in New Zealand the next, when a person could not physically travel that fast,” says Carley.
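The heuristics Carley describes can be sketched in a few lines. This is an illustrative toy check only, not the actual BotHunter algorithm; the thresholds, the post format, and the function names are all assumptions made for the example:

```python
# Toy bot heuristics: implausible typing speed and impossible travel
# between consecutive geotagged posts. Thresholds are assumed values.
from math import radians, sin, cos, asin, sqrt

MAX_HUMAN_CHARS_PER_SEC = 10    # assumed ceiling on human typing speed
MAX_TRAVEL_SPEED_KM_H = 1000    # faster than this between geotags is suspect

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two geotags, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def looks_like_bot(posts):
    """posts: list of (timestamp_sec, text, lat, lon), sorted by time."""
    for (t1, _, la1, lo1), (t2, text2, la2, lo2) in zip(posts, posts[1:]):
        dt = max(t2 - t1, 1e-6)  # seconds between consecutive posts
        if len(text2) / dt > MAX_HUMAN_CHARS_PER_SEC:
            return True  # message typed impossibly fast
        if haversine_km(la1, lo1, la2, lo2) / (dt / 3600) > MAX_TRAVEL_SPEED_KM_H:
            return True  # geotag jumped faster than any human could travel
    return False

# A London-to-Auckland jump within one minute flags the account:
posts = [(0, "hello", 51.5, -0.1), (60, "hi", -36.8, 174.8)]
print(looks_like_bot(posts))  # True
```

Real bot detectors combine many such signals (posting cadence, network structure, account metadata) rather than relying on any single rule.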

Researchers found that about 35 percent of geotagged users in the U.S. exhibited bot-like behavior, while 65 percent appeared to be human. In China, the proportions were reversed, with 64 percent bots and 36 percent humans.

Of the accounts claiming to be from neither country, 42 percent were bots and 58 percent were human. Although reliable numbers are difficult to come by, previous research suggests that 10 to 20 percent of Twitter users are bots. Bots autonomously perform tasks such as sending Twitter messages to users on the platform and “liking” other posts, and they are often used to try to influence public opinion.

“We're seeing more bot activity in tweets that appear to be coming from Chinese communities than from American communities,” Carley said. The overall rate of bot accounts was also higher during the Chinese spy balloon debate than during other events, the researchers said.

As an example, a bot based in China posted: “#USA #China #February 14th […] Some may speculate that the US is using the #China spyballoon ‘excuse’ to escalate tensions with #Beijing. Remember, US airspace is highly controlled and we have more accurate satellite technology for spying.”

Neither Carley nor Ng would speculate on who might be behind the bots. Stephen Buckley, a researcher at City, University of London, said: “There is likely to be a mix of state and individual actors seeking to sway and manipulate public opinion regarding breaking news.”

As for whether the bot activity made a difference, Carley said: “The fact that bots speak a little differently from humans means what people were reading will look a little different, and future conversations will look different.” Buckley says it is important to be “very wary” of online content and not to assume it was posted by a human.


Source: www.newscientist.com