Chinese tech companies promise to combat online hate speech following knife attack

Chinese internet companies have made a commitment to combat “extreme nationalism” online, specifically targeting anti-Japanese sentiment. This decision comes after a tragic incident in Suzhou, where a Chinese woman lost her life while trying to protect a Japanese mother and child.

The leading companies Tencent and NetEase have stated that they will actively investigate and ban users who promote hatred and incite conflict.

A spokesperson for Tencent, the operator of messaging app WeChat, mentioned that the incident in Jiangsu province has garnered significant public attention, with some internet users fueling tensions between China and Japan, leading to a surge in extreme nationalism.

Following the arrest of an unemployed man for the stabbing, which resulted in the death of the Chinese woman who intervened, online reactions have ranged from celebration of her heroism to expressions of nationalist sentiment.

Social media platforms like Weibo and Douyin have highlighted the presence of extreme nationalistic and xenophobic content and are actively working to address these issues. This move marks a significant shift as such sentiments have been prevalent on China’s internet with minimal intervention.

In the wake of the Suzhou tragedy, online users have drawn parallels between xenophobic content online and real-world violence, emphasizing the need for regulation to prevent further incidents. Internet companies have reported removing a substantial amount of illegal content and taking action against violating posts.

Despite the efforts by internet companies, some individuals have criticized the crackdown on anti-Japan content, revealing differing perspectives within the online community. Chinese authorities have labeled the knife attack as an isolated event, in contrast to previous incidents involving foreigners.

Further research by Lin Zhihui

Source: www.theguardian.com

The return of GamerGate’s troubling online misogyny: Was it ever truly gone? | Gaming

A few months ago, I wrote about the consulting firm Sweet Baby Inc., which was at the center of a conspiracy theory: disgruntled gamers on the Steam forums wrongly concluded that the small company was somehow mandating that its games include more diverse characters. The sad but predictable result was a massive amount of targeted harassment against the people who worked at Sweet Baby and the journalists (especially women) who wrote about the company. It was a disturbing echo of Gamergate, the online harassment campaign from a decade ago that began with the wild accusations of a game developer’s vengeful ex-boyfriend.

The lingo has changed a bit over the past decade. Whereas before we were pissed off at “SJWs,” or social justice warriors, now we quibble over a different acronym: DEI (diversity, equity, and inclusion), or just good old-fashioned “woke.” But the sentiment of this group is the same: games are for us, and only us, and if you want games to change, or to tell stories other than the simplistic male-centric power fantasies we grew up with, well, that’s not going to be allowed. We won’t tolerate it. In fact, we’ll actively harass you to try and kick you out of this space altogether.

Unfortunately, the anti-woke “campaign” has shown little sign of letting up in the months since. Led by the usual crew of charlatans, its grievances have included, in no particular order: that Aphrodite, the goddess of love, isn’t attractive enough in Supergiant’s Hades II; that all the female characters in recent game trailers have “square” jaws and “masculine” body types; that a journalist gave the recent PS5 game Stellar Blade (pictured below) a bad review because its heroine is too attractive (note: the reviews weren’t bad, the game has a Metacritic score of 81); that way too many games feature the “DEI haircut” (interpret that as you will); and that Ubisoft has, for some reason, bowed to the dark forces of wokeness by making a protagonist of the upcoming Assassin’s Creed game (pictured above) a Black samurai, a choice with historical evidence behind it. This last claim was amplified by the king of nasty posters himself, Elon Musk, who responded to a tweet about the manufactured outrage with, “DEI kills the arts.”

Assassin’s Creed Shadows executive producer Marc-Alexis Côté addressed Musk’s tweet in an interview with Stephen Totilo of Game File: “Elon Musk is just stoking hatred. A bunch of three-word replies came to mind. The first thing I wanted to do was go back to the X account I deleted and just tweet them back… What Elon is talking about is not the game we’re making. People need to play the game for themselves. And if you don’t agree with what we’re doing within the first 11 minutes and 47 seconds, then we can debate.” Incidentally, the game’s depiction of Yasuke, a Black samurai, has ample historical basis.

Shortly after the end of Summer Game Fest, anti-woke gamers found a new target: an IGN report that credibly and comprehensively uncovered a history of sexism at the studio developing Black Myth: Wukong, the upcoming Planet of the Apes-meets-Sekiro action game. Amazingly, the response has been to attack the woman who wrote the report and to spin up ridiculous conspiracy theories about IGN blackmailing the developer. You could immerse yourself in the astounding nastiness of any one of these manufactured controversies, but in my opinion, it’s just not worth it.

Stellar Blade. Photo: Public Relations

This reactionary underbelly of gaming enthusiast media, mostly based on X and YouTube, doesn’t actually have the slightest influence on how games are made, or what games are made. Look at GamerGate. What has it actually accomplished? There is more diversity in games than there was 10 years ago, not less. In the flurry of trailers and demos at this year’s Summer Game Fest, I saw more non-white male faces and characters than I’ve ever seen in the nearly 20 years I’ve covered games. But they can still make people’s online lives hell for a while. I know that much, because I’ve been there many times.

When Gamergate began, I was running Kotaku’s UK branch, so I had a front-row seat to their harassment tactics, which included sending the nastiest threats imaginable through every online channel available to them, emailing game publishers and my bosses with a record of my supposed professional misconduct and journalistic failures (i.e. writing about video games from a feminist perspective) in an attempt to get me fired, trying to find my and my colleagues’ real addresses, phone numbers and family members (and, once found, posting the details on their subreddits), and creating insane Google docs that drew connections between “SJW” journalists and developers. One of these insane documents featured briefly in a recent Netflix documentary about 4chan; a couple of friends texted me screenshots and asked if I knew I featured in some old “alt-right” conspiracy theory. Unfortunately, I did.

It’s happened several times since then, for a variety of reasons. Dealing with online mobs is unfortunately part of the job for many journalists these days, and for game developers too. As a woman covering video games, I’ve dealt with a variety of harassment over the years, and still wish I didn’t have to write about politics. But I know how awful it feels when they rally against you, especially if it’s the first time. They search Google Images for the most unflattering image of you, use it as a cutout for a YouTube thumbnail image, and rant for 10 minutes over screenshots of your article. They tweet big names in the games industry to get them to publicly discredit you. They turn their followers on you. You can’t help but respond to their manufactured anger with your own authentic anger.

It’s tempting to attack these people endlessly, but anger breeds anger, especially now that you can literally make money posting inflammatory nonsense on X or YouTube. If Gamergate proved anything, it’s that you don’t have to pander to or even listen to the toxic gamers who stoke your anger. That said, I don’t think there has been enough public backlash against this online harassment over the past few months, even as major publishers in the gaming industry have been caught in an online storm over the consultancies they work with, the journalists and pundits who cover them, and even their own developers. Take my word for it: a voice of support goes a long way.

What to Play

Something nasty on an oil rig in the North Sea … Still Wakes the Deep. Photo: Secret Mode

The Chinese Room, whose earlier game Everybody’s Gone to the Rapture became an eerie British classic, has taken a more horror-thriller direction with its latest, Still Wakes the Deep (pictured above). It’s basically The Thing, but set on a creaky, dark, dank rig in the desolate waters of the North Sea, where a crew of workers drilling for oil encounters something much worse.

I’ve played the first few hours, and the attention to detail in its depiction of life on a rig off 1970s Scotland is exceptional, right down to the faded tartan carpets and the lived-in feel of the crew’s dormitories (one guy has National Front leaflets pinned to the wall); I also love the delightfully authentic Scottish dialogue. It’s scary, but its real strength is atmosphere: it’s so well made that you really feel like you’re there, and it’s worth playing just for that feeling of being there.

Available on: PC, Xbox Series S/X, PlayStation 5
Estimated play time: six hours


What to Read

Hi-Fi Rush by Tango Gameworks. Photo: Tango Gameworks
  • Tango Gameworks developer Takeo Kido shared a very sad photo from the studio’s final day. The Hi-Fi Rush creator (pictured above) was acquired by Microsoft in March 2021 and has now been shut down.


  • A really interesting long read: Kotaku’s Kenneth Shepherd on the ongoing debate over how to portray romance in video games. Should characters be “playersexual,” bending to whatever the player wants? Or does that approach lead to two-dimensional characterization, particularly of queer experiences? There’s a lot more that could be said on this topic, but the article is pretty comprehensive, so be sure to read it.

  • The big news from yesterday’s Nintendo Direct was the announcement of a new Zelda title that will actually let you play as Princess Zelda for the first time (no doubt to the delight of those aforementioned online crowds). Also announced were a Marvel vs Capcom bundle, Mario Party Jamboree, a Romancing SaGa 2 remake, and Metroid Prime 4: Beyond, due for release in 2025.

What to click on

Question Block

Subnautica: Below Zero. Photo: Unknown Worlds

Today’s question comes from reader Diana.

“When making a game, to what extent should developers listen to player feedback? People who paid for access to the pre-alpha version on Kickstarter can give their feedback. Should their feedback fundamentally change the game, or should it just improve the game as the developer intended?”

From what I’ve heard from developers working on Kickstarter and Early Access projects, where players are welcomed into a game long before it’s actually finished, player input is absolutely essential – as long as it’s in good faith. Developers can learn so much by seeing how people actually play: finding out where people get stuck and smoothing out the difficulty curve, seeing which elements and ideas players respond most favorably to, balancing online multiplayer, or discovering that players just don’t get an idea at all. That feedback does change the game, and usually for the better. Games like Kerbal Space Program, Subnautica (pictured above), and even Baldur’s Gate 3 benefited greatly from their Early Access periods.

But should developers change their games so much for the players that they compromise their original creative vision? Only if that vision doesn’t work in reality. Games especially are something you never really know are working until quite late in development. Generally, if the developer is smart, the game’s design is pretty much settled by the time it enters Early Access or public alpha/beta testing. At that point, player data and feedback become an opportunity for the developer to better realize their vision.

If you have a question for Question Block, or anything else you’d like to say about the newsletter, please click “Reply” or email us at pushingbuttons@theguardian.com.

Source: www.theguardian.com

X tries to conceal footage of Sydney church stabbing as American users share video online

Social media platform X claims to have followed an Australian Federal Court order to take down footage of the Wakeley church stabbing. However, the footage was still accessible to Australian users as it was posted right below the compliance announcement.

X stated that it complied with the law by “restricting” some posts for Australian users. It argues that the posts should not have been banned in Australia and that the government should not have the power to censor content for users in other countries.

Last week, the eSafety commissioner requested that X remove footage of the attack on Bishop Mar Mari Emmanuel due to its graphic nature.


A federal court on Monday ordered X, previously known as Twitter, to hide posts with video of the Sydney church stabbing from global users. The Australian Federal Police raised concerns in court about the potential use of the video to incite terrorism.

Regulators asked X to remove 65 separate tweets containing videos of the attack.

X’s lawyers argued in court that they had already geo-blocked the posts in Australia, but the eSafety Commissioner insisted this was not sufficient.

Many tweets could still be accessed outside Australia or through VPNs within the country.

The court extended the injunction on Wednesday, ordering the posts to be hidden until May 10, 2024, pending further legal proceedings.

Late on Thursday, X’s Global Government Affairs account stated, “We feel we are complying with the eSafety notice and Australian law by restricting all relevant posts in Australia.”

However, a verified X user based in New Hampshire, US, posted footage of the attack in reply to X’s statement, and it remained visible to Australian users.


X stated on Thursday that they believe the content did not incite violence and should be considered part of public debate, arguing against global content removal demands.

The company opposes government authority to censor online content and believes in respecting each country’s laws within its jurisdiction.

The eSafety Commissioner emphasized the need to minimize harm caused by harmful content online, despite the challenges of completely eradicating it.

Posts including the video in question became inaccessible to some users after inquiries from Guardian Australia.

Federal opposition leader Peter Dutton supported X and Elon Musk, stating that Australia should not act as the internet police and federal law should not dictate global content removal.

X has yet to comment on the situation.

Source: www.theguardian.com

Exploring Online Stalking and Voyeurism: The Women of Manchester | Crime

Maddie Lane and Phoebe Colin were unaware of the cameras recording them as they strolled down a bustling street in Manchester last April.

On a warm spring day, the women sported brightly colored cycling shorts, completely oblivious to being surreptitiously filmed by a person with a device placed below waist level.

Colin expressed her discomfort upon watching the video, stating, “I don’t like it. You can see them zooming in on our butt cheeks.”

The perpetrator boldly shot a high-definition video, capturing them primarily from behind just a few meters away, before moving around to capture their faces, which were unmistakably visible.

Feeling violated, Lane mentioned, “I had no idea they were filming us. We were just wondering, ‘Why didn’t we notice them?’”

What intensified their fear was how they discovered the existence of the video. Lane received an Instagram message from an anonymous sender containing a link to the footage, insinuating, “Hi, is this you?” They proceeded to track down Colin and send her a similar message revealing the video’s online presence.

Lane shared their apprehension, saying, “The fact that they found us on social media was frightening. We still don’t know who they are.”

Despite reporting the incident to the police, there were no consequences. Colin recollected, “They informed us that there was nothing they could do and advised us to reach out if it happened again.”

The perpetrator appears to have filmed the two individuals in this video from close proximity. Photo: Joel Goodman/Guardian

This video is one among several targeting women in tight attire or short dresses, captured without their awareness in various UK towns and cities.

In response to the escalating issue, police urged women to report such incidents, saying they would take a firm stand whenever genuine reports were received from victims or the community.

Recent legislation has equipped the police with enhanced powers to seek stalking protection orders (SPOs) against offenders, aimed at curbing stalking behavior early on by prohibiting perpetrators from certain actions, such as taking photographs of their victims.

The changes announced by the Home Office on the first day of National Stalking Awareness Week enable police to apply for victim protection orders based on civil standards, simplifying the process by eliminating the need for conclusive criminal evidence.

The unsettling experience of Lane and Colin resonates with many women venturing out in Manchester on weekend nights, with similar incidents being common.

At popular venues like Printworks, incidents of secret video recordings have been reported, highlighting the urgent need for action and awareness.

The women at Deansgate, where numerous such videos circulated on social media, expressed concern over the pervasive issue of privacy invasion and objectification.

By sharing their thoughts and experiences, these women emphasized the importance of social change and actively confronting such reprehensible behavior.

Source: www.theguardian.com

Charity warns that UK children are facing a relentless onslaught of gambling advertisements and images online

New research has discovered that despite restrictions on advertising campaigns targeting young people, children are being inundated with gambling promotions and content that resembles gambling while browsing the internet.

The study, commissioned by charity GambleAware and funded by donations from gambling companies, highlights the blurred line between gambling advertising and online casino-style games, leading to a rise in online gambling with children unaware of the associated risks. It warns that gambling advertisements featuring cartoon graphics can strongly attract children. Recently, a gambling company promoted a new online slot game on social media using a cartoon of three frogs to entice players.

GambleAware is recommending new regulations to limit the exposure of young people to advertising. Research conducted by the charity revealed that children struggle to differentiate between actual gambling products and gambling-like content, such as mobile games with in-app purchases.

Zoe Osmond, CEO of GambleAware, emphasized the need for immediate action to protect children from being exposed to gambling ads and content, stating, “This research demonstrates that gambling content has become a part of many children’s lives.”

GambleAware chief executive Zoe Osmond said urgent action on internet promotions was needed to protect children. Photo: Doug Peters/PA

The report also points out that excessive engagement in online games with gambling elements, like loot boxes bought with virtual or real money, can fall under a broader definition of gambling. It calls for stricter regulation on platforms offering such games to children.

Businesses are cautioned against using cartoon characters in gambling promotions, as they may appeal to children. However, there is no outright ban on using such characters. Online casino 32Red, for instance, recently advertised its Fat Frog online slot game on social media with a cartoon frog theme.

Dr. Raffaello Rossi, a marketing lecturer focused on the impact of gambling advertising on youth, criticized regulators for not acting swiftly enough to address the proliferation of online promotions enticing children. He called for new advertising codes to regulate social media promotions effectively.

Skip past newsletter promotions

The Betting and Gaming Council said that its members strictly verify ages for all products and have implemented new age-restriction rules for social media advertising.

Recent data from the Gambling Commission indicates that young people are now less exposed to gambling ads than in previous years, although no direct link between advertising and the development of problem gambling has been established.

The Advertising Standards Authority (ASA) stated that it regulates gambling advertising to safeguard children and monitors online gambling ads through various tools and methods.

The Department for Culture, Media and Sport affirmed its focus on monitoring new forms of gambling and gambling-like products, including social casino games, to ensure appropriate regulations are in place.

Kindred Group, the owner of the 32Red brand, was approached for comment.

Source: www.theguardian.com

Report: Increase in online presence of AI-generated images depicting child sexual abuse | Technology

Child sexual exploitation is increasing online, with artificial intelligence being used to generate new forms of abuse material, including images and videos of child sexual abuse.


Reports of online child abuse to the National Center for Missing & Exploited Children (NCMEC) increased by more than 12% from the previous year to over 36.2 million in 2023, according to the organization’s annual CyberTipline report. Most reports related to the distribution of child sexual abuse material (CSAM), including photos and videos. Online criminals are also enticing children to send nude images and videos for financial gain, with increased reports of blackmail and extortion.

NCMEC has reported instances where children and families have been targeted for financial gain through blackmail using AI-generated CSAM.

The center has received 4,700 reports of child sexual exploitation images and videos created by generative AI, although tracking in this category only began in 2023, according to a spokesperson.

NCMEC is alarmed by the growing trend of malicious actors using artificial intelligence to produce deepfaked sexually explicit images and videos based on real children’s photos, stating that it is devastating for the victims and their families.

The group emphasizes that AI-generated child abuse content hinders the identification of actual child victims and is illegal in the United States, where production of such material is a federal crime.

In 2023, CyberTipline received over 35.9 million reports of suspected CSAM incidents, with most uploads originating outside the US. There was also a significant rise in online solicitation reports and exploitation cases involving communication with children for sexual purposes or abduction.

Top platforms for cybertips included Facebook, Instagram, WhatsApp, Google, Snapchat, TikTok, and Twitter.


Out of 1,600 global companies registered for the CyberTip Reporting Program, 245 submitted reports to NCMEC, including US-based internet service providers required by law to report CSAM incidents to CyberTipline.

NCMEC highlights the importance of quality reports, as some automated reports may not be actionable without human involvement, potentially hindering law enforcement in detecting child abuse cases.

NCMEC’s report stresses the need for continued action by Congress and the tech community to address reporting issues.

Source: www.theguardian.com

U.S. states and big tech companies clash over online child safety bills: Battle lines drawn

On April 6, Maryland passed the first “Kids Code” bill in the US. The bill is designed to protect children from predatory data collection and harmful design features by tech companies. Vermont’s final public hearing on the Kids Code bill took place on April 11th. This bill is part of a series of proposals to address the lack of federal regulations protecting minors online, making state legislatures a battleground. Some Silicon Valley tech companies are concerned that these restrictions could impact business and free speech.

These measures, known as the Age-Appropriate Design Code or Kids Code bill, require enhanced data protection for underage online users and a complete ban on social media for certain age groups. The bill unanimously passed both the Maryland House and Senate.

Nine states, including Maryland, Vermont, Minnesota, Hawaii, Illinois, South Carolina, New Mexico, and Nevada, have introduced bills to improve online safety for children. Minnesota’s bill advanced through a House committee in February.

During public hearings, lawmakers in various states accused tech company lobbyists of deception. Maryland’s bill faced opposition from tech companies who spent $250,000 lobbying against it without success.

Carl Szabo, from the tech industry group NetChoice, testified before the Maryland state Senate as a concerned parent. Lawmakers questioned his ties to the industry during the hearing.

Tech giants have been lobbying in multiple states to pass online safety laws. In Maryland, these companies spent over $243,000 in lobbying fees in 2023. Google, Amazon, and Apple were among the top spenders according to state disclosures.

The bill mandates tech companies to implement measures safeguarding children’s online experiences and assess the privacy implications of their data practices. Companies must also provide clear privacy settings and tools to help children and parents navigate online privacy rights and concerns.

Critics are concerned that the methods used by tech companies to determine children’s ages could lead to privacy violations.

Supporters argue that social media companies should not require identification uploads from users who already have their age information. NetChoice suggests digital literacy education and safety measures as alternatives.

During a discussion on child safety legislation, a NetChoice director emphasized parental control over regulation, citing low adoption rates of parental monitoring tools on platforms like Snapchat and Discord.

NetChoice has proposed bipartisan legislation to enhance child safety online, emphasizing police resources for combating child exploitation. Critics argue that tech companies should be more proactive in ensuring child safety instead of relying solely on parents and children.

Opposition from tech companies has been significant in all state bills, with representatives accused of hiding their affiliations during public hearings on child safety legislation.

State bills are being revised based on lessons learned from California, where similar legislation faced legal challenges and opposition from companies like NetChoice. While some tech companies emphasize parental control and education, critics argue for more accountability from these companies in ensuring child safety online.

Recent scrutiny of Meta products for their negative impact on children’s well-being has raised concerns about the company’s role in online safety. Some industry experts believe that tech companies like Meta should be more transparent and proactive in protecting children online.

Source: www.theguardian.com

Supreme Court to Decide on Government’s Authority on Online Misinformation | Tech

The Supreme Court heard oral arguments on Monday in a case that may have significant implications for the federal government’s relationship with social media companies and online misinformation. The plaintiffs in Murthy v. Missouri claim that the White House’s requests that Twitter and Facebook remove false information about the coronavirus constituted unlawful censorship in violation of the First Amendment.

The discussion began with Brian Fletcher, the principal deputy solicitor general, arguing that the government’s actions did not cross the line from persuasion to coercion. He also disputed the lower court’s portrayal of events in its ruling, calling it misleading or based on quotes taken out of context.

“When the government convinces a private organization not to distribute or promote someone else’s speech, it is not censorship but rather persuading the private organization to act within its legal rights,” stated Fletcher.

The justices, particularly conservatives Samuel Alito and Clarence Thomas, pressed Fletcher on where the distinction lies between coercing and persuading a company. Fletcher defended the government’s actions as part of a broader effort to mitigate harm to the public.

Louisiana’s solicitor general, Benjamin Aguiñaga, argued that the government was covertly pressuring platforms to censor speech, violating the First Amendment. The lawsuit, led by the attorneys general of Louisiana and Missouri, accused the government of infringing on constitutional rights.

Several justices, including liberals Elena Kagan and Sonia Sotomayor, also weighed in on the government’s efforts to address potential harm and the boundaries of the First Amendment. Sotomayor criticized the factual inaccuracies in the plaintiffs’ lawsuit.

Aguiñaga apologized for any shortcomings in the brief and acknowledged that it may not have been as thorough as it should have been.

Source: www.theguardian.com

Ofcom concludes that exposure to violent online content is unavoidable for children in the UK

UK children are now inevitably exposed to violent online content, with many first encountering it while still in primary school, according to a report from the media watchdog.

British children interviewed for the Ofcom investigation reported incidents ranging from videos of local school and street fights shared in group chats to explicit and extreme graphic violence watched online, including gang-related content.

Although children were aware of more extreme content existing on the web, they did not actively seek it out, the report concluded.

In response to the findings, the NSPCC criticized tech platforms for not fulfilling their duty of care to young users.

Rani Govender, a senior policy officer for online child safety, expressed concern that children are now unintentionally exposed to violent content as part of their online experiences, emphasizing the need for action to protect young people.

The study, focusing on families, children, and youth, is part of Ofcom’s preparations for enforcing the Online Safety Act, giving regulators powers to hold social networks accountable for failing to protect users, especially children.

Ofcom’s online safety group director, Gill Whitehead, emphasized that children should not have to regard harmful content, such as violence or the promotion of self-harm, as an inevitable part of their online lives.

The report highlighted that children mentioned major tech companies like Snapchat, Instagram, and WhatsApp as platforms where they encounter violent content most frequently.

Experts raised concerns that exposure to violent content could desensitize children and normalize violence, potentially influencing their behavior offline.

Some social networks faced criticism for allowing graphic violence, with Twitter (now X) under fire for sharing disturbing content that went viral and spurred outrage.

While some platforms offer tools to help children avoid violent content, there are concerns about their effectiveness and children’s reluctance to report such content due to fear of repercussions.

Algorithmic timelines on platforms like TikTok and Instagram have also contributed to the proliferation of violent content, raising concerns about the impact on children’s mental health.

The Children’s Commissioner for England revealed alarming statistics about the waiting times for mental health support among children, highlighting the urgent need for action to protect young people online.

Snapchat emphasized its zero-tolerance policy towards violent content and assured its commitment to working with authorities to address such issues, while Meta declined to comment on the report.

Source: www.theguardian.com

US Legislators Clash Over Strategies to Enhance Online Child Safety | Technology

As historic legislation obtained enough votes to pass in the U.S. Senate, divisions among online child safety advocates have emerged. Some former opponents of the bill have been swayed by amendments and now lend their support. However, its staunchest critics are demanding further changes.

The Kids Online Safety Act (Kosa), introduced over two years ago, garnered 60 supporters in the Senate by mid-February. Despite this, numerous human rights groups continue to vehemently oppose the bill, highlighting the ongoing discord among experts, legislators, and activists over how to ensure the safety of young people in the digital realm.


“The Kids Online Safety Act presents our best chance to tackle the harmful business model of social media, which has resulted in the loss of far too many young lives and contributed to a mental health crisis,” stated Josh Golin, executive director of Fairplay, a children’s online safety organization.

Critics argue that the amendments made to the bill do not sufficiently address their concerns. Aliya Bhatia, a policy analyst at the Center for Democracy and Technology, said, “A one-size-fits-all approach to child safety is insufficient in protecting children. This bill operates on the assumption of a consensus regarding harmful content types and designs, which does not exist. Such a belief hampers the ability of young people to freely engage online, impeding their access to the necessary communities.”

What is the Kids Online Safety Act?

The Kosa bill, spearheaded by Connecticut Democrat Richard Blumenthal and Tennessee Republican Marsha Blackburn, represents a monumental shift in U.S. tech legislation. The bill mandates that platforms like Instagram and TikTok mitigate online risks through changes to their designs and by letting users opt out of algorithm-based recommendations. Enforcement would necessitate more profound changes to social networks than current regulations require.

Initially introduced in 2022, the bill elicited an open letter signed by over 90 human rights organizations vehemently opposing it. The coalition argued that the bill could enable conservative state attorneys general, who would have a say in determining what content is harmful, to restrict online resources and information for LGBTQ+ youth and people seeking reproductive health care. They cautioned that the bill could be exploited for censorship.

Source: www.theguardian.com

Warframe: A Safe Haven for My Son and Many Others in an Online World Full of Toxicity

Six months ago, my son Zach started playing a video game that I knew little about, which, for a games journalist, was a little disconcerting. Warframe is an online science fiction shooter created by Canadian developer Digital Extremes and first released in 2013. Although it’s hardly talked about outside of its fanbase, it has 75 million registered users and is consistently one of the biggest titles on Steam.

Set in a far-future solar system infested with hostile alien forces, players join the side of the Tenno, an ancient warlike race whose primary weapons are barely sentient cybernetic fighters (the warframes of the title). Zach spends hours each day flying between planets, completing missions and exploring while battling enemies such as the brutal clone army known as the Grineer and the giant, disease-ridden Infested. This sounds similar to Destiny, The Division, Final Fantasy XIV Online, and a dozen other so-called live service games that run indefinitely online, with new tasks, locations, and items added all the time. But Warframe captured my son’s attention for one important reason: it has a very friendly and welcoming community.

Zach is on the autism spectrum and, though now 18, still finds it difficult to socialize in the real world. He’s loved games like Minecraft and Fortnite for years, but as he’s gotten older, he’s been drawn into darker, more mature stories and worlds. When I saw that he’d stumbled upon this epic gothic space opera, I was concerned it would lead him into gaming’s less friendly communities: the edgelords, griefers, and Call of Duty fans – the aspiring pro gamers who can make a shooter like this a difficult place for vulnerable people.




A friendlier kind of shooter … Warframe. Photo: Digital Extremes

But in Warframe, the experience was different. The other players were immediately friendly, welcoming, and accommodating. What helped Zach from the beginning was the game’s well-maintained and very lively on-screen chat window, which allows players to ask questions and share tips and experiences without speaking – a huge advantage for neurodivergent players. In-game chat is not uncommon in live service games, but here it is properly moderated and mostly pleasant. Other players did their best to help Zach, finding him rare resources such as argon crystals and escorting him to planets he had not yet unlocked. They also gave him weapons and items. He joined a clan a few weeks ago, has made new friends across the US and Europe, and hangs out with them regularly.

According to Digital Extremes, the studio realized very early in development that building and maintaining a welcoming community was essential. “The community department was one of the first departments on the team,” says creative director Rebecca Ford. She nods in recognition when I tell her how much people have helped my son. “[The in-game chat] is a place where you can say, ‘I have no idea what I’m doing’ or ‘Does anyone have any advice for this build?’ Warframe is a complex, cooperative, hard science fiction world. For us, that channel was essential.”




Rebecca Ford, Creative Director at Digital Extremes. Photo: Digital Extremes

Source: www.theguardian.com

Can Banning Smartphones and Social Media Help Protect Young People from Online Dangers?

The members of the WhatsApp group Smartphone Free Childhood advocate banning under-14s from owning smartphones and preventing under-16s from accessing social media to protect them from the dangers of the internet (“Crazy: Thousands of UK parents join in quest for smartphone-free childhood”, February 17). However, believing this is the solution is unrealistic.

It is a parent’s responsibility to provide a safe environment for their children and teach them how to safely navigate the internet. Just like roads can be dangerous but we don’t ban cars, teaching children internet safety is crucial. Building open and honest relationships and setting boundaries at home will help young people understand internet dangers better than blanket bans. Making social media “adults only” may backfire and make it more tempting for children. They may also be less likely to seek help if they encounter inappropriate content.
Stuart Harrington
Burnham-on-Sea, Somerset

As seen in cases like Brianna Ghey’s, giving children smartphones can have negative consequences. However, we should weigh the benefits and drawbacks of smartphone access. I personally benefited from having a smartphone at school for tasks like using apps for transport, news, and communication. While parental controls and monitoring are essential, smartphones have many positive uses. It is important to adapt to changing online threats and promote more parental supervision.
Oscar Acton
Merton, County Durham


Source: www.theguardian.com

EU initiates probe into TikTok concerning online content and child safety

The EU is launching an investigation into whether TikTok has violated online content regulations, particularly those relating to the safety of children.

The European Commission has officially initiated proceedings against the Chinese-owned short-video platform for potential violations of the Digital Services Act (DSA).

The investigation is focusing on areas such as safeguarding minors, keeping records of advertising content, and determining if algorithms are leading users to harmful content.


Thierry Breton, EU Commissioner for the Internal Market, stated that child safety is the “primary enforcement priority” under the DSA. The investigation particularly focuses on age verification and default privacy settings for children’s accounts.

In September last year, TikTok was fined €345 million in Ireland for violating EU data law in its handling of children’s accounts. Additionally, the UK Information Commissioner fined the company £12.7 million for unlawfully processing data from children under 13.

Companies that violate the DSA can face fines of up to 6% of their global turnover. TikTok is owned by Chinese technology company ByteDance.

TikTok has stated that it is committed to working with experts and the industry to ensure the safety of young people on its platform and is eager to brief the European Commission on its efforts.

The commission is also examining alleged deficiencies in TikTok’s provision of publicly available data to researchers and its compliance with requirements to establish a database of ads shown on the platform.

A deadline for the investigation has not been set and will depend on factors such as the complexity of the case and the degree of cooperation from the companies being investigated.

This investigation of TikTok is the DSA’s second, following a December 2023 formal investigation into Elon Musk’s social media platform X, previously known as Twitter. The case against X focuses on failure to block illegal content and inadequate measures against disinformation.

Apple is reportedly facing a substantial fine from the EU for its conduct in the music streaming app market. The European Commission has been investigating whether the US tech company blocked music streaming services from informing users about cheaper subscription options outside of its own app store.

According to the Financial Times, Brussels plans to fine Apple €500 million, a significant decision following years of complaints from companies offering services through iPhone apps.

Apple was previously fined 1.1 billion euros by France in 2020 for anti-competitive agreements with two wholesalers, a fine that was later reduced by an appeals court.

Big technology companies like Apple and Google have come under increased scrutiny due to competitive concerns. Google is appealing against fines of more than 8 billion euros imposed by the EU in three separate competition investigations.

Apple has successfully defended against a lawsuit by Fortnite developer Epic Games alleging that its app store was an illegal monopoly. In December, Epic won a similar lawsuit against Google.

Last month, Apple announced that it would allow EU customers to download apps without using its own app store, in response to the EU’s digital market law.

Source: www.theguardian.com

eBay to lay off 1,000 employees, online retailer tells staff in letter

eBay, an online retailer, has announced that it will cut around 1,000 roles, which is an estimated 9% of its current workforce. eBay CEO Jamie Iannone stated in a letter to employees, “While we are making progress in line with our strategy, our overall headcount and expenses are outpacing business growth.” He added, “To address this, we are implementing organizational changes to align and integrate certain teams to improve the end-to-end experience and better meet the needs of our customers around the world.”


In addition to the job cuts, the company plans to scale back the number of contracts in its “alternative workforce” in the coming months. Iannone said company managers would notify employees whose roles were being eliminated, and asked all eBay staff to work from home on Wednesday “to ensure space and privacy for conversations.” He added, “We recognize that these actions are not something we take lightly and they impact all eBayers. We must say goodbye to people who have made many important contributions to the eBay community and culture, and this is not an easy task.” Last February, eBay laid off 500 employees, 4% of its worldwide workforce, citing a slowdown in consumer spending after the pandemic-era boom in e-commerce.

The pace of layoffs within Silicon Valley has accelerated recently, with some of the world’s most prominent technology companies instituting large-scale layoff programs in recent months. A memo sent by Google CEO Sundar Pichai earlier this month warned staff that more job cuts could occur this year as the company looks to increase investment in artificial intelligence; Google cut its workforce by 12,000 in early 2023. Mark Zuckerberg’s Meta revealed in March last year that it planned to cut 10,000 jobs from a peak of 87,000 employees in 2022. This month, language learning app Duolingo also cut about 10% of its contract employees as part of a move to increase its reliance on AI.

Amazon cut hundreds of jobs across its streaming platform Twitch and its film and TV studio division in the second week of January. In December, music streaming service Spotify announced plans to cut 17% of its workforce, which equates to about 1,500 fewer employees.

More than 13,000 people at 72 tech companies have been laid off so far this year, according to layoffs.fyi, which tracks job losses in the tech industry.

Source: www.theguardian.com

Meta Report Reveals 100,000 Children Experience Daily Sexual Harassment on Online Platforms

According to an internal document released late Wednesday, Meta estimates that about 100,000 children on Facebook and Instagram are subjected to online sexual harassment every day, including “pictures of adult genitalia.” The unsealed legal filings include several allegations against Meta, based on information the New Mexico Attorney General’s Office learned from presentations and communications between Meta employees. These allegations describe an incident in 2020 in which the 12-year-old daughter of an Apple executive was solicited via Instagram’s messaging product, IG Direct.

In testimony before the US Congress late last year, a senior Meta employee described how his daughter was solicited via Instagram; his efforts to resolve the issue were ignored, he said. Wednesday’s filing is the latest development in a lawsuit filed by the New Mexico Attorney General’s Office on December 5, alleging that Meta’s social networks have become a marketplace for child predators. The state’s attorney general, Raúl Torrez, accused Meta of allowing adults to find, message, and groom children. Meta released a statement in response to the filing, stating, “We want to provide teens with a safe and age-appropriate online experience, and we have over 30 tools to support them and their parents.”

The lawsuit also referenced a 2021 internal presentation on child safety, in which Meta states that it had “underinvested in the sexualization of minors on IG, notably in sexualized comments on content posted by minors.” The complaint also highlights Meta employees’ concerns about the safety of children. Meta’s statement said the company “has taken significant steps to prevent teens from experiencing unwanted contact, especially from adults.”

The New Mexico lawsuit follows a Guardian investigation in April that revealed how Meta failed to report or detect the use of its platforms for child trafficking. According to documents included in the lawsuit, Meta employees warned that traffickers were using its platforms to “coordinate human trafficking operations” and that “every step of human exploitation (recruitment, conditioning, and exploitation)” was represented on them. An internal email from 2017 shows executives opposed scanning Facebook Messenger for “harmful content,” citing the service’s desire to “provide more privacy.” In December, Meta received widespread criticism for introducing end-to-end encryption for messages sent via Facebook and Messenger.

Source: www.theguardian.com

US Police Allegedly Prevented from Viewing Numerous Online Child Sexual Abuse Reports, Lawyers Say

Social media companies relying on artificial intelligence software to moderate their platforms are generating unviable reports of child sexual abuse cases, leaving U.S. police unable to see potential leads and delaying investigations into suspected predators, the Guardian has learned.

By law, U.S.-based social media companies are required to report child sexual abuse content detected on their platforms to the National Center for Missing and Exploited Children (NCMEC), which serves as a national clearinghouse for child abuse information and forwards reports to relevant law enforcement agencies in the United States and around the world. NCMEC said it received more than 32 million reports of suspected child sexual exploitation, comprising approximately 88 million images, videos, and other files, from businesses and the general public in 2022.

Meta is the largest reporter of this information, with over 27 million reports (84%) generated by its Facebook, Instagram, and WhatsApp platforms in 2022. NCMEC is partially funded by the Department of Justice and also receives private corporate donations.

Social media companies, including Meta, use AI to detect and report suspicious content on their sites and employ human moderators to send some flagged content to law enforcement. However, when a report is generated by AI and no human at the company has viewed the material, U.S. law enforcement agencies can only open the reported child sexual abuse material (CSAM) by serving a search warrant on the company that filed the report, which can add days or even weeks to the investigation process.

“If a company reports a file to NCMEC and does not indicate that it viewed the file before reporting, we will not be able to open the file,” said Staca Shehan, vice president of analytical services at NCMEC.

Because of privacy protections under the Fourth Amendment, neither law enforcement officials nor the federally funded NCMEC can open a reported file without a search warrant unless its contents were first reviewed by a representative of the social media company.

Because NCMEC staff and law enforcement agencies cannot legally view the contents of AI-generated reports that no human has reviewed, investigations into suspected predators can stall for weeks, risking the loss of evidence and of connections that might otherwise have been made.

“Any delay [in viewing the evidence] means the longer criminals go undetected, and the more detrimental it is to ensuring community safety,” said an assistant U.S. attorney in California, who spoke on condition of anonymity. “They are dangerous to all children.”

In December, the New Mexico Attorney General’s Office filed a lawsuit against Meta, alleging that its social networks have become a marketplace for child predators and that Meta has repeatedly failed to report illegal activity on its platforms. In response, Meta said its priority was combating child sexual abuse content.

The state attorney general laid the blame for the failure to send actionable information at the feet of Meta. “Reports showing the inefficiency of the company’s AI-generated cyber reporting systems prove what we said in the complaint,” Raúl Torrez said in a statement to the Guardian.

“It is long past time for Meta to implement the reforms, staffing levels, policies, and algorithmic changes needed to ensure the safety of children, keep parents informed, and enable law enforcement to effectively investigate and prosecute online sex crimes against children,” Torrez added.

Despite the legal limitations on moderation AI, social media companies are likely to increase its use in the near future. In 2023, OpenAI, the developer of ChatGPT, claimed that large language models can do the job of human content moderators with roughly the same accuracy.

However, child safety experts say the AI software social media companies use to moderate content is effective only at identifying known child sexual abuse images – those whose digital fingerprints, known as hashes, are already on record. Lawyers interviewed said AI is ineffective against newly created images, or when known images or videos have been altered.

“There is always concern about cases involving newly identified victims, and because they are new, the materials do not have a hash value,” said Kristina Korobov, senior attorney at the Zero Abuse Project, a nonprofit organization focused on combating child abuse. “If humans were doing the work, there would be more discoveries of newly identified victims.”
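To illustrate the limitation the experts describe, here is a minimal sketch of hash-based matching. Real deployments use perceptual hashes (such as Microsoft’s PhotoDNA) that tolerate small edits, along with vetted hash databases; this toy version uses an exact cryptographic hash, and the hash set and file paths are hypothetical:

```python
import hashlib

# Hypothetical stand-in for a vetted database of hashes of known,
# previously identified abuse images.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_hash(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known(path: str) -> bool:
    """Flag a file only if its exact hash is already on record."""
    return file_hash(path) in KNOWN_HASHES

# A newly created image has no hash on record, and altering a known
# image changes its hash entirely, so both slip past this check --
# which is exactly the gap the experts quoted above are describing.
```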

In the US, call or text the Childhelp abuse hotline on 800-422-4453 or visit its website for more resources and to report child abuse or seek help. For adult survivors of child abuse, support is available at ascasupport.org. In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000. The National Association for People Abused in Childhood (Napac) offers support for adult survivors on 0808 801 0331. In Australia, children, young people, parents and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831; adult survivors can contact the Blue Knot Foundation on 1300 657 380. Other sources of help can be found at Child Helpline International.

Source: www.theguardian.com

Apple halts online sales of Watch Series 9 and Ultra 2

As promised, Apple has officially removed the Watch Series 9 from its online shop. News of the surprise move came earlier this week, when the company admitted that an ongoing patent dispute had forced it to temporarily suspend sales of its flagship smartwatch. When you click to visit the site, instead of a “Buy” button you’ll see the words “Currently Unavailable.”

The Apple Watch Ultra 2 is similarly unavailable. However, you can still purchase the entry-level Apple Watch SE, likely because of that product’s relatively limited onboard health metrics. You can also get the Series 9 through other online outlets; Amazon, for example, still promises pre-Christmas delivery in some areas.

The wearable device will continue to be available at brick-and-mortar Apple Stores until Christmas Eve. If you’ve already ordered the watch online and want to pick it up in-store, you can still do so until December 24th, the company confirmed to TechCrunch.

The patent battle between Apple and health tech company Masimo has been well documented over the past few years. But even though Apple lost some important rulings, Monday’s announcement still came as a surprise to many given its unprecedented nature.

In a statement provided to TechCrunch earlier this week, the company said it intends to continue to contest the decision.

A presidential review period is under way on an order from the U.S. International Trade Commission stemming from a technical intellectual property dispute over Apple Watch devices with blood oxygen capabilities. The review period doesn’t end until December 25th, but Apple is taking preemptive steps to comply should the ruling stand. This includes suspending sales of the Apple Watch Series 9 and Apple Watch Ultra 2 on Apple.com starting December 21st, and at Apple retail stores starting December 24th.

Apple’s teams work hard to develop products and services that provide industry-leading health, wellness, and safety features for our users. Apple strongly opposes this order and is pursuing various legal and technical options to ensure customers have access to Apple Watch.

If this order remains in effect, Apple will continue to take all steps to return Apple Watch Series 9 and Apple Watch Ultra 2 to U.S. customers as quickly as possible.

At the heart of the battle is an optical sensor used to monitor the wearer’s blood oxygen levels. Apple adopted similar technology back in 2020 with the introduction of the Series 6. Among other things, Masimo accused the hardware giant of poaching key talent, claiming Apple “began hiring Masimo employees, starting with Masimo’s Chief Medical Officer.”

There’s never a perfect time to stop selling your biggest products, but the holiday season is especially problematic. Apple hasn’t been able to keep sales open, so some people may be opening rain checks this year.

Source: techcrunch.com

EU Identifies Three Porn Sites Subject to Stricter Online Content Regulations

Age verification technology could be heading to adult content sites after three of them – Pornhub, XVideos, and Stripchat – were added to the list of platforms subject to the most stringent level of regulation under the European Union’s Digital Services Act (DSA).

Back in April, the EU announced an initial list of 17 so-called Very Large Online Platforms (VLOPs) and two Very Large Online Search Engines (VLOSEs) designated under the DSA. The initial list did not include adult content sites. The addition of the three platforms specified today changes that.

According to Wikipedia – which, ironically, was itself named a VLOP in the first wave of commission designations – XVideos and Pornhub are the world’s No. 1 and No. 2 most-visited adult content sites. Stripchat, meanwhile, is an adult webcam platform that live-streams nude performers.

None of the three services currently requires visitors to undergo a strict age check (i.e. age verification rather than self-declaration) before accessing content, but as a result of the designation that looks set to change.

The pan-EU regulation imposes additional obligations on designated (large) platforms with an average monthly user base of more than 45 million people in the region, including obligations to protect minors. As the commission put it in a press release today [emphasis ours]: “VLOPs must design their services, including interfaces, recommender systems, and terms and conditions, to address and prevent risks to the wellbeing of children. Mitigation measures to protect the rights of the child, and prevent minors from accessing pornographic content online, must be put in place, including with age verification tools.”

The European Commission, which is responsible for overseeing VLOPs’ compliance with the DSA, today reiterated that creating a safer online environment for children is an enforcement priority.

Other DSA obligations for VLOPs include producing a risk assessment report on the “specific systemic risks” their services may pose in relation to the dissemination of illegal content and content that threatens fundamental rights. The report must first be shared with the commission and then published.

They must also apply mitigation measures to address the risks associated with the online dissemination of illegal content, such as child sexual abuse material (CSAM), and content that affects fundamental rights such as human dignity and the right to a private life – for example, the non-consensual sharing of intimate content or deepfake pornography online.

“These measures may include, among other things, adaptations to terms of use, interfaces, moderation processes, algorithms, etc.,” the Commission notes.

The three adult platforms designated as VLOPs have four months to bring their services into compliance with the additional DSA requirements. That gives them until late April to make the necessary changes, such as rolling out age verification technology.

“The commission services will closely monitor the compliance of these platforms with the DSA obligations, in particular regarding measures to protect minors from harmful content and to combat the dissemination of illegal content,” the EU said, adding that it would “work closely with the newly designated platforms to ensure these are appropriately addressed.”

The DSA also contains a set of more broadly applicable general obligations that apply not only to VLOPs but to smaller digital services as well – for example, ensuring systems are designed to deliver a high level of privacy, safety, and protection for minors, and promptly notifying law enforcement authorities on becoming aware of information that raises suspicion of a criminal offense involving a threat to a person’s life or safety, including cases of child sexual abuse. The compliance deadline for these general requirements arrives slightly earlier, on February 17, 2024.

The DSA applies across the EU and EEA (European Economic Area), though post-Brexit this no longer includes the UK. This autumn, however, the UK government passed its own Online Safety Act (OSA), establishing communications regulator Ofcom as the country’s internet content watchdog and introducing a system of harsher penalties for breaches than the EU’s: OSA fines can amount to up to 10% of global annual turnover, versus up to 6% under the EU’s DSA.

UK law also focuses on child protection. Recent Ofcom guidance for porn sites, aimed at helping them comply with new legal obligations to prevent minors from encountering adult content online, states that “highly effective” age checks must be conducted, and further specifies that such checks cannot be age gates that simply ask users to self-declare that they are 18 years of age or older.

Methods on Ofcom’s list of age verification technologies deemed acceptable in the UK include asking porn site users to upload a copy of their passport to verify their age, showing their face to a webcam for an AI age estimation, or signing in via open banking to prove they are not a minor.

Source: techcrunch.com