Ofcom Calls on Social Media Platforms to Combat Fraud and Curb Online ‘Pile-Ons’

New guidelines have urged social media platforms to curtail internet “pile-ons” to better safeguard women and girls online.

Ofcom, Britain’s communications regulator, issued guidance on Tuesday aimed at tackling misogynistic abuse, coercive control, and the non-consensual sharing of intimate images, with a focus on minimizing online harassment of women.

The measures suggest tech companies limit the number of replies to posts on platforms like X, a strategy Ofcom believes will reduce incidents where individual users are inundated with abusive responses.


Additional measures proposed by Ofcom include utilizing databases of images to prevent the non-consensual sharing of intimate photos—often referred to as ‘revenge porn’.

The regulator advocates “hash matching” technology that helps platforms remove such images. The system converts user-reported images or videos into “hashes”, or digital fingerprints, and cross-references them against a database of known illegal content, enabling copies of harmful images to be identified and removed.
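
As a minimal illustration of the idea, the Python sketch below reduces an image to a fixed digital fingerprint and checks it against a database of known hashes. Everything here is hypothetical: the function names and sample database are invented, and real systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, rather than the exact cryptographic hash used here for simplicity.

```python
# Sketch only: exact-match hashing stands in for the perceptual
# hashing that production hash-matching systems use.
import hashlib

# Hypothetical database of hashes of known non-consensual images.
KNOWN_ABUSE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_hash(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-length digital fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_remove(uploaded_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches known illegal content."""
    return image_hash(uploaded_bytes) in KNOWN_ABUSE_HASHES
```

One appeal of the approach is that platforms never need to store or exchange the offending images themselves, only their fingerprints.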

These recommendations were put forth under the Online Safety Act (OSA), a significant law designed to shield children and adults from harmful online content.

While the advice is not obligatory, Ofcom is urging social media companies to follow it, announcing plans to release a report in 2027 assessing individual platforms’ responses to the guidelines.

The regulator indicated that the OSA could be reinforced if the recommendations are not acted upon or prove ineffective.

“If their actions fall short, we will consider formally advising the government on necessary enhancements to online safety laws,” Ofcom stated.

Dame Melanie Dawes, Ofcom’s chief executive, said she had heard “shocking” reports of online abuse directed at women and girls.


Melanie Dawes, Ofcom’s chief executive. Photo: Zuma Press Inc/Alamy

“We are sending a definitive message to tech companies to adhere to practical industry guidance that aims to protect women from the genuine online threats they face today,” Dawes stated. “With ongoing support from our campaigners, advocacy groups, and expert partners, we will hold companies accountable and establish new benchmarks for online safety for women and girls in the UK.”

Ofcom’s other recommendations include prompts asking users to reconsider before posting abusive content, “time-outs” for frequent offenders, and preventing misogynistic users from generating ad revenue from their posts. The guidance also recommends letting users swiftly block or mute several accounts at once.

These recommendations conclude a process that started in February, when Ofcom ran a consultation that included the hash-matching proposal. However, more than a dozen of the guidelines, such as the “rate limits” on posts, are new.
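
To make the “rate limit” idea concrete, here is a hedged Python sketch of one way a platform might cap replies to a single post within a time window. The class name, thresholds, and behavior are illustrative assumptions, not anything Ofcom or any platform has specified.

```python
# Sliding-window cap on replies to one post: a crude brake on pile-ons.
import time
from collections import deque

class ReplyRateLimiter:
    def __init__(self, max_replies: int = 100, window_seconds: float = 3600):
        self.max_replies = max_replies
        self.window = window_seconds
        self._replies: dict[str, deque] = {}  # post_id -> reply timestamps

    def allow_reply(self, post_id: str) -> bool:
        now = time.monotonic()
        q = self._replies.setdefault(post_id, deque())
        # Discard replies that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_replies:
            return False  # post is rate-limited; hide or queue the reply
        q.append(now)
        return True
```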

Internet Matters, a nonprofit organization dedicated to children’s online safety, argued that the government should make the guidance mandatory, cautioning that many tech companies might otherwise ignore it. Ofcom is considering whether to make the hash-matching recommendation a binding requirement.

Rachel Huggins, co-chief executive of Internet Matters, remarked: “We know many companies will disregard this guidance simply because it is not legally binding, leading to continued unacceptable levels of online harm faced by women and girls today.”

Source: www.theguardian.com

Roblox Controversy: Experts and MPs Urge Online Gaming Platforms to Embrace Australia’s Under-16 Social Media Ban

Pressure is mounting on the federal government to tackle the dangers children face on the widely used gaming platform Roblox, following a Guardian Australia report detailing a week of incidents involving virtual sexual harassment and violence.

While posing as an eight-year-old girl, the reporter encountered sexualized avatars and was subjected to cyberbullying, acts of violence, sexual assault, and inappropriate language, despite having parental control settings in place.

From December 10, platforms including Instagram, Snapchat, YouTube, and Kick will be under Australia’s social media ban preventing Australians under 16 from holding social media accounts, yet Roblox will not be included.

Independent MP Monique Ryan labeled the exclusion “inexplicable”. She remarked: “Online gaming platforms like Roblox expose children to unlimited gambling, cloned social media apps, and explicit content.”

At a press conference on Wednesday, eSafety Commissioner Julie Inman Grant stated that platforms would be examined based on their “singular and essential purpose.”

“Kids engaging with Roblox currently utilize chat features and messaging for online gameplay,” she noted. “If online gameplay were to vanish, would kids still use the messaging feature? Likely not.”


“If these platforms start introducing features that align them more with social media companies rather than online gaming ones, we will attempt to intervene.”

According to government regulations, services primarily allowing users to play online games with others are not classified as age-restricted social media platforms.


Nonetheless, some critics believe that this approach is too narrow for a platform that integrates gameplay with social connectivity. Nyusha Shafiabadi, an associate professor of information technology at Australian Catholic University, asserts that Roblox should also fall under the ban.

She highlighted that the platform enables players to create content and communicate with one another. “It functions like a restricted social media platform,” she observed.

Independent MP Nicolette Boele urged the government to rethink its stance. “If the government’s restrictions bar certain apps while leaving platforms like Roblox, which has been called a ‘pedophile hellscape’, untouched, we will fail to safeguard children and drive them into more perilous and less regulated environments,” she remarked.

A spokesperson for the communications minister, Anika Wells, said that excluding Roblox from the teen social media ban does not mean it is free from accountability under the Online Safety Act.

A representative from eSafety stated, “We can extract crucial safety measures from Roblox that shield children from various harms, including online grooming and sexual coercion.”

eSafety said that by the year’s end, Roblox will roll out enhanced age assurance technology, restrict adults from contacting children without explicit parental consent, and set accounts to private by default for users under 16.

“Children under 16 who enable chat through age estimation will no longer be permitted to chat with adults. Alongside current protections for those under 13, we will also introduce parental controls allowing parents to disable chat for users between 13 and 15,” the spokesperson elaborated.

Should entities like Roblox not comply with child safety regulations, authorities have enforcement capabilities, including fines of up to $49.5 million.


eSafety stated it will “carefully oversee Roblox’s adherence to these commitments and assess regulatory measures in the case of future infractions.”

Joanne Orlando, an expert on digital wellbeing at Western Sydney University, pointed out that Roblox’s primary safety issues are grooming threats and the increasing monetization aimed at children on “the world’s largest game”.

She mentioned that it is misleading to view it solely as a video game. “It’s far more significant. There are extensive social layers, and a vast array of individuals on that platform,” she observed.

Greens senator and communications spokesperson Sarah Hanson-Young criticized the government for “playing whack-a-mole” with the social media ban.

“We want major technology companies to assume responsibility for the safety of children, irrespective of age,” she emphasized.

“We need to strike at these companies where it truly impacts them. That’s part of their business model, and governments hesitate to act.”

Shadow communications minister Melissa McIntosh also expressed her concerns about the platform. She stated that while Roblox has introduced enhanced safety measures, “parents must remain vigilant to guard their children online.”

“The eSafety Commissioner and the government carry the responsibility to do everything within their power to protect children from the escalating menace posed by online predators,” she said.

A representative from Roblox stated that the platform is “dedicated to pioneering safety through stringent policies that surpass those of other platforms.”

“We utilize AI to scrutinize games for violating content prior to publication, we prohibit users from sharing images or videos in chats, and we implement sophisticated text filters designed to prevent children from disclosing personal information,” they elaborated.

Source: www.theguardian.com

Amazon Web Services Outage Disrupts Global Platforms, Shows “Signs of Recovery”

A significant internet disruption has impacted numerous websites and applications globally, with users experiencing difficulties connecting to the internet due to issues with Amazon’s cloud computing service.

The affected services include Snapchat, Roblox, Signal, and Duolingo, along with various Amazon-owned enterprises, including major retail platforms and the Ring doorbell company.

In the UK, Lloyds Bank and its associated brands, Halifax and Bank of Scotland, were impacted, and users also reported difficulty accessing the HM Revenue & Customs website on Monday morning. Additionally, Ring users in the UK reported non-functioning doorbells on social media.

In the UK alone, there were tens of thousands of reports concerning issues with individual applications across various platforms. Other affected services include Wordle, Coinbase, Slack, Pokémon Go, Epic Games, PlayStation Network, and Peloton.

By 10:30am UK time, Amazon indicated that the issues, which began around 8am, were being addressed, with AWS showing “significant signs of recovery”. At 11am, it confirmed that global services linked to US-EAST-1 had also been restored.

Amazon reported that the problems originated from Amazon Web Services on the East Coast of the U.S. AWS, which is a division providing essential web infrastructure and renting out server space, is the largest cloud computing platform worldwide.

Shortly after midnight Pacific time (8am BST), Amazon acknowledged “increased error rates and latencies” for its AWS services in the US East Coast region. The issue appears to have caused a worldwide ripple effect, with the outage-tracking site Downdetector logging problems from multiple continents.

Cisco’s ThousandEyes service, which tracks internet outages, reported a surge in problems on Monday morning, particularly in Virginia, where Amazon’s US-EAST-1 region is based, matching the start time AWS confirmed.

Rafe Pilling, director of threat intelligence at the cybersecurity firm Sophos, stated that the outage appears to be an IT-related issue rather than a cyberattack. AWS’s health dashboard identified problems with DynamoDB, a database service many websites rely on to access their data.

“During events like this, it’s natural for concerns of a cyber incident to arise,” he noted. “Given AWS’s extensive and complex footprint, any issue can trigger considerable disruption. It appears that this incident originates from an IT problem on the database side, which AWS prioritizes resolving promptly.”

Dr. Corinne Cath-Speth, head of digital at the human rights organization Article 19, pointed out that the outage underscores the risks of concentrating digital infrastructure in the hands of a few providers.

“There is an urgent need to diversify cloud computing. The infrastructure supporting democratic discourse, independent journalism, and secure communication should not rely solely on a handful of companies,” she stated.

The British government reported that it was in touch with Amazon concerning the internet disruption on Monday.

A spokesperson remarked: “We are aware of an incident affecting Amazon Web Services and several online services dependent on its infrastructure. Through our established incident response structure, we are in communication and working to restore services as quickly as possible.”

Source: www.theguardian.com

Signs of Trouble: Preventing Counterfeit Scams on Vinted and Other Resale Platforms

Maheen was thrilled to discover a new Dyson Airwrap listed on the resale website Vinted for an attractive £260. The seller had a stellar 5-star rating, and she felt confident in the buyer protection policy should any issues arise.

Airwraps are typically priced between £400 and £480 when bought new, but Maheen didn’t suspect anything amiss. “I’ve used Vinted numerous times and found it straightforward and hassle-free. I’ve never faced any problems,” she states.

However, after two weeks and roughly four uses, she noticed a troubling sign. “I saw the wires beginning to smoke, and the device felt unsafe,” she explains. Maheen reached out to Dyson and was instructed to send the Airwrap in for inspection.

The news confirmed her worst fears. “I received a letter from [Dyson] informing me that the product is counterfeit. They wouldn’t return it to me as it posed a danger,” she shares.

Maheen’s experience is not isolated. Almost 37% of individuals in the UK have encountered scams while engaging in online marketplaces like Facebook Marketplace, eBay, and Vinted, according to research by credit reference agency Experian.

Younger individuals are particularly prone to this type of fraud, with over half (58%) of Gen Z respondents indicating they have fallen victim to scams, contrasted with only 16% of those older than 55.

Nearly a quarter of victims reported losses ranging from £51 to £100, while 13% faced losses exceeding £250. A small fraction indicated that their losses fell between £501 and £1,000.

The most prevalent type of fraud encountered was receiving counterfeit goods (34%), the same fate that befell Maheen. This was followed by sellers requesting payment outside the platform (31%) and items not being delivered after payment (22%).

Scam Scene

It may appear to be a genuine product, with descriptions providing a convincing facade. Over half (51%) of fraud victims told Experian that they only realized they were scammed after the item was delivered and was found to be fake, or if the item never arrived.

The images might be sourced from other websites, potentially low-resolution or resembling catalog photos.

The price could be set lower than expected; if you begin asking questions, the seller may rush you into making a purchase and propose payment outside of the Vinted platform.

What to Do

Always diligently review the seller’s profile and read customer feedback before making any purchases on the marketplace. Aim to gather as much information as possible regarding the product prior to buying. For instance, request sellers to provide videos of their items. To safeguard yourself, utilize secure payment methods and refrain from making bank transfers.

In the unfortunate event of a scam, report it to the marketplace and seek a refund. You may need to provide a screenshot of the conversation, details about the seller or buyer, and potentially bank transfer documentation.

Although Maheen’s two-day buyer protection period on Vinted had elapsed, she believed she would reclaim her money since the item was hazardous. Nevertheless, she found it “incredibly difficult to communicate with them.”

She remarks: “It felt like I was conversing with a bot.”

With assistance from Guardian Money, she has now received her refund.

A representative from Vinted stated: “The vast majority of transactions on Vinted are successful, and our team is actively working to ensure a smooth trading experience for all members.”

“When disputes occur between buyers and sellers, we collaborate closely with our shipping partners, occasionally seeking further information or evidence to mediate before reaching a final decision.”

If appealing directly to the marketplace is unsuccessful, there are alternative steps you can take.

If you used a debit card, consider requesting a chargeback from your bank. If you paid by credit card, explore the option of a Section 75 claim, which applies to purchases over £100 (and up to £30,000). For bank transfers, the process may be more complex, but you could be eligible for reimbursement under the rules covering authorised push payment (APP) fraud.

Source: www.theguardian.com

Warning: Far-Right Extremists Using Gaming Platforms to Radicalize Teens

A new report indicates that far-right extremists are leveraging livestream gaming platforms to recruit and radicalize teenagers.

Recent research published in the journal Frontiers in Psychology reveals how various extremist groups are utilizing chats and livestreams during video games to attract and radicalize mainly young men and vulnerable users.

UK counter-terrorism and crime agencies are urging parents to remain vigilant, as online predators specifically target young people during the summer break.

In an unprecedented step, last week, the counter-terrorism police, MI5, and the National Crime Agency issued a joint alert to parents and guardians that online perpetrators would “exploit school holidays to engage in criminal activities with young people when they know that less support is readily available.”

Dr. William Allchorn, a senior researcher at the International Policing and Public Protection Research Institute at Anglia Ruskin University, who conducted the study with his colleague Dr. Elisa Orofino, said these “game-adjacent” platforms are being used as a “digital playground” for extremist activities.


Allchorn has found that extremists have intentionally redirected teenagers from mainstream social media platforms to these gaming sites.

The most prevalent ideology among extremist users was far-right, which glorifies extreme violence and shares content related to school shootings.

Felix Winter, who threatened to carry out a mass shooting at a school in Edinburgh, was sentenced on Tuesday to six years after the court heard that the 18-year-old had been “radicalized” online and had spent over 1,000 hours interacting with a pro-Nazi group.

Allchorn noted a significant increase in coordinated efforts by far-right groups such as Patriotic Alternative to recruit youth through gaming events during lockdown. Since then, as many extremist factions have been pushed off mainstream platforms, individuals have instead been operating within public groups or channels on Facebook and Discord.

He further explained that younger users might gravitate towards extreme content for its shock value among peers, which could render them susceptible to being targeted.

Extremists have had to adapt their methods, as most platforms have banned them, Allchorn said. “We consulted with local community safety teams, and they emphasized the importance of building trust rather than overtly promoting ideologies.”

The research also drew on discussions with moderators, who expressed concerns about inconsistent enforcement policies on the platforms and the burden of deciding whether to report certain content or users to law enforcement.

While in-game chats are not specifically moderated, moderators reported being overwhelmed by the sheer volume and complexity of harmful content, including the use of coded symbols to bypass automated moderation tools.

Allchorn emphasized the importance of digital literacy for parents and law enforcement so they may better grasp how these platforms and their subcultures function.

Last October, MI5’s head Ken McCallum revealed that “13% of all individuals being investigated by MI5 for terrorism-related activities in the UK are under the age of 18.”

AI tools are employed to assist in moderation but often struggle with interpreting memes or when language is unclear or sarcastic.

Source: www.theguardian.com

Enforcement of Australia’s Social Media Ban for Users Under 16: Which Platforms Are Exempt?

Australians using social media platforms such as Facebook, Instagram, YouTube, Snapchat, and X will need to prove they are over 16 years old ahead of the upcoming social media ban set to commence in early December.


Beginning December 10th, new regulations will come into effect for platforms defined by the government as “age-restricted social media platforms.” These platforms are intended primarily for social interactions involving two or more users, enabling users to share content on the service.

The government has not specified which platforms are included in the ban, implying that any site fitting the above criteria may be affected unless it qualifies for the exemptions announced on Wednesday.

Prime Minister Anthony Albanese noted that platforms covered by these rules include, but aren’t limited to, Facebook, Instagram, X, Snapchat, and YouTube.

Communications minister Anika Wells indicated that platforms are expected to deactivate accounts held by users under 16 and take reasonable steps to verify users’ ages, prevent younger people from creating new accounts, and stop them from bypassing the restrictions.


What is an Exemption?

According to the government, a platform will be exempt if its primary purpose is something other than social interaction, such as:

  • Messaging, email, voice, or video calling.

  • Playing online games.

  • Sharing information about products or services.

  • Professional networking or development.

  • Education.

  • Health.

  • Communication between educational institutions and students or their families.

  • Facilitating communication between healthcare providers and their service users.

Determinations regarding which platforms meet the exemption criteria will be made by the eSafety Commissioner.

In practice, this suggests that platforms such as LinkedIn, WhatsApp, Roblox, and Coursera may qualify for exemptions if assessed accordingly. LinkedIn has previously argued that its service holds little appeal for children.


Hypothetically, platforms like YouTube Kids could be exempt from the ban if they satisfy the exemption criteria, particularly as comments are disabled on those videos. Nonetheless, the government has yet to provide confirmation, and YouTube has not indicated if it intends to seek exemptions for child-focused services.


What About Other Platforms?

Platforms not named by the government and that do not meet the exemption criteria should consider implementing age verification mechanisms by December. This includes services like Bluesky, Donald Trump’s Truth Social, Discord, and Twitch.


How Will Tech Companies Verify Users Are Over 16?

A common misunderstanding about the social media ban is that it only affects children. To keep under-16s off social media, platforms will have to verify the age of all account holders in Australia.

There are no specific requirements for how verification should be conducted, but updates from the Age Assurance Technology Trial will provide guidance.

The government has stipulated that identity checks can be offered as one form of age verification, but they cannot be the only method accepted.

Australia is likely to adopt an approach for age verification comparable to that of the UK, initiated in July. This could include options such as:

  • Asking users to consent to their bank or mobile provider confirming that they are over the required age.

  • Requesting users to upload a photo to match with their ID.

  • Employing facial age estimation techniques.

Moreover, platforms may estimate a user’s age based on account behavior or the age of the account itself. For instance, anyone who registered a Facebook account in 2009 is necessarily over 16 now. YouTube has also indicated plans to use artificial intelligence to estimate users’ ages.
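
As a toy illustration of that last inference, the sketch below derives a lower bound on a user’s age from the account’s creation date; the function is hypothetical, and real systems would combine this with many weaker behavioral signals.

```python
# An account can be no younger than its holder: a 2009 Facebook
# account implies its owner is at least 16 in 2025.
from datetime import date

def minimum_age(account_created: date, today: date) -> int:
    """Lower bound on the account holder's age, in whole years."""
    years = today.year - account_created.year
    if (today.month, today.day) < (account_created.month, account_created.day):
        years -= 1  # the account's "birthday" hasn't passed yet this year
    return years

assert minimum_age(date(2009, 6, 1), date(2025, 12, 10)) >= 16
```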


Will Kids Find Workarounds?

Albanese likened the social media ban to alcohol restrictions, acknowledging that while some children may circumvent the ban, he affirmed that it is still a worthwhile endeavor.

In the UK, where age verification requirements for accessing adult websites were implemented this week, there has been a spike in the use of virtual private networks (VPNs) that conceal users’ actual locations, granting access to blocked sites.

Four of the top five free apps in the UK Apple App Store on Thursday were VPN applications, with the most widely used one, Proton, reporting an 1,800% increase in downloads.


The Australian government expects platforms to implement “reasonable measures” to address how teenagers attempt to evade the ban.


What Happens If a Site Does Not Comply With the Ban?

Platforms failing to take what the eSafety commissioner deems “reasonable steps” to prevent children from accessing their services may incur fines of up to $49.5 million, determined by the federal court.

What counts as “reasonable steps” will be assessed by the commissioner. Asked on Wednesday, Wells said: “I believe a reasonable step is relative.”

“These guidelines are meant to work, and any mistakes should be rectified. They aren’t absolute settings or rules, but frameworks to guide the process globally.”


Source: www.theguardian.com

How Platforms Are Lobbying for Exemptions From Australia’s Teen Social Media Ban

The Australian government is rapidly identifying which social media platforms will face restrictions for users under 16.

Social Services Minister Tanya Plibersek stated on Monday that the government “will not be intimidated by the actions of social media giants.” Nevertheless, tech companies are vigorously advocating for exemptions from the law set to take effect in December.

Here’s what social media companies are doing to support their case:


Meta

The parent company of Facebook and Instagram has introduced new Instagram teen account settings to signal its commitment to teenage safety on the platform.

Recently, Meta revealed new protections that aim to make direct messages safer by automatically blurring nude images and adding blocking features.

Additionally, Meta hosted a “Screen Smart” safety event in Sydney targeted at “Parent Creators,” led by Sarah Harris.


YouTube

YouTube’s approach is even more assertive. Last year, then communications minister Michelle Rowland suggested the platform would be exempt from the social media restrictions.

However, last month, the eSafety commissioner advised the government to reconsider this exemption, citing research indicating that children often encounter harmful material on YouTube.

Since then, the company has escalated its lobbying efforts, including full-page advertisements claiming YouTube is for “everyone”, alongside a letter sent to communications minister Anika Wells warning of a potential high court challenge if YouTube is subjected to the ban.


YouTube advertisement campaign opposing social media restrictions set to commence in December. Photo: Michael Karendiane/Guardian

As reported by Guardian Australia last month, Google is hosting its annual showcase at Parliament House on Wednesday. There, content creators, including children’s musicians, who oppose the YouTube ban are likely to make their views known to politicians.

Last year’s event featured the Wiggles, who met with Rowland. That meeting was referenced in a letter YouTube’s global CEO, Neal Mohan, sent to Rowland last year requesting the exemption; within 48 hours, the exemption was promised.

Guardian Australia reported last week that YouTube met with Wells this month for an in-person discussion regarding the ban.

TikTok


Screenshots from TikTok’s advertisements highlighting its benefits for teenagers. Photo: TikTok

This month, TikTok is running ads on its platform as well as on Meta channels, promoting educational benefits for teens on vertical video platforms.


“The 1.7m #fishtok videos encourage outdoor activities in exchange for screen time,” the advertisement states, a nod to the government’s argument that the ban would promote time spent outdoors. “They are developing culinary skills through cooking videos that have garnered over 13m views,” it continues.

“A third of users visit the STEM feed weekly to foster learning,” another ad claims.

Snapchat


Screenshot of Snapchat’s educational video about signs of grooming featuring Lambros army. Photo: Snapchat

Snapchat emphasizes user safety. In May, Guardian Australia reported on an instance involving an 11-year-old girl who added random users as part of a competition with her friend for high scores on the app.

This month, Snapchat announced a partnership with the Australian Federal Police-led Australian Centre to address child exploitation through a series of educational videos shared by various Australian influencers, along with advertisements advising parents and teens on identifying grooming and sextortion.

“Ensuring safety within the Snapchat community has always been our top priority, and collaborating closely with law enforcement and safety experts is crucial to that effort,” stated Ryan Ferguson, Australia’s Managing Director at Snap.

The platform has also pointed to its existing account settings for users aged 13-17, including private accounts by default and chat warnings when communicating with people who have no mutual friends or are absent from contact lists.

Thus far, the government seems unyielding.

“It is undeniable that young people’s mental health has been adversely affected due to social media engagement, prompting the government’s actions,” Prime Minister Anthony Albanese told ABC’s Insiders on Sunday.

“I will meet again with individuals who have faced tragedy this week… one concern expressed by some social media companies is our leadership on this matter, and we take pride in effectively confronting these threats.”

Source: www.theguardian.com

YouTube Hits Back at Push to Include Platform in Australia’s Under-16 Social Media Ban

YouTube has accused the nation’s online safety regulator of sidelining parents and educators by advocating for the platform’s inclusion in the proposed social media restriction for users under 16.

The eSafety commissioner, Julie Inman Grant, has called on the government to reconsider its decision to exclude the video-sharing platform from the age restrictions that will apply to apps like TikTok, Snapchat, and Instagram.

In response, YouTube insists the government should adhere to the draft regulations and disregard Inman Grant’s recommendations.

“The current stance from the eSafety Commissioner offers inconsistent and contradictory guidance by attempting to ban previously acknowledged concerns,” remarked Rachel Lord, YouTube’s public policy and government relations manager.

“eSafety’s advice overlooks the perspectives of Australian families, educators, the wider community, and the government’s own conclusions.”

Inman Grant highlighted in her National Press Club address on Tuesday that the proposed age limits for social media would be termed “delays” rather than outright “bans,” and are scheduled to take effect in mid-December. However, details on how age verification will be implemented for social media users remain unclear, though Australians should brace for a “waterfall of tools and techniques.”

Guardian Australia reported that various social media platforms have voiced concerns over their lack of clarity regarding legal obligations, expressing skepticism about the feasibility of developing age verification systems within six months of the impending deadline.

Inman Grant pointed out that age verification should occur on individual platforms rather than at the device or App Store level, noting that many social media platforms are already utilizing methods to assess or confirm user ages. She mentioned the need for platforms to update eSafety on their progress in utilizing these tools effectively to ensure the removal of underage users.


Nevertheless, Inman Grant acknowledged the system’s imperfections. “I’ll be the first to acknowledge that companies may not get it right. These technologies won’t solve everything, but using them in conjunction can lead to a greater rate of success.”

“The social media restrictions aren’t a panacea, but they introduce some friction into the system. This pioneering legislation aims to reduce harm for parents and caregivers and shifts the responsibility back to the companies themselves,” Inman Grant stated.

“We regard large tech firms as akin to an extraction industry. Australia is calling on these businesses to provide the safety measures and support we expect from nearly every other consumer industry.”

YouTube has pointed to the draft rules outlined by former communications minister Michelle Rowland, which included specific exemptions for resources such as the Kids Helpline and Google Classroom to preserve children’s access to educational and health support.

Communications minister Anika Wells is expected to decide on the commissioner’s recommendations on the draft rules within weeks, according to a federal source.


YouTube emphasized that its service focuses on video viewing and streaming rather than social interaction.

The company asserted that it leads the industry in creating age-appropriate products and addressing potential threats, and denied making policy changes that would adversely affect younger users. YouTube reported removing over 192,000 videos for violating its hate speech and abuse policies in the first quarter of 2025 alone, and it has developed a product specifically designed for young children.

Lord urged the government to maintain a consistent stance by keeping YouTube exempt from the restrictions.

“The eSafety advice contradicts the government’s own commitments, its research into community sentiment, independent studies, and perspectives from key stakeholders involved in this matter.”

Shadow communications minister Melissa McIntosh emphasized the need for clarity from the government regarding the forthcoming reforms.

“The government must clarify the expectations placed on social media platforms and families to safeguard children from prevalent online negativity,” she asserted.

“There are more questions than answers regarding this matter, including which verification techniques platforms will need to adopt to implement the minimum social media age standard by December 10, 2025.”

Source: www.theguardian.com

Competition regulator probes Apple and Google’s mobile platforms in the UK

The UK’s competition watchdog is set to investigate the impact of Apple and Google’s mobile platforms on consumers and businesses, following criticism over the appointment of a former tech executive as its new chair.

The Competition and Markets Authority (CMA) will look into the tech giants’ mobile operating systems, app stores, and browsers to determine if specific guidelines are needed to regulate their behavior.

This inquiry comes after Doug Gurr, a former Amazon UK country manager, was appointed as the CMA chair, with the government denying any bias towards big tech companies.

The investigation will focus on how Google and Apple’s mobile platforms impact consumers, businesses, and app developers, as most smartphones in the UK come with pre-installed iOS or Android operating systems.

The CMA will assess whether Google and Apple should be designated as companies with “strategic market status” under the new Digital Markets, Competition and Consumers Act (DMCC). If so designated, the CMA could impose regulatory requirements or mandate changes to promote competition on their platforms.

Sarah Cardell, CEO of the CMA, emphasized the importance of mobile platforms as gateways to the digital world and highlighted the potential for a more competitive ecosystem to drive innovation and growth.

The CMA aims to complete its investigation by October 22nd, in line with its focus on ensuring consistent regulations that support economic growth and competition.

Both Apple and Google have expressed readiness to cooperate with the CMA and reiterated their commitment to fostering choice and opportunity for consumers and businesses while complying with regulations.

Source: www.theguardian.com

Ofcom demands social media platforms adhere to online safety laws

Social media platforms are required to take action to comply with UK online safety laws, but they have not yet implemented all the necessary measures to protect children and adults from harmful content, according to the regulator.

Ofcom has issued codes of practice and guidance for tech companies to follow in order to comply with the law, with the possibility of hefty fines and site closures for non-compliance.

Regulators have pointed out that many of the recommended actions have not been taken by the largest and most high-risk platforms.

Jon Higham, Ofcom’s director of online safety policy, stated: “We believe that no company has fully implemented all necessary measures. There is still a lot of work to be done.”

All websites and apps covered by the law, including Facebook, Google, Reddit, and OnlyFans, have three months to assess the risk of illegal content appearing on their platforms. Safety measures must then be implemented to address these risks starting on March 17, with Ofcom monitoring progress.

The law applies to sites and apps that allow user-generated content, as well as large search engines, covering more than 100,000 online services. It lists 130 “priority crimes”, including child sexual abuse, terrorism, and fraud, which tech companies need to address through their moderation systems.

The new regulations and guidelines are considered the most significant changes to online safety policy in history according to Technology Secretary Peter Kyle. Tech companies will now be required to proactively remove illegal content, with the risk of heavy fines and potential site blocking in the UK for non-compliance.

Ofcom’s code and guidance include designating a senior executive responsible for compliance, maintaining a well-staffed moderation team to swiftly remove illegal content, and improving algorithms to prevent the spread of harmful material.

Platforms are also expected to provide easy-to-find tools for reporting content, with a confirmation of receipt and timeline for addressing complaints. They should offer users the ability to block accounts, disable comments, and implement automated systems to detect child sexual abuse material.

Child safety campaigners have expressed concerns that the measures outlined by Ofcom do not go far enough, particularly in addressing suicide-related content and in allowing platforms such as WhatsApp to forgo removing illegal content where doing so is not “technically feasible”.

In addition to addressing fraud on social media, platforms will need to establish reporting channels for instances of fraud with law enforcement agencies. They will also work on developing crisis response procedures for events like the summer riots following the Southport murders.

Source: www.theguardian.com

UK Bill Could Mandate Social Media Platforms to Develop Less Addictive Content for Under-16s

Legislation backed by Labour, the Conservatives, and child protection experts would require social media companies to exclude under-16s from algorithms designed to make content addictive. The new Safer Phones Bill, introduced by a Labour MP, also prioritizes reviewing mobile phone sales to teenagers and potentially implementing additional safeguards for under-16s. Health Secretary Wes Streeting voiced support for the bill, citing the negative impact of smartphone addiction on children’s mental health.

The bill, championed by Labour MP Josh MacAlister, is receiving positive feedback from ministers, although there is hesitation around banning mobile phone sales to teens. With backing from former Conservative education secretary Kit Malthouse and education select committee chair Helen Hayes, the bill aims to address concerns about children’s excessive screen time and exposure to harmful content.

MacAlister’s bill, which focuses on protecting children from online dangers, will be debated this week. It includes measures to raise the internet age of majority to 16 and to give Ofcom regulatory powers over children’s online safety. The proposed legislation has garnered support from various stakeholders, including former children’s minister Claire Coutinho and children’s charities.

Concerns about the impact of smartphones on children’s well-being have prompted calls for stricter regulations on access to addictive online content. While Prime Minister Keir Starmer is against a blanket ban on mobile phones for under-16s, there are ongoing discussions about how to ensure children’s safety online without restricting necessary access to technology.

The bill aims to regulate online platforms and mobile phone sales to protect young people from harmful content and addiction. MacAlister’s efforts to promote children’s digital wellbeing have garnered significant support from policymakers and child welfare advocates.

As the government considers the implications of the bill and the Online Safety Act, which is currently pending full implementation, efforts to protect children from online risks continue to gain momentum. It remains crucial to strike a balance between enabling technology access and safeguarding children from potential online harms.

Source: www.theguardian.com

Did former Twitter users find what they were seeking on alternative platforms after quitting the app?

“This week has felt like sitting on a half-empty train early in the morning as gradually more people board with horror stories of how awful the service is on the other line,” actor David Harewood wrote on Threads, Meta’s Twitter/X rival, which, judging by the number of “Hey, how does this work?” questions from newcomers, seems to be seeing an influx of new users, at least in the UK, following last week’s far-right riots.

Newcomers to Threads might be wondering why it took so long. To say Elon Musk’s tenure as owner of the social network formerly known as Twitter and now renamed X has been controversial would be a criminal understatement. Recent highlights include the unbanning of numerous far-right and extremist accounts, as well as his own misinformation campaign regarding the far-right anti-immigrant riots in the UK.

Before Musk bought the company in 2022, few alternatives to Twitter existed, but several have emerged in the past few years. Today, there are the generally left- and liberal-leaning Bluesky and Mastodon, the right-leaning Gab, and Donald Trump’s Truth Social.

But perhaps the biggest threat to X is Threads, in part because it was launched by Meta, the giant behind Facebook, Instagram and WhatsApp. But a simple question remains: is Threads any good?

For Sathnam Sanghera, an author and journalist, the reason for the move is simple: “This place is corroding the very fabric of British society so I am trying to avoid it as much as possible and hoping it will be regulated,” he explained in a direct message on X. “Systemic abuse has been an issue for me, and for many people of colour, for years.”

But the driving force behind the switch is not so much the pull of Threads as the push away from X. “Threads has some great things, especially the fact that it links with Instagram, which is probably the most convenient social media platform,” Sanghera says. “But a lot of my loved ones aren’t on it. I’m hoping that will change, or maybe it’s just that it’s time to quit social media altogether.”

The integration with Instagram allows Insta users to open a Threads account with just a few clicks, which seems to have really accelerated Threads’ growth. Threads hit the milestone of 200 million active users earlier this month, just one year after its initial release. In comparison, Bluesky has just 6 million registered accounts and 1.1 million active users, while Mastodon has 15 million registered users, but no public data on active users.

Social media outlet Bluesky is one of X’s current alternatives. Photo: Jaap Arrians/NurPhoto/Shutterstock

“Threads has one big advantage,” says Emily Bell, director of the Tow Center for Digital Journalism at Columbia University in New York. “It has a built-in user base of celebrities and athletes. If you really want to kick everyone off Twitter, you can have Taylor Swift, Chappell Roan, [Italian sports journalist] Fabrizio Romano.”

Bell believes that because all of these users are already on Instagram, it may be easier to attract them to Threads than to convince them to start from scratch with an entirely new social network.

But she says this is a shame, and thinks Threads is a terrible product. “To me, Threads is a platform designed to compete with Twitter, and it feels like it was designed by a company that hates everything about Twitter,” she says. “Threads is boring as hell – presentation, participation, everything.”

From my personal experience trying out Threads for this article, it seems like Meta doesn’t see Threads as a huge, exciting new product that they want new users to use. Having around 88,000 followers on X has always made me hesitant to join other social networks, which is why I’ve never had an Instagram account.

To join Threads, I had to join Instagram first, which took about 24-36 hours because I got some weird error messages while signing up. I finally managed to create a Threads account, but after following five accounts I was limited. A few hours later the limit was lifted, I was able to follow three more accounts, and then I was limited again. I quickly gave up.

Those who found it easy to join the site say that once they were on it, it was more comfortable than X, but that’s mainly for the simple reason that it still has moderation staff and doesn’t actively try to attract the far right.

“Threads has a different vibe because it’s largely made up of small, self-organized groups,” says misinformation researcher Nina Jankowicz. “They usually want something different than Twitter/X. It definitely helps that it is actively moderated and that the site’s leadership is not actively promoting conspiracy theories.”

X’s potential rivals are each keen to differentiate themselves from the original. Meta has said it doesn’t want Threads to focus on news and current events like X does. Mastodon is perhaps the most consciously “woke” of the alternatives, with very different norms around content warnings and sharing. Bluesky, meanwhile, offers the closest experience to the “rebellious” and playful “old Twitter” that many still miss.

Even some of Threads’ early success stories are a bit sceptical about its actual value: Stella Creasy, the Labour MP for Walthamstow, has more than 20,000 followers on Threads (166,300 on X), but she confesses that she never actually posts there.

“I just cross-post from Instagram,” she says, sounding a little guilty. “So I follow nothing and there is no engagement whatsoever.”

That’s not to say Creasy has shunned social media: she still posts on X, and is now in a local WhatsApp group with up to 700 members, where her supporters can interact with her directly. While she says she “doesn’t understand” TikTok (“I don’t feel like dancing in public”), she created an account there because “local Asian moms told me that’s where it’s at.”

Creasy noted that this fragmentation of social media has made her job as an MP more difficult during the recent turmoil: trying to connect with an audience and provide accurate information is harder on six platforms than it is on one.

Threads’ success may be due to the ease of joining by default: If you use Instagram, it’s the easiest thing to join, and once you’re there, it’s… fine. But if other users seem to be operating on autopilot, they probably are.

“It’s a little bit overloaded here, you’re just in the media and you don’t know what to do,” Creasy says, “and ironically, that’s why I don’t do Threads. I know that’s where I get my momentum and that’s where I’m not doing anything.”

Source: www.theguardian.com

“Bots” are now considered negative on social platforms

Analysis of millions of tweets shows the changing meaning of the word “bot”


Calling someone a bot on social media once meant suspecting they were in fact software, but now the use of the word is evolving into an insult for known human users, researchers say.

Many efforts to detect social media bots use algorithms that attempt to identify behavioral patterns typical of the traditional meaning of the word: automated accounts controlled by a computer. The accuracy of these detectors, however, remains questionable.
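
For a flavor of what such behavioral-pattern detection can look like, here is a deliberately naive Python sketch that flags accounts posting at near-perfectly regular intervals. The thresholds and function name are invented for illustration; real detectors combine many signals and, as noted above, even then their accuracy is contested.

```python
# Naive heuristic: automated accounts often post at suspiciously
# regular intervals, so near-zero variance in gaps is a red flag.
import statistics

def looks_automated(post_times: list[float],
                    min_posts: int = 10,
                    max_jitter_seconds: float = 5.0) -> bool:
    """Flag an account whose gaps between posts barely vary."""
    if len(post_times) < min_posts:
        return False  # not enough history to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return statistics.stdev(gaps) < max_jitter_seconds

print(looks_automated([60.0 * i for i in range(12)]))  # True: metronomic
print(looks_automated([0, 50, 180, 200, 410, 420, 700,
                       705, 900, 1300, 1310, 1500]))   # False: human-like
```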

“Recent research has focused on detecting social bots, which is a problem in itself because of ground-truthing issues,” said Dennis Assenmacher at the Leibniz Institute for the Social Sciences in Cologne, Germany, meaning it is unclear whether such detections are accurate.

To investigate how users perceive bots, Assenmacher and his colleagues examined how the word “bot” was used on Twitter between 2007 and December 2022 (the social network was renamed X in 2023 after being acquired by Elon Musk), analyzing the words that appeared next to it in more than 22 million English-language tweets.
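
A minimal sketch of that kind of co-occurrence analysis, assuming the tweets are available as plain strings (the study’s actual tokenization and 22-million-tweet scale would of course differ):

```python
# Count which words appear within a few positions of "bot" in tweets.
import re
from collections import Counter

def cooccurring_words(tweets: list[str], target: str = "bot",
                      window: int = 3) -> Counter:
    counts: Counter = Counter()
    for tweet in tweets:
        words = re.findall(r"[a-z']+", tweet.lower())
        for i, word in enumerate(words):
            if word == target:
                counts.update(words[max(0, i - window):i])   # words before
                counts.update(words[i + 1:i + 1 + window])   # words after
    return counts

sample = ["you are just a bot account run by software",
          "this bot is an idiot, a typical troll"]
print(cooccurring_words(sample).most_common(5))
```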

The researchers found that before 2017, the term was often used in conjunction with allegations of automated behavior, such as “software,” “scripts,” or “machines,” the kinds of things that traditionally fit the definition of a bot. Since that year, that usage has changed.

“The accusation has now become like an insult, it’s used to dehumanize people, it’s used to denigrate people’s intelligence, it’s used to deny them their right to participate in the conversation,” Assenmacher said.

The cause of this change is unclear, but Assenmacher said it may be political in nature. The researchers looked at the accounts of prominent people, such as politicians and journalists, that each Twitter user followed, and classified users as left- or right-leaning. They found that left-leaning users were more likely to accuse other users of being bots, and that those who were accused were more likely to be right-leaning.

“One possible explanation is that the media reported on right-wing bot networks interfering in the [2016] US elections,” Assenmacher said, “but this is just speculation and needs to be confirmed.”


Source: www.newscientist.com

Meta Report Reveals 100,000 Children Experience Daily Sexual Harassment on Online Platforms

According to an internal document released late Wednesday, Meta estimates that about 100,000 children on Facebook and Instagram are subjected to online sexual harassment every day, including “pictures of adult genitalia.” The unsealed legal filings include several allegations against Meta, based on information the New Mexico Attorney General’s Office learned from presentations and communications between Meta employees. These allegations describe an incident in 2020 in which the 12-year-old daughter of an Apple executive was solicited via Instagram’s messaging product, IG Direct.

In testimony before the US Congress late last year, a senior Meta employee described how his daughter was solicited via Instagram. His efforts to resolve the issue were ignored, he said. The unsealed filing is the latest development in the lawsuit brought by the New Mexico Attorney General’s Office on December 5, alleging that Meta’s social networks have become a marketplace for child predators. The state’s attorney general, Raul Torrez, accused Meta of allowing adults to find, send messages to, and groom children. Meta released a statement in response to Wednesday’s filing, stating: “We want to provide teens with a safe and age-appropriate online experience, and we have over 30 tools to support them and their parents.”

The lawsuit also references a 2021 internal presentation on child safety in which Meta acknowledges it had “underinvested” in tackling the sexualization of minors on Instagram, with significant sexualized commentary on content posted by minors. The complaint also highlights Meta employees’ concerns about the safety of children on the platforms. Meta’s statement added that the company “has taken significant steps to prevent teens from experiencing unwanted contact, particularly from adults.”

The New Mexico lawsuit follows a Guardian investigation in April that revealed how Meta failed to report or detect the use of its platforms for child trafficking. According to documents included in the lawsuit, Meta employees warned that traffickers were coordinating their operations on the company’s services, with “every step of human exploitation (recruitment, conditioning, and exploitation)” represented on the platform. Meanwhile, an internal email from 2017 showed executives opposed scanning Facebook Messenger for “harmful content,” citing a desire to position the service as more private. In December, Meta received widespread criticism for introducing end-to-end encryption for messages sent via Facebook and Messenger.

Source: www.theguardian.com

Spill Enters Open Beta on iOS and Android Platforms

It’s been more than a year since Elon Musk bought Twitter, but the effects of that deal are still felt across other social platforms, including new ones that have emerged since then. Spill, a platform founded by former Twitter employees, is concluding its first year on the market by opening its beta to all users on both iOS and Android.

Spill is like the antithesis of X, a platform that continues to alienate users with policies that actively reduce the inclusivity of its apps. Spill’s founders met while working at Twitter, where they realized they were among the only Black employees on their team, and they wanted to build a platform that valued diversity from the beginning.

“On other platforms, people who drive culture, whether it’s Black and brown people, marginalized people, gay people, etc., have had to go to great lengths to carve out space,” Kenya Parham, Spill’s vice president of community and partnerships, said in an earlier conversation with TechCrunch. “We’re starting with them at the forefront, and we think that’s going to create a really healthy ecosystem.”

Image credits: Spill

The app is like a combination of Twitter and Tumblr, a microblogging platform for following users and scrolling through feeds, but more multimedia-driven. At AfroTech last month, Spill announced a “Tea Party” feature that allows users to have live conversations via audio or video. The first tea party was hosted by actress Kerry Washington, who opened up about her new memoir.

A year after he was fired from Twitter, Spill CEO Alphonzo Terrell told TechCrunch that the app had about 200,000 users. Spill has raised a total of $5 million in pre-seed funding to date, including a recent $2 million extension led by Collide Capital.

Spill may not be growing as quickly as other Twitter competitors like Bluesky, Mastodon, and Threads, but Terrell isn’t worried.

“People are looking for something new,” Terrell told TechCrunch last month. “I think the ones with really clear and unique value propositions will win in the long run. It might not be a one-winner-take-all kind of thing.”

Source: techcrunch.com