Ofcom urged to act after US firm alleges Roblox is a ‘pedophile hellscape’

Child safety campaigners have urged the UK’s communications watchdog to enforce new online safety laws following accusations that video game companies have turned their platforms into “hellscapes for adult pedophiles.” The campaigners are calling for “gradual changes.”

Last week, Roblox, a popular gaming platform with 80 million daily users, came under fire over its allegedly lax safety controls. A US investment firm claimed that games on the platform expose children to grooming, pornography, violent content and abusive language. Roblox has denied the claims, saying that safety and civility are fundamental to its operations.

The report highlighted users apparently seeking to groom children, trading in child pornography, easily accessible sex games, violent content and abusive behavior on Roblox. The company, however, insists that millions of users have safe and positive experiences on the platform and that any safety incident is taken seriously.

Roblox, known for its user-generated content, allows players to create and play their own games with friends. However, child safety campaigners emphasize the need for stricter enforcement of online safety laws to protect young users from harmful content and interactions on platforms like Roblox.

Platforms like Roblox will need to implement measures to protect children from inappropriate content, prevent grooming, and introduce age verification processes to comply with the upcoming legislation. Ofcom, the regulator responsible for enforcing these laws, is expected to have broad enforcement powers to ensure user safety.

In response, a Roblox spokesperson said the company is committed to full compliance with the Online Safety Act and is engaging in consultations and assessments to align with Ofcom’s guidelines. The company added that it looks forward to seeing the final code of practice and to ensuring a safe online environment for all users.

Source: www.theguardian.com

Meta Oversight Board Says Altered Video Depicting Biden as a Pedophile Can Stay on Facebook

Meta’s oversight board determined that a Facebook video falsely suggesting that US President Joe Biden is a pedophile did not violate the company’s current rules, but said those rules were “disjointed” and too narrowly focused on AI-generated content.

The board, which is funded by Facebook’s parent company Meta but operates independently, took on the Biden video case in October after receiving user complaints about a doctored seven-second video of the president.
The board ruled that, under current policies, the misleading altered video would be prohibited only if it had been created by artificial intelligence or made Biden appear to say words he did not actually say. Meta was therefore right to leave the video up.

The ruling is the first to criticize Meta’s policies on “manipulated media,” amid concerns that new AI technology could be used to influence upcoming elections.

The board stated that the policy “lacks a convincing justification, is disjointed and confusing to users, and does not clearly articulate the harms it seeks to prevent.” It suggested updating the policy to cover both audio and video content, and to apply a label indicating that it has been manipulated, regardless of whether AI is used.

It did not require the policy to apply to photos, as doing so could make enforcement too difficult at Meta’s scale.

Meta, which also owns Instagram and WhatsApp, informed the board that it plans to update its policies to address new and increasingly realistic advances in AI, according to the ruling.

The video on Facebook is a manipulated version of real footage of Biden exchanging “I voted” stickers with his granddaughter and kissing her on the cheek during the 2022 US midterm elections.

The board noted that non-AI modified content is “more prevalent and not necessarily less misleading” than content generated by AI tools.

It recommended that enforcement should involve applying labels to content, rather than Meta’s current approach of removing posts from the platform.

The company announced that it is reviewing the ruling and will respond publicly within 60 days.

Source: www.theguardian.com