The importance of online safety for children in the UK is reaching a pivotal moment. Starting this Friday, social media and other internet platforms must take action to safeguard children or face substantial fines for non-compliance.
This marks a critical test of the Online Safety Act, a landmark law that covers platforms such as Facebook, Instagram, TikTok, YouTube and Google. Here's an overview of the new rules.
What will happen on July 25th?
Companies subject to the law must implement safety measures that shield children from harmful content. Specifically, all pornography sites must put stringent age verification in place. According to Ofcom, the UK communications regulator, 8% of children aged eight to 14 visited an online pornography site or app within a single month.
Furthermore, social media platforms and major search engines must block access for children to pornography and content that promotes or encourages suicide, self-harm, and eating disorders. This may involve completely removing certain feeds for younger users. Hundreds of businesses will be impacted by these regulations.
Platforms must also minimize the distribution of other potentially harmful content, such as promoting dangerous challenges, substance abuse, or instances of bullying.
What are the suggested safety measures?
Recommended measures include filtering harmful material out of the algorithms that recommend content to users, procedures to rapidly take down dangerous content, and a straightforward way for children to report concerns. Companies have some flexibility: they may adopt alternative measures if they believe these meet their child-safety duties just as effectively.
Services deemed "high risk", such as major social media platforms, must use "highly effective" age verification to identify users under 18. A platform that hosts harmful content must use such checks to ensure under-18s still get a "positive", safer experience.
X says that if it cannot determine whether a user is 18 or over, it defaults them to sensitive-content settings, restricting access to adult material. It is also introducing age-estimation technology and ID verification to confirm users are not underage. Meta, the parent company of Instagram and Facebook, says it takes a comprehensive approach to age verification, including teen accounts with protective settings enabled by default for users under 18.
Mark Jones, a partner at the law firm Payne Hicks Beach, said that where companies are unsure what the Online Safety Act requires of them, the firm works to clarify it.
The Molly Rose Foundation, set up by the family of British teenager Molly Russell, who took her own life in 2017 after viewing harmful online content, is advocating for further changes, including a ban on dangerous online challenges and a requirement that platforms proactively mitigate depressive and body image-related content.
How will age verification be implemented?
Age verification methods that Ofcom supports for pornography providers include: estimating a person's age from a live photo or video (facial age estimation); verifying age via a credit card, bank, or mobile network operator; matching a photo ID; and using a "digital identity wallet" that contains proof of age.
Ria Moody, a lawyer at Linklaters, commented that age verification measures must be highly accurate: Ofcom considers a measure ineffective unless it reliably establishes that a user is over 18, so platforms cannot rely on weaker checks alone.
What does this mean in practice?
Pornhub, the UK's most visited online porn site, has said it will put a "regulatory approved age verification method" in place by Friday, though it has not disclosed which. Another adult site, OnlyFans, already uses facial age-estimation software, which estimates a user's age without saving their facial image, drawing instead on a model trained on millions of other images. The software comes from a company called Yoti, which has also made it available on Instagram.
Last week, Reddit began verifying the ages of users accessing forums and threads containing adult content. The platform uses technology from a company called Persona, which verifies age from an uploaded selfie or a photo of government-issued ID. Reddit does not retain the photos; it stores only the verification status so that users do not have to repeat the process.
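The "keep the status, discard the photo" pattern described above can be sketched as follows. This is a hypothetical illustration of the general approach, not Reddit's or Persona's actual implementation; all names and structures here are assumptions.

```python
# Hypothetical sketch: persist only the verification outcome, never the
# uploaded selfie or ID image. Illustrative, not any platform's real code.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgeVerification:
    user_id: str
    is_over_18: bool       # only the pass/fail outcome is kept
    verified_at: datetime  # when the check happened
    method: str            # e.g. "selfie_estimate" or "gov_id" (assumed labels)

def record_verification(store: dict, user_id: str,
                        is_over_18: bool, method: str) -> None:
    # The image used for the check is assumed to be discarded upstream;
    # this store holds only the resulting status.
    store[user_id] = AgeVerification(
        user_id, is_over_18, datetime.now(timezone.utc), method)

def can_view_adult_content(store: dict, user_id: str) -> bool:
    record = store.get(user_id)
    return record is not None and record.is_over_18

store = {}
record_verification(store, "u123", True, "selfie_estimate")
print(can_view_adult_content(store, "u123"))  # True
print(can_view_adult_content(store, "u999"))  # False: never verified
```

Storing a status rather than the image keeps re-checks cheap for users while minimizing the sensitive data the platform holds.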
How accurate is facial age verification?
The software allows websites or apps to set a "challenge" age (e.g. 20 or 25) above the legal minimum, reducing the chance that underage users slip through. When Yoti set a challenge age of 20, fewer than 1% of 13- to 17-year-olds were mistakenly verified as adults.
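The challenge-age buffer amounts to a simple threshold comparison. The sketch below is hypothetical (the thresholds come from the article, but the function itself is illustrative, not Yoti's implementation): users estimated below the challenge age are not rejected outright, they just fall back to a stricter check such as photo ID.

```python
# Illustrative "challenge age" check: the threshold sits above the legal
# minimum of 18 so the estimator's error margin rarely admits an under-18.
def passes_challenge(estimated_age: float, challenge_age: int = 20) -> bool:
    # Below the challenge age, a platform would ask for a stricter
    # proof of age (e.g. ID) rather than refuse access permanently.
    return estimated_age >= challenge_age

print(passes_challenge(23.0))  # True: comfortably above the buffer
print(passes_challenge(19.0))  # False: over 18, but routed to a stricter check
```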
What other methods are available?
Another, more direct approach requires users to present formal identification, such as a passport or driving licence. Importantly, the ID details need not be stored; they can be used solely to verify access.
Will all pornographic sites conduct age checks?
They are expected to, but many smaller sites may try to circumvent the rules, fearing that age checks will deter demand for their services. Industry representatives suggest that sites inclined to ignore the rules may wait to see how Ofcom responds to violations before deciding what to do.
How will child protection measures be enforced?
Ofcom has a broad spectrum of penalties it can impose under the law. Companies can be fined up to £18 million or 10% of their global revenue, whichever is greater, which could amount to roughly $16 billion for Meta. Sites or apps can also receive formal warnings. For the most serious violations, Ofcom may seek a court order restricting the site or app's availability in the UK.
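As a quick sanity check on those figures (the Act caps fines at £18m or 10% of qualifying worldwide revenue, whichever is greater), a hypothetical helper shows how the two limbs interact. The revenue input is an assumption chosen to reproduce the article's "$16 billion for Meta" estimate; currencies are mixed here purely for illustration.

```python
# Back-of-envelope fine ceiling: the greater of a flat cap (£18m) and
# 10% of global revenue. Purely illustrative arithmetic.
def max_fine(global_revenue: float, flat_cap: float = 18e6) -> float:
    return max(flat_cap, 0.10 * global_revenue)

print(max_fine(160e9))  # 16000000000.0 -> roughly the $16bn cited for Meta
print(max_fine(50e6))   # 18000000.0 -> small firms hit the £18m flat cap
```

For large platforms the revenue-based limb dominates; the flat cap only binds for companies with global revenue below £180m.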
Moreover, senior managers at technology firms could face up to two years in prison if they are found criminally liable for repeated breaches of their obligations to protect children and for ignoring enforcement notices from Ofcom.
Source: www.theguardian.com
