Social media platforms must take action to comply with UK online safety law, but none has yet put in place all the measures needed to protect children and adults from harmful content, according to the regulator.
Ofcom has published codes of practice and guidance that tech companies must follow to comply with the law, which carries the threat of hefty fines and, for the most serious breaches, having sites blocked in the UK.
The regulator said that many of the recommended measures are not currently being followed by the largest and riskiest platforms.
Jon Higham, Director of Online Safety Policy at Ofcom, stated, “We believe that no company has fully implemented all necessary measures. There is still a lot of work to be done.”
All websites and apps covered by the law, including Facebook, Google, Reddit, and OnlyFans, have three months to assess the risk of illegal content appearing on their platforms. From March 17 they must begin implementing safety measures to address those risks, with Ofcom monitoring their progress.
The law applies to more than 100,000 online services, covering sites and apps that host user-generated content as well as large search engines. It lists 130 “priority crimes,” including child sexual abuse, terrorism, and fraud, which tech companies must tackle by putting moderation systems in place.
Technology Secretary Peter Kyle described the new codes and guidance as the most significant change to online safety policy in history. Tech companies are now required to proactively remove illegal content, and risk heavy fines and potential blocking of their sites in the UK for non-compliance.
Measures in Ofcom’s codes and guidance include designating a senior executive responsible for compliance, maintaining a well-staffed moderation team that can swiftly remove illegal content, and improving recommendation algorithms to limit the spread of harmful material.
Platforms are also expected to provide easy-to-find tools for reporting content, with confirmation of receipt and a timeline for addressing complaints. They should let users block accounts and disable comments, and should deploy automated systems to detect child sexual abuse material.
Child safety campaigners have warned that the measures outlined by Ofcom do not go far enough, particularly on suicide-related content and on encrypted services such as WhatsApp, where removing illegal content may be technically unfeasible.
Platforms will also have to tackle fraud on social media, including establishing channels for reporting instances of fraud to law enforcement agencies, and to develop crisis response procedures for events such as last summer’s riots that followed the Southport murders.
Source: www.theguardian.com