Regulated access to social media in Australia
In a few months, Australian teenagers may face restrictions on social media access until they turn 16.
As the December implementation date approaches, parents and children are left uncertain about how this ban will be enforced and how online platforms will verify users’ ages.
Experts are anticipating troubling outcomes, particularly since the technology used by social media companies to determine the age of users tends to have significant inaccuracies.
From December 10th, social media giants including Instagram, Facebook, X, Reddit, YouTube, Snapchat, and TikTok must remove or deactivate accounts belonging to Australian users under 16. Failure to comply could bring fines of up to AU$49.5 million (around US$32 million), though parents will not face penalties.
Before announcing the ban, the Australian government commissioned a trial of age verification technology, which released preliminary findings in June, with a comprehensive report expected soon. The trial tested age verification tools with more than 1,100 students across the country, including Indigenous and ethnically diverse groups.
Andrew Hammond from KJR, the Canberra-based consulting firm that led the trial, shared an anecdote illustrating the challenge at hand: one 16-year-old boy's age was inaccurately estimated at somewhere between 19 and 37.
“He scrunched up his face and held his breath, turning red and puffy like an angry older man,” he said. “He didn’t do anything wrong; we wanted to see how our youth would navigate these systems.”
Other technologies have also been evaluated with Australian youth, such as hand gesture analysis. “You can estimate someone’s age broadly based on their hand appearance,” Hammond explains. “While some children felt uneasy using facial recognition, they were more comfortable with hand assessments.”
The interim report indicated that age verification can be done safely and is technically feasible; its headline finding was that, despite some challenges, 85 per cent of subjects' ages could be accurately estimated to within 18 months. If a person initially verified as being over 16 is later suspected to be younger, they must undergo more rigorous verification, including checks against government-issued ID or parental confirmation.
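That 85 per cent figure corresponds to a simple tolerance metric: the share of age estimates that land within 18 months of a subject's true age. A minimal sketch in Python, using invented sample data since the trial's raw numbers are not public:

```python
# Sketch of a within-tolerance accuracy metric, as reported by the trial.
# The sample data below is invented for illustration only.

def within_tolerance_rate(true_ages, estimated_ages, tolerance_years=1.5):
    """Fraction of estimates within +/- tolerance_years of the true age."""
    hits = sum(
        1 for true, est in zip(true_ages, estimated_ages)
        if abs(true - est) <= tolerance_years
    )
    return hits / len(true_ages)

true_ages = [14, 15, 16, 17, 13, 15]          # hypothetical subjects
estimates = [15.0, 14.0, 19.5, 17.5, 13.5, 16.0]  # hypothetical tool output
print(within_tolerance_rate(true_ages, estimates))  # 0.8333... (5 of 6)
```

Note that a tool can score well on this metric overall while still badly misjudging individuals, as the 16-year-old estimated at 19 to 37 shows.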
Hammond noted that some underage users can still be detected through social media algorithms. “If you’re 16 but engage heavily with 11-year-old party content, it raises flags that the social media platform should consider, prompting further ID checks.”
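Combining the two ideas above, an error band around the age estimate plus behavioral flags that trigger escalation, might look something like the following sketch. The function, thresholds, and decision labels are all hypothetical; no platform has published its actual rules:

```python
# Hypothetical escalation logic -- invented for illustration, not any
# platform's real implementation.

def verification_step(estimated_age: float, margin: float,
                      flagged_by_behavior: bool) -> str:
    """Decide the next step for one account.

    estimated_age       -- age guessed by facial or hand analysis
    margin              -- estimation error band in years (1.5 = 18 months)
    flagged_by_behavior -- True if engagement patterns suggest a younger user
    """
    if estimated_age - margin >= 16 and not flagged_by_behavior:
        return "allow"        # comfortably over the cut-off, no red flags
    if estimated_age + margin < 16:
        return "deactivate"   # comfortably under the cut-off
    # Borderline estimate or suspicious behavior: escalate to the stricter
    # checks the trial describes (government ID or parental confirmation).
    return "escalate_to_id_or_parent_check"

print(verification_step(25.0, 1.5, False))  # allow
print(verification_step(16.4, 1.5, True))   # escalate_to_id_or_parent_check
```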
Iain Corby from the Association of Age Verification Providers, a London-based group that supported the Australian trial, pointed out that no single solution exists for age verification.
The UK recently mandated age verification on sites hosting “harmful content,” including adult material. Since the regulations went into effect on July 25th, around 5 million users have been verifying their ages daily, according to Corby.
“In the UK, the requirement is for effective but not foolproof age verification,” Corby stated. “There’s a perception that technology will never be perfect, and achieving higher accuracy often requires more cumbersome processes for adults.”
Critics have raised concerns about a significant loophole: children in Australia could use virtual private networks (VPNs) to bypass the ban by simulating locations in other nations.
Corby emphasized that social media platforms should monitor traffic from VPNs and assess user behavior to identify potential Australian minors. "There are many indicators that someone might not really be in Thailand and could instead be in Perth," he remarked.
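Corby does not specify which indicators platforms would use. One hypothetical way to combine such signals, with every field name and weight invented for illustration, is a simple inconsistency score:

```python
# Hypothetical location-inconsistency score -- the signals and weights are
# invented; real platforms have not published their detection methods.

def likely_vpn_user(signals: dict) -> bool:
    """Flag accounts whose observed signals disagree with their apparent location."""
    score = 0
    if signals.get("ip_country") != signals.get("sim_country"):
        score += 1  # IP geolocation disagrees with the SIM's home network
    if signals.get("ip_country") != signals.get("timezone_country"):
        score += 1  # device timezone points somewhere else
    if signals.get("ip_is_known_vpn_range"):
        score += 2  # address belongs to a published VPN provider range
    return score >= 2

print(likely_vpn_user({
    "ip_country": "TH",            # traffic appears to come from Thailand
    "sim_country": "AU",           # but the SIM is Australian
    "timezone_country": "AU",      # and the device clock is set to Perth time
    "ip_is_known_vpn_range": True,
}))  # True
```

A real system would weigh many more signals probabilistically, but the principle is the same: no single indicator proves anything, while several inconsistencies together justify a closer look.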
Beyond the question of how age verification will function, is a ban on social media the right approach to safeguarding teenagers from online threats? The Australian government says the ban will protect children under 16 from dangers associated with social media, such as exposure to inappropriate content and excessive screen time, and that delaying access gives children time to learn about these risks first.
Various organizations and advocates aren’t fully convinced. “Social media has beneficial aspects, including educational opportunities and staying connected with friends. It’s crucial to enhance platform safety rather than impose bans that may discourage youth voices,” stated UNICEF Australia on its website.
Susan McLean, a leading cyber safety expert in Australia, argues that the government should concentrate on harmful content and the algorithms that promote such material to children, expressing concern that AI and gaming platforms have been exempted from the ban.
“What troubles me is the emphasis on social media platforms, particularly those driven by algorithms,” she noted. “What about young people encountering harmful content on gaming platforms? Have they been overlooked in this policy?”
Lisa Given from RMIT University in Melbourne explained that the ban fails to tackle issues like online harassment and access to inappropriate content. “Parents may have a false sense of security thinking this ban fully protects their children,” she cautioned.
The rapid evolution of technology means that new platforms and tools can pose risks unless the underlying issues surrounding harmful content are addressed, she argued. “Are we caught in a cycle where new technologies arise and prompt another ban or legal adjustment?” Additionally, there are concerns that young users may be cut off from beneficial online communities and vital information.
The impact of the ban will be closely scrutinized post-implementation, with the government planning to evaluate its effects in two years. Results will be monitored by other nations interested in how these policies influence youth mental health.
“Australia is presenting the world with a unique opportunity for a controlled experiment,” stated Corby. “This is a genuine scientific inquiry that is rare to find.”
Source: www.newscientist.com
