Teenagers' language may make online bullying difficult to detect
The terminology of Generation Alpha is evolving faster than educators, parents, and AI can keep up with.
Manisha Mehta, a 14-year-old student at Warren E Hyde Middle School in Cupertino, California, working with Fausto Giunchiglia at the University of Trento in Italy, examined 100 expressions popular among Generation Alpha, those born between 2010 and 2025, sourced from gaming, social media, and video platforms.
The researchers then asked 24 of Mehta's classmates, aged between 11 and 14, to evaluate these phrases alongside screenshots showing them in context. The volunteers assessed whether they understood each phrase's basic meaning, the context in which it was used, and whether it carried potential safety risks or harmful interpretations. The researchers then asked parents, professional moderators, and four AI models (GPT-4, Claude, Gemini, and Llama 3) to perform the same analysis.
“I’ve always been intrigued by Generation Alpha’s language because it’s so distinctive; relevance shifts rapidly, and trends become outdated just as quickly,” says Mehta.
Among the Generation Alpha volunteers, 98% grasped the basic meaning of a given phrase, 96% understood the context of its use, and 92% recognized instances of harmful intent. In contrast, the AI models could identify harmful usage only around 40% of the time, with scores across the four models, Claude included, ranging from 32.5% to 42.3%. Parents and moderators also fell short, detecting harmful usage in just a third of instances.
“We expected a broader comprehension than we observed,” Mehta reflects. “Much of the feedback from my parents was speculative.”
Common phrases from Generation Alpha often have double meanings depending on context. For instance, "let him cook" can signify genuine praise in gaming but may also mockingly refer to someone rambling incoherently. "Kys," once short for "know yourself," has now been repurposed to mean "kill yourself." Another phrase that can hide malicious intent is "is it acoustic?"
"Generation Alpha is exceedingly vulnerable online," says Mehta. "As AI increasingly takes over content moderation, it is crucial that LLMs understand the language being used."
“It’s evident that LLMs are transforming the landscape,” asserts Giunchiglia. “This presents fundamental questions that need addressing.”
The results were presented this week at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency (ACM FAccT) in Athens, Greece.
"Empirical evidence from this research highlights significant shortcomings in content moderation systems, especially concerning the analysis and protection of young people," says Michael Veale at University College London. "Companies and regulators must heed this and adapt as regulations evolve in jurisdictions where platform laws are designed to safeguard young people."
Source: www.newscientist.com