Numerous TikTok accounts are amassing billions of views by sharing anti-immigrant and sexually explicit AI-generated material, according to a new report.
Researchers identified 354 AI-focused accounts that shared 43,000 posts made with AI tools, drawing 4.5 billion views in a single month.
According to the Paris-based nonprofit AI Forensics, these accounts attempt to game TikTok’s recommendation algorithm—which decides what content appears in users’ feeds—by posting in large volumes in the hope of going viral.
Some accounts reportedly posted as many as 70 times a day, a rate indicative of automated activity; most of the accounts were created at the start of the year.
TikTok disclosed last month that it hosted at least 1.3 billion AI-generated posts. With more than 100 million pieces of content uploaded daily, AI-labeled material constitutes a minor fraction of TikTok’s offerings. Users can also adjust settings to minimize exposure to AI content.
Among the most active accounts, around half focused on content related to women’s bodies. The report notes, “These AI representations of women are often depicted in stereotypically attractive forms, which include suggestive clothing and cleavage.”
AI Forensics found that nearly half of the content posted by these accounts carried no label at all, and fewer than 2% of posts used TikTok’s own AI tags. The organization warned that this could mislead viewers, and noted that some accounts evade TikTok’s moderation for months even while distributing content that violates the platform’s terms.
Several accounts identified in the study have been deleted recently, with signs suggesting that moderators removed them, according to the researchers.
Some of the content mimicked fake news broadcast segments, including an anti-immigrant story, while other material sexualized young women’s bodies, potentially including those of minors. Some of the fake news segments used familiar news brands such as Sky News and ABC.
TikTok took down some of the posts after The Guardian flagged them.
TikTok described the report’s assertions as “unfounded,” noting that the researchers themselves acknowledged the issue affects multiple platforms. The Guardian recently revealed that almost one in ten of the fastest-growing YouTube channels primarily features AI-generated content.
“TikTok is committed to eliminating harmful AIGC [artificial intelligence-generated content], we are blocking the creation of hundreds of millions of bot accounts while investing in top-notch AI labeling technology, and providing users with the tools and education necessary to manage their content experience on our platform,” declared a TikTok spokesperson.
Photo caption: An example of AI “slop”—content that lacks substance and is intended to clutter social media feeds. Photograph: TikTok
The most viewed accounts flagged by AI Forensics often shared “slop,” a term for AI-generated content that is trivial or odd and floods users’ feeds—posts such as animals performing Olympic dives or talking babies. Researchers noted that while some of this content was deemed “funny” and “adorable,” it still adds to the clutter.
TikTok’s policies forbid the use of AI to create deceptive authoritative sources, portray anyone under 18, or depict adults who aren’t public figures.
“Through this investigation, we illustrate how automated accounts integrate AI content into platforms and the broader virality framework,” the researchers noted.
“The distinction between genuine human-generated content and artificial AI-produced material on platforms is becoming increasingly indistinct, indicating a trend towards greater AI-generated content in users’ feeds.”
The analysis, which covered mid-August to mid-September, also uncovered attempts to monetize viewers: advertising health supplements through fictitious influencers, promoting tools for creating viral AI content, and soliciting sponsorships for posts.
While AI Forensics acknowledged TikTok’s recent move to let users restrict the visibility of AI content, it emphasized the need for better labeling. “We remain cautious about the effectiveness of this feature, given the significant and persistent challenges associated with identifying such content,” the researchers said.
The researchers recommended that TikTok explore the option of developing AI-specific features within its app to differentiate AI-generated content from that produced by humans. “Platforms should aim to transcend superficial or arbitrary ‘AI content’ labels and develop robust methods that either distinctly separate generated and human-created content or enforce systematic and clear labeling of AI-generated material,” they concluded.
Source: www.theguardian.com
