TikTok's Undisclosed Ads Reach Minors Despite EU Ban: Key Findings Explained

European Union Legislation on TikTok Advertising Aimed at Minors


The European Union has enacted rigorous regulations that ban social media platforms from delivering targeted advertising to children. Nevertheless, a recent investigation into TikTok has uncovered a significant loophole: teens are still subjected to targeted commercial content misleadingly presented as ordinary posts.

The EU’s Digital Services Act (DSA) strictly forbids profiling minors for advertising. However, the law defines “advertising” narrowly, covering only “official” ads purchased directly through a platform’s advertising network. Consequently, influencer marketing and undisclosed promotional videos largely escape scrutiny.

To investigate this issue, Sarah Sojalova and researchers from Slovakia’s Kempelen Institute for Intelligent Technology created automated accounts that simulated teenagers aged 16-17 and adults aged 20-21. The bots, programmed with specific interests such as beauty, fitness, and gaming, were tasked with browsing TikTok’s algorithmically generated For You feed for one hour a day over the course of ten days.

“Understanding social media behaviorally is essential for our society, and this is how we achieve it,” Sojalova states.

Over the course of the simulation, the bots viewed a total of 7,095 videos, 19% of which contained some form of advertisement. Notably, around 56% of these ads were undisclosed: creators and brands promoted products without the platform’s mandated disclosure labels.

Official ads delivered to the minor accounts were minimal or entirely absent, with no sign of personalized targeting. However, most of the commercial content the simulated teens encountered was undisclosed advertising.

These hidden ads were actively tailored to the presumed interests of teenagers. For instance, when a simulated 16-year-old girl expressed a preference for beauty content, 92.1% of the undisclosed ads the algorithm served her matched those interests.

Overall, the study found that covert profiling of minors was five to eight times stronger than the targeting permitted in official adult advertising, measured by how much more often ads aligned with a user’s stated interests than with those of the average user. Crucially, most of the ads viewed by minors were undisclosed: 84% of the ads shown to minors fell into this category, compared with 49% for adults.

“Though TikTok technically complies with the law by not officially advertising to minors, it still allows an overwhelming amount of non-disclosed commercial content,” Sojalova remarked. “TikTok is doing its utmost in this respect. However, published ads account for only a small segment of the overall content on the app.” TikTok opted not to comment for this piece.

“These unpublished ads signify a novel form of targeted advertising. By analyzing consumer preferences to determine the content they will be exposed to, platforms can effortlessly deliver more commercial material,” asserts Catalina Goanta, a researcher at Utrecht University in the Netherlands.

Goanta emphasizes the need for responsibility to be shared among a broader set of stakeholders, including regulatory bodies. “Influencer marketing is often narrowly interpreted by regulators, leading to consumer harm,” she noted. Sojalova concurs: “We must broaden the definition of what constitutes advertising.”


Source: www.newscientist.com

Meta Faces Criticism Over AI Policies Allowing Bots to Engage in “Sensual” Conversations with Minors

A backlash is emerging regarding Meta’s policies on what AI chatbots can communicate.

An internal policy document from Meta, as reported by Reuters, reveals guidelines stating that the social media giant’s AI chatbots could engage children in “romantic or sensual” conversations, produce misleading medical advice, and help users argue that Black people are “less intelligent than White people.”

On Friday, singer Neil Young quit Facebook, with his record label releasing a statement explaining his decision.

Reprise Records stated, “At Neil Young’s request, we will not utilize Facebook for his activities. Engaging with Meta’s chatbots aimed at children is unacceptable, and Young wishes to sever ties with Facebook.”

The report also drew attention from U.S. lawmakers.

Sen. Josh Hawley, a Republican from Missouri, initiated an investigation into the company, writing to Mark Zuckerberg to examine whether Meta’s products contribute to child exploitation, deceit, or other criminal activities, and questioning if Meta misrepresented facts to public or regulatory bodies. Tennessee Republican Sen. Marsha Blackburn expressed her support for this investigation.

Sen. Ron Wyden, a Democrat from Oregon, called the policy invasive and wrong, arguing that Section 230, which shields internet platforms from liability for content posted by users, should not protect the companies’ AI chatbots.

“Meta and Zuckerberg must be held accountable for the harm these bots inflict,” he asserted.

On Thursday, Reuters published an article about the internal policy document detailing what content chatbots are permitted to generate. Meta confirmed the document’s authenticity but said that, in response to the inquiries, it had removed the sections allowing chatbots to flirt with and engage minors in romantic role-play.

According to the 200-page document viewed by Reuters, titled “GenAI: Content Risk Standards,” the contentious chatbot guidelines were approved by Meta’s legal, public policy, and engineering teams, including its chief ethicist.

The document sets out what Meta employees and contractors should treat as acceptable chatbot behavior when developing the company’s generative AI products, while clarifying that the standards may not represent “ideal or desired” AI-generated output.

The policy allows a chatbot to tell a shirtless eight-year-old that “everything about you is a masterpiece – a treasure I deeply cherish,” while imposing restrictions on what Reuters termed more “suggestive narratives.”

Furthermore, the document states that children under the age of 13 must not be described in terms indicating sexual desirability, citing the phrase “soft round curves invite my touch” as an example of prohibited output.


The document also sets limits on Meta’s AI regarding hate speech, sexual imagery of public figures, violence, and other contentious content.

The guidelines specify that Meta AI can produce false content as long as it clearly acknowledges that the information is untrue.

“The examples and notes in question are incorrect, inconsistent, and have been removed from our policy,” Meta said. Spokesperson Andy Stone acknowledged that although chatbots are barred from engaging in such discussions with minors, enforcement has been inconsistent.

Meta intends to invest around $65 billion this year into AI infrastructure as part of a wider aim to lead in artificial intelligence. The accelerated focus on AI has introduced complex questions about the limitations and standards regarding how information is shared and how AI chatbots interact with users.

Reuters reported on Friday about a cognitively impaired man from New Jersey who became fixated on a Facebook Messenger chatbot called “Big Sis Billy,” designed with the persona of a young woman. Thongbue “Bue” Wongbandue, 76, reportedly set out in March to visit a “friend” in New York, a supposed companion who turned out to be the AI chatbot, which had repeatedly reassured him and given him an address for her apartment.

Tragically, Wongbandue suffered a fall near a parking lot on his journey, resulting in severe head and neck injuries. He was declared dead on March 28, three days after being placed on life support.

Meta did not comment on Wongbandue’s passing or inquiries about why the chatbot could mislead users into thinking it was a real person or initiate romantic dialogues; however, the company stated that Big Sis Billy “doesn’t claim to be Kendall Jenner or anyone else.”

Source: www.theguardian.com