TikTok has become the first platform to automatically flag AI-generated content (AIGC) that users upload from other platforms to its video-sharing service. Such content will be automatically labeled so viewers can see it was made with AI.
Content generated with TikTok’s own AI tools is already marked as such for viewers, and creators also have the option to apply the same label manually. Previously, however, it was possible to sidestep these rules by disguising generated material or by uploading it from other platforms, where no label carried over.
Moving forward, TikTok will identify and label AIGC using digital watermarks, known as Content Credentials, developed by the Coalition for Content Provenance and Authenticity (C2PA).
According to Adam Presser, TikTok’s head of operations and trust and safety, AI offers creative opportunities but can mislead or confuse viewers who do not know that content is AI-generated; labeling provides that context and transparency.
To extend that transparency, TikTok will also attach Content Credentials to content downloaded from its own platform, allowing other platforms to verify where the content originated.
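The labeling flow the article describes can be sketched as a simple decision procedure. This is an illustrative assumption, not TikTok's or C2PA's actual code: the function names and the crude byte-scan for a manifest marker are hypothetical stand-ins for real Content Credentials parsing.

```python
# Hypothetical sketch of the labeling policy described in the article.
# has_c2pa_manifest is a deliberately crude stand-in: real Content
# Credentials are signed manifests that require proper parsing and
# signature verification, not a substring search.

def has_c2pa_manifest(data: bytes) -> bool:
    """Illustrative check for an embedded C2PA manifest marker."""
    return b"c2pa" in data

def label_for_upload(data: bytes,
                     creator_flagged_ai: bool,
                     made_with_platform_ai_tool: bool) -> str:
    """Return the label a platform following this policy would apply."""
    if made_with_platform_ai_tool:
        # Content from the platform's own AI tools is already auto-labeled.
        return "AI-generated (platform tool)"
    if has_c2pa_manifest(data):
        # Cross-platform uploads carrying Content Credentials get flagged.
        return "AI-generated (Content Credentials detected)"
    if creator_flagged_ai:
        # Creators can still disclose manually.
        return "AI-generated (creator disclosed)"
    # e.g. output of open-source tools that omit or strip metadata.
    return "unlabeled"

print(label_for_upload(b"...c2pa...", creator_flagged_ai=False,
                       made_with_platform_ai_tool=False))
# prints "AI-generated (Content Credentials detected)"
```

The last branch illustrates the gap the article raises below: content from tools outside the C2PA ecosystem carries no manifest and so slips through automatic detection.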
However, automatic labeling currently works only for content from platforms and tools belonging to C2PA, whose members include major companies such as Microsoft, Google, and Adobe.
While OpenAI recently joined C2PA, smaller AI outfits may still produce unlabeled content, and open-source tools such as Stable Diffusion can generate and edit material without attaching any credentials. TikTok’s move follows a similar announcement from Meta in February.
Social media platforms like Snapchat are also making efforts to label AI-generated content, with varying degrees of success. However, the proliferation of AI-generated images continues, with some calling the phenomenon the “zombie internet.”
Source: www.theguardian.com