TikTok is taking another stab at curbing the spread of AI-generated misinformation on social media, announcing a new way it will watermark (or label) AI content.
The platform announced today that it is joining the Adobe-led Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity (C2PA), a nonprofit-backed project to align tech and social media companies on best practices for what is known as “content provenance,” or the “basic, trustworthy facts about the origins of a piece of digital content.”
While TikTok already labels AI-generated content (AIGC) made using its own AI effects, and requires users to label their own uploaded AI content, the new policy will apply automatic oversight to content made offsite. Through the collaboration, TikTok intends to use an auto-labeling system that reads the Content Credentials metadata attached to an image or video and quickly identifies it as AI-generated.
The platform also said it would begin attaching Content Credentials to TikTok content itself, so that others can learn when, where, and how the content was made or edited. It’s now the first social media company and video platform to sign on to Adobe’s Content Credentials standards, Fortune reports.
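Content Credentials travel inside the file itself as C2PA manifest metadata; in a JPEG, for instance, the manifest is carried in APP11 segments as JUMBF boxes labeled "c2pa." The sketch below is purely illustrative and is not TikTok's system: it only checks whether such a manifest appears to be present in a JPEG, whereas a real auto-labeling pipeline would use official C2PA tooling to parse the manifest and cryptographically verify it before trusting anything it says.

```python
# Illustrative sketch only: crude check for an embedded C2PA Content Credentials
# manifest in a JPEG. Real readers use the C2PA SDKs and validate signatures;
# this just walks JPEG segments and looks for the "c2pa"-labeled JUMBF payload.

import struct
import sys


def has_content_credentials(path: str) -> bool:
    """Heuristic: scan JPEG segments for an APP11 (JUMBF) payload
    that mentions the 'c2pa' manifest-store label."""
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):        # SOI marker missing: not a JPEG
        return False

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                      # lost sync with segment structure
            break
        marker = data[i + 1]
        if marker == 0xDA:                       # SOS: compressed image data starts here
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]   # length includes its own 2 bytes
        payload = data[i + 4:i + 2 + seg_len]
        # APP11 (0xEB) carries JUMBF boxes; C2PA labels its manifest store "c2pa"
        if marker == 0xEB and b"jumb" in payload and b"c2pa" in payload:
            return True
        i += 2 + seg_len
    return False


if __name__ == "__main__":
    for p in sys.argv[1:]:
        found = has_content_credentials(p)
        print(p, "has Content Credentials" if found else "no Content Credentials found")
```

In practice a platform would not byte-scan like this: manifests can span multiple segments and are only meaningful once their signatures are verified, which is exactly what the C2PA standard's reference tooling is for.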
“While experts widely recommend AIGC labeling as a way to support responsible content creation, they also caution that labels can cause confusion if viewers don’t have context about what they mean,” the company wrote in its announcement. “That’s why we’ve been working with experts to develop media literacy campaigns that can help our community identify and think critically about AIGC and misinformation.”
TikTok will also release 12 new media literacy resources created alongside the Poynter Institute's Mediawise, a youth-focused fact-checking project, and WITNESS, a human rights organization that teaches civilians how to use tech to record and protect themselves. WITNESS also provides guidelines and advice for spotting deepfakes and other AI threats.
Other tech platforms, similarly flooded with the products of generative AI, have taken their own paths toward better labeling and watermarking. Meta announced new AI content labels in April. Not long after, Snapchat announced it was adding an automatic, but user-visible, watermark to all content created using its in-house AI tools. The transparent Snapchat logo watermark is added to images once they are downloaded to a device or exported off the platform.
TikTok, meanwhile, is launching these intended AI protections amid a flurry of attention on the platform's role in global organizing and its potential impact on the upcoming presidential election. The new policies also arrive in the midst of the company's last-ditch effort to keep the app on U.S. phones, including a recent lawsuit against the U.S. government over the ban signed into law by President Biden.