
The Verge · Feb 23, 2026 · Collected from RSS
Progress towards reliable deepfake labeling tech is sluggish, despite all the “help” from AI providers. | Image: Cath Virginia / The Verge, Getty Images
As 2025 drew to a close, Instagram head Adam Mosseri ended the year by doom-posting about AI. “Authenticity is becoming infinitely reproducible,” Mosseri lamented. “Everything that made creators matter — the ability to be real, to connect, to have a voice that couldn’t be faked — is now accessible to anyone with the right tools.” But people, Mosseri insisted, still wanted “content that feels real.” His proposed solution was finding a way to label real media. “Camera manufacturers will cryptographically sign images at capture, creating a chain of custody,” he said. The result would be a trustworthy system for determining what’s not AI.

The good news is that Mosseri’s solution already exists: it’s called C2PA. The bad news is that Instagram is already using it, and it’s not doing shit to actually help. If anything, it’s starting to feel like a substitute for actual action, as Instagram goes full speed ahead on building generative AI tools.

AI is getting extremely good at mimicking reality, which threatens the culture and business models that many social media platforms have fostered around content creators. AI can copy dance trends and photo shoots, fabricate artists and influencers who don’t exist, and generally replicate the same-y looking content that social media is already overrun with. Creators are fighting back by leaning into aesthetics that look raw and imperfect, but AI is pretty good at that too. More worryingly, it can also be used to quickly spread misinformation about important events like the ICE protests in Minnesota, or the killing of Renee Nicole Good and Alex Pretti.

Over the past several years, some of the biggest names in tech have nominally fought this by adopting a system called Content Credentials, better known as C2PA. C2PA — short for Coalition for Content Provenance and Authenticity — is a provenance-based standard founded in 2021 by Adobe, Intel, Microsoft, ARM, Truepic, and the BBC.
As Mosseri suggested, C2PA addresses deepfakes not by directly labeling fake material, but by authenticating media that’s not AI-generated. It does this by attaching invisible metadata to images, videos, and audio at the point of creation or editing, allowing us to verify who made something, how and when it was made, and whether AI was used along the way. Meta joined the C2PA Steering Committee in September 2024 to support and promote the standard, noting that the ability to understand digital content is “critical to maintaining the health of the digital ecosystem.”

While C2PA has the backing of Microsoft, Meta, Google, OpenAI, TikTok, Qualcomm, and many other large tech companies, it’s just one system trying to separate real from fake. And while the system has its place, it clearly isn’t being implemented in a way that actually helps protect people from AI slop or misleading deepfakes. Even if more synthetic content is embedded with C2PA information, everyday people are still largely expected to manually hunt for it across the images and videos they see online, despite many not even being aware that C2PA exists. If anything, it seems like AI providers are using C2PA to distance themselves from the problem while continuing work on their own slop factories.

Companies have thrown their weight behind C2PA and other provenance-based solutions like Google’s SynthID watermarking system. (There are also inference-based solutions that scan for subtle signs of synthetic generation — like Reality Defender, which is also a member of the C2PA initiative — but those can only estimate the likelihood that AI was used.) Provenance-based solutions have pitfalls, though. For one thing, absolutely everyone involved at every stage of media creation and hosting needs to be on board, which is laughably unachievable.
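The “chain of custody” Mosseri describes, and that C2PA formalizes, is essentially a tamper-evident record of every capture and edit attached to the media itself. The real standard binds each claim to a cryptographic signature and certificate; the toy sketch below (plain Python, no signatures, purely illustrative and not the actual C2PA manifest format) shows just the hash-chain idea: each record commits to the content bytes and to the previous record, so any mismatch breaks verification.

```python
import hashlib
import json

def make_record(action, data, prev_hash):
    """Append a provenance record: it commits to the current content
    bytes and to the previous record, forming a hash chain."""
    record = {
        "action": action,  # e.g. "capture", "crop"
        "content_hash": hashlib.sha256(data).hexdigest(),
        "prev": prev_hash,
    }
    record_hash = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record, record_hash

def verify_chain(records, data):
    """Re-walk the chain: every link must match, and the final record
    must commit to the bytes we actually have in hand."""
    prev = None
    for record in records:
        if record["prev"] != prev:
            return False
        prev = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
    return records[-1]["content_hash"] == hashlib.sha256(data).hexdigest()

# A capture followed by one edit:
photo = b"\x89PNG...raw pixel data"
rec1, h1 = make_record("capture", photo, None)
edited = photo + b"(cropped)"
rec2, h2 = make_record("crop", edited, h1)

print(verify_chain([rec1, rec2], edited))   # True: history intact
print(verify_chain([rec1, rec2], b"fake"))  # False: bytes don't match
```

Crucially, without the signatures that real C2PA manifests carry, anyone could simply rewrite this chain from scratch — which is exactly why the standard leans on certificates issued to camera makers and software vendors.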
C2PA, for instance, has been only gradually adopted by camera companies like Canon, Nikon, Sony, Fujifilm, and Leica, with support slow to arrive and mostly limited to new camera releases. “Older cameras that do not support C2PA will continue to produce important and valid photographs,” Leica Camera USA spokesperson Nathan Kellum-Pathe told The Verge. “For these images, trust will still rely on context, reputation, and editorial responsibility.”

Provenance metadata is also so flimsy that OpenAI — a steering member of C2PA — points out it can “easily be removed either accidentally or intentionally.” LinkedIn and TikTok still fail to reliably tag content that’s supposed to carry C2PA metadata. YouTube uses C2PA, Google’s SynthID, and other systems for proactive AI labeling, but those labels are also inconsistent and difficult to spot. And nobody even knows what a photo is these days, so pinning down what actually counts as real or fake is far easier said than done. Meta learned this the hard way by slapping “Made by AI” labels on real photographs on Instagram, pissing off a lot of photographers.

Meta has long since renamed these labels to “AI info” and made them far harder to spot. You should find this label in teeny text below someone’s account name when looking at AI-generated or manipulated content in the Instagram app, but it can intermittently be replaced with song names and other information about the post. If you spot it, you still need to open the three-dot menu on images and videos to actually read the AI info label. These labels also may not appear at all on Instagram’s desktop website, even on posts that carry the “AI info” label in the platform’s mobile apps. If there are no labels or visual indicators of C2PA at all, you’re expected to scan suspicious content using a Chrome browser extension or by manually uploading it to one of the official C2PA checker websites.

I’ve already criticized C2PA’s capabilities as an AI labeling solution at great length.
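OpenAI’s warning that provenance data can “easily be removed either accidentally or intentionally” follows directly from how it travels: the manifest rides along as metadata next to the pixels, so any pipeline that re-encodes only the image — a screenshot, a chat app stripping metadata — silently discards it. A toy stdlib sketch, with a hypothetical `reupload` function standing in for such a pipeline:

```python
import hashlib

pixels = b"\x89PNG...raw pixel data"

# A provenance-carrying photo: the manifest is stored alongside the
# pixels as metadata, much as C2PA embeds its manifest in the file.
# The "claim" text here is purely hypothetical.
photo = {
    "pixels": pixels,
    "manifest": {
        "claim": "captured by camera X",
        "content_hash": hashlib.sha256(pixels).hexdigest(),
    },
}

def reupload(image):
    """Hypothetical lossy pipeline (screenshot, chat app, CDN re-encode):
    it preserves the picture but never copies the metadata."""
    return {"pixels": image["pixels"], "manifest": None}

def has_provenance(image):
    """Provenance survives only if the manifest is present AND still
    matches the pixel bytes it was computed over."""
    manifest = image.get("manifest")
    return bool(manifest) and (
        manifest["content_hash"] == hashlib.sha256(image["pixels"]).hexdigest()
    )

print(has_provenance(photo))            # True: manifest present and valid
print(has_provenance(reupload(photo)))  # False: same picture, provenance gone
```

Signing the manifest doesn’t change this failure mode: a cryptographically valid record that never makes it to the viewer verifies nothing, which is why approaches that bind credentials to the content itself, such as watermarks and fingerprint matching, keep coming up as complements.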
Adoption of the standard is slowly expanding, and a system that works some of the time is better than no system at all. But it was never designed to solve deepfake detection or AI slop on a universal scale. Andy Parsons, senior director of Content Authenticity at Adobe, said that while it’s “certainly true” that AI is causing harmful problems, it’s incorrect to assume that C2PA solves all of them.

“This is not a silver bullet,” Parsons told The Verge. “It does solve a whole class of problems.”

X’s glaring absence from C2PA also demonstrates why the standard won’t solve our current issues with AI and authenticity. Twitter was a founding member of C2PA, but it withdrew from the initiative after Elon Musk bought the company and renamed it X. Parsons confirmed that X is not currently involved with C2PA, and said the coalition would “embrace X participating actively.” X is a huge online space that lets news spread quickly, and many brands and notable figures favor the platform for sharing announcements with their fans. But between the constant controversies over Grok generating violent and sexualized material of men, women, and children, and Musk himself sharing misleading deepfakes, X clearly has no interest in protecting its 270 million daily users from AI fakery or misinformation. That means a lot of people are using X as a major news source — and sometimes spreading that news to other platforms — with little to no assurance that what they’re seeing is real.

Reality Defender CEO Ben Colman also notes that we wouldn’t see AI slop and deepfakes going unlabeled and spreading like wildfire if C2PA alone were a viable solution, and that leaning entirely on labeling or watermarking assumes that malicious AI content is only made with a few specific tools.
“Which is the absolute wrong assumption, mind you, but that’s what we’ve got powering moderation for the world’s biggest social platforms at the moment,” Colman told The Verge.

Even an effective labeling system might not solve the problem. One recent study found that transparency warnings seem insufficient to prevent harm from AI-generated deepfakes, noting that there is “little empirical evidence to support the effectiveness of AI transparency.”

Still, that hasn’t stopped everyone from parroting variations of the same message we’ve been hearing for years: that standards like C2PA are an important step in developing authenticity and deepfake detection systems and remain a work in progress. Parsons said he understands the “potential frustration that there could be more and faster,” and that the ability to see evidence of C2PA across online platforms “is coming,” even if it’s coming “more slowly than any of us would like.”

You would think that, if AI providers like Meta and Google were truly dedicated to protecting people from being deceived or misled, those companies would stop pumping out tools that massively contribute to those problems until a solution exists — if one can actually be found. Mosseri’s concerns about the importance of preserving reality fall flat when Meta is actively pushing an Instagram alternative that’s entirely AI slop. OpenAI also launched a TikTok clone made up of AI-generated videos that violated copyright laws and imitated real people without permission.
YouTube has loudly pledged to combat rising levels of slop content on the platform, while encouraging creators to use Google’s AI models during video production.

All of this shows that the AI providers steering C2PA are trying to have their cake and eat it too, seemingly abdicating responsibility for their misinformation machines while those machines make them money. OpenAI makes most of its revenue from charging ChatGPT and Sora users subscriptions to unlock higher image and video generation limits. AI slop is so pervasive on YouTube that it made up 10 percent of the platform’s fastest-growing channels in July 2024, despite the platform introducing policies to curb “inauthentic content.” Meta is preparing to lock some AI capabilities behind premium subscriptions for Instagram, Facebook, and WhatsApp, and CEO Mark Zuckerberg is