NewsWorld
If Big Tech cared about fighting AI slop, it wouldn’t be drowning us in it

The Verge · Feb 23, 2026 · Collected from RSS

Progress towards reliable deepfake labeling tech is sluggish, despite all the “help” from AI providers. | Image: Cath Virginia / The Verge, Getty Images

Full Article

As 2025 drew to a close, Instagram head Adam Mosseri ended the year by doom-posting about AI. “Authenticity is becoming infinitely reproducible,” Mosseri lamented. “Everything that made creators matter — the ability to be real, to connect, to have a voice that couldn’t be faked — is now accessible to anyone with the right tools.” But people, Mosseri insisted, still wanted “content that feels real.” His proposed solution was finding a way to label real media. “Camera manufacturers will cryptographically sign images at capture, creating a chain of custody,” he said. The result would be a trustworthy system for determining what’s not AI.

The good news is that Mosseri’s solution already exists: it’s called C2PA. The bad news is that Instagram is already using it, and it’s not doing shit to actually help. If anything, it’s starting to feel like a substitute for actual action, as Instagram goes full speed ahead on building generative AI tools.

AI is getting extremely good at mimicking reality, which threatens the culture and business models that many social media platforms have fostered around content creators. AI can copy dance trends and photo shoots, make artists and influencers who don’t exist, and generally replicate any of the same-y looking content that social media is already overrun with. Creators are fighting against this by leaning into aesthetics that look raw and imperfect, but AI is pretty good at that too. More concerningly, it can also be used to quickly spread misinformation about important events like the ICE protests in Minnesota, or the killing of Renee Nicole Good and Alex Pretti.

Over the past several years, some of the biggest names in tech have nominally fought this by adopting a system called Content Credentials, or C2PA. C2PA — short for Coalition for Content Provenance and Authenticity — is a provenance-based standard founded in 2021 by Adobe, Intel, Microsoft, ARM, Truepic, and the BBC.
As Mosseri suggested, C2PA addresses deepfakes not by directly labeling fake material, but by authenticating media that’s not AI-generated. It does this by attaching invisible metadata to images, videos, and audio at the point of creation or editing, allowing us to verify who made something, how and when it was made, and whether AI was used during that process. Meta joined the C2PA Steering Committee in September 2024 to support and promote the standard, noting that the ability to understand digital content is “critical to maintaining the health of the digital ecosystem.”

While C2PA has the backing of Microsoft, Meta, Google, OpenAI, TikTok, Qualcomm, and many other large tech companies, it’s just one system trying to separate real from fake. And while the system has its place, it clearly isn’t being implemented in a way that actually helps protect people from AI slop or misleading deepfakes. Even if more synthetic content is embedded with C2PA information, everyday people are still largely expected to manually hunt for it themselves across the images and videos they see online, despite many not even being aware that C2PA exists. If anything, it seems like AI providers are using C2PA to distance themselves from the problem while continuing work on their own slop factories.

Companies have thrown their weight behind C2PA and other provenance-based solutions like Google’s SynthID watermarking system. (There are also inference-based solutions that scan for subtle signs of synthetic generation — like Reality Defender, which is also a member of the C2PA initiative — but those can only estimate the likelihood that AI was used.) Provenance-based solutions have pitfalls, though. For one thing, absolutely everyone involved at every stage of media creation and hosting needs to be on board, which is laughably unachievable.
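The chain-of-custody idea Mosseri describes can be sketched in miniature. The snippet below is a toy model, not the real C2PA format (which uses certificate-based signatures and manifests embedded in the file itself); every key, field name, and function here is hypothetical. It shows why every tool in the pipeline has to participate: each record signs the media bytes plus a digest of the previous record, so a single unsigned hop breaks the chain.

```python
import hashlib
import hmac
import json
from typing import Optional

def sign_step(key: bytes, actor: str, payload: bytes, prev: Optional[dict]) -> dict:
    """Append one provenance record covering the payload and the prior record."""
    prev_digest = (
        hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest()
        if prev else ""
    )
    body = {
        "actor": actor,
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "prev": prev_digest,
    }
    msg = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return body

def verify_step(key: bytes, record: dict, payload: bytes) -> bool:
    """Check the signature and that the record matches the bytes we received."""
    body = {k: v for k, v in record.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(record["sig"], expected)
        and record["payload_sha256"] == hashlib.sha256(payload).hexdigest()
    )

# A capture followed by an edit, each signed with its own tool's key.
camera_key, editor_key = b"camera-secret", b"editor-secret"
raw = b"raw sensor bytes"
capture = sign_step(camera_key, "camera", raw, None)

edited = raw + b" cropped"
edit = sign_step(editor_key, "editor", edited, capture)

assert verify_step(camera_key, capture, raw)             # chain intact at capture
assert verify_step(editor_key, edit, edited)             # and after the edit
assert not verify_step(editor_key, edit, edited + b"!")  # tampering breaks it
```

A real deployment would use public-key signatures so anyone can verify without holding the signer’s secret key; HMAC stands in here only to keep the sketch self-contained.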
C2PA, for instance, has been only gradually adopted by camera companies like Canon, Nikon, Sony, Fujifilm, and Leica, with support slow to arrive and mostly limited to new camera releases. “Older cameras that do not support C2PA will continue to produce important and valid photographs,” Leica Camera USA spokesperson Nathan Kellum-Pathe told The Verge. “For these images, trust will still rely on context, reputation, and editorial responsibility.”

Provenance metadata is also so flimsy that OpenAI — a steering member of C2PA — points out it can “easily be removed either accidentally or intentionally.” LinkedIn and TikTok still fail to reliably tag content that’s supposed to carry C2PA metadata. YouTube uses C2PA, Google’s SynthID, and other systems for proactive AI labeling, but those labels are also inconsistent and difficult to spot. And nobody even knows what a photo is these days, so pinning down what actually counts as real or fake is far easier said than done. Meta learned this the hard way by slapping “Made by AI” labels on real photographs on Instagram, pissing off a lot of photographers.

Meta has long since renamed these labels “AI info” and made them far harder to spot. You should find this label in teeny text below someone’s account name when looking at AI-generated or manipulated content on the Instagram app, but it can intermittently be replaced with song names and other information about the post. If you spot it, you still need to open the three-dot menu on images and videos to actually read the AI info label. These AI labels also may not appear at all on Instagram’s desktop website, even on posts that feature the “AI info” label on the platform’s mobile apps. If there are no labels or visual indicators of C2PA at all, you’re expected to scan suspicious content using a Chrome browser extension or by manually uploading it to one of the official C2PA checker websites.

I’ve already criticized C2PA’s capabilities as an AI labeling solution at great length.
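The fragility OpenAI describes follows from how provenance checks work: they can prove credentials are present, but absence proves nothing. A minimal sketch (with hypothetical names standing in for any credential lookup) makes the point: once content is re-encoded or screenshotted, its bytes change, the credential no longer matches, and a verifier cannot tell “stripped” apart from “never signed.”

```python
import hashlib

# Hypothetical credential registry: digests of media whose provenance was recorded.
# Real C2PA embeds a signed manifest in the file itself, but the failure mode is the same.
signed = set()

def record_credentials(media: bytes) -> None:
    signed.add(hashlib.sha256(media).hexdigest())

def has_credentials(media: bytes) -> bool:
    # Presence proves provenance; absence proves nothing at all.
    return hashlib.sha256(media).hexdigest() in signed

original = b"camera-signed image bytes"
record_credentials(original)

# A screenshot or re-encode produces different bytes, so the credential is gone.
screenshot = b"re-rendered pixels of the same image"

assert has_credentials(original)
assert not has_credentials(screenshot)  # now indistinguishable from unsigned AI output
```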
Adoption of the standard is slowly expanding, and a system that works some of the time is better than no system at all. But it was never designed to solve deepfake detection or AI slop on a universal scale. Andy Parsons, senior director of Content Authenticity at Adobe, said that while it’s “certainly true” that AI is causing harmful problems, it’s incorrect to assume that C2PA solves all of them. “This is not a silver bullet,” Parsons told The Verge. “It does solve a whole class of problems.”

X’s glaring absence from C2PA also demonstrates why the standard won’t solve our current issues with AI and authenticity. Despite Twitter being a founder of C2PA, it withdrew from the initiative after Musk purchased it and renamed it X. Parsons said he can verify that X is not currently involved with C2PA, and that the coalition would “embrace X participating actively.” X is a huge online space that enables news to spread quickly, and many brands and notable figures favor the platform for sharing announcements with their fans. But between the constant controversies of Grok generating violent and sexualized material of men, women, and children, and Musk sharing misleading deepfakes, X clearly has no interest in protecting its 270 million daily users from AI fakery or misinformation. That means a lot of people are using X as a major news source — and sometimes spreading that news to other platforms — despite having little to no assurance that what they’re seeing is real.

Reality Defender CEO Ben Colman also notes that we wouldn’t see AI slop and deepfakes going unlabeled and spreading like wildfire if C2PA alone were a viable solution, and that leaning entirely on labeling or watermarking assumes malicious AI content is only made with a few specific tools.
“Which is the absolute wrong assumption, mind you, but that’s what we’ve got powering moderation for the world’s biggest social platforms at the moment,” Colman told The Verge.

Even an effective labeling system might not solve the problem. One recent study found that transparency warnings seem insufficient to prevent harm from AI-generated deepfakes, noting that there is “little empirical evidence to support the effectiveness of AI transparency.”

Still, that hasn’t stopped everyone from parroting variations of the same message we’ve been hearing for years: that standards like C2PA are an important step in developing authenticity and deepfake detection systems and are a work in progress. Parsons said that he understands “potential frustration that there could be more and faster,” and that the ability to see evidence of C2PA across online platforms “is coming,” even if it’s coming “more slowly than any of us would like.”

You would think that, if AI providers like Meta and Google were truly dedicated to protecting people from being deceived or misled, those companies would stop pumping out tools that massively contribute to those problems until there’s a solution — if one can actually be found. Mosseri’s concerns about the importance of preserving reality fall flat when Meta is actively pushing an Instagram alternative that’s entirely AI slop. OpenAI also launched a TikTok clone made up of AI-generated videos that violated copyright laws and imitated real people without permission.
YouTube has loudly pledged to combat rising levels of slop content on the platform, while encouraging creators to use Google’s AI models during video production.

All of this shows that the AI providers steering C2PA are trying to have their cake and eat it too, seemingly abdicating responsibility for controlling their misinformation machines while said machines are making them money. OpenAI makes most of its revenue from charging ChatGPT and Sora users subscriptions to unlock higher image and video generation limits. AI slop is so pervasive on YouTube that it made up 10 percent of the platform’s fastest-growing channels in July 2024, despite YouTube introducing policies to curb “inauthentic content.” Meta is preparing to lock some AI capabilities behind premium subscriptions for Instagram, Facebook, and WhatsApp, and CEO Mark Zuckerberg is



Read Original at The Verge

Related Articles

The Verge · about 2 hours ago
Yep, it’s fast: Donut Lab’s solid-state battery gets its first test result

Since announcing earlier this year that it was on the cusp of a major battery breakthrough, Finnish startup Donut Lab has faced a lot of questions, and plenty of skepticism, about its production-ready, solid-state battery. Could the company really make a fast-charging battery at scale while avoiding some of the theoretical production headaches that have stymied past efforts? Today, Donut Lab sought to dispel some of the doubts with the release of the first independent test of its battery, evaluating its charging speed and the "thermal behavior" of its pack. The test, which was conducted by state-owned VTT Technical Research Centre of Finlan … Read the full story at The Verge.

The Verge · about 2 hours ago
AOC’s 27-inch 1440p QD-OLED gaming monitor is down to $380

It’s tough not to gush about a 27-inch 1440p QD-OLED gaming monitor that costs under $400 (I’ve done it before!). AOC’s G-Sync-compatible model with a 240Hz refresh rate and a near-instant response time is down to $379.99 at Best Buy, which matches the lowest price I’ve ever seen for a model with these specs. This seems like a great entry-level OLED for your gaming desktop or laptop setup; it has a similar 111 pixels per inch (PPI) to its competitors, a three-year warranty that protects against burn-in from normal use (when you use its panel protection settings), and a 16:9 aspect ratio that makes it ideal for PC and console gaming.

AOC 27-inch QD-OLED gaming monitor (Q27G41ZDF): $379.99 ($549.99) at Best Buy

The benefits of QD-OLED over the IPS and TN panels commonly used in monitors are immediately obvious when you compare them side by side. QD-OLED offers deeper blacks (no more black appearing as hues of gray) and better contrast with more color and brightness accuracy. Games and movies will look better than ever. Google Docs? Not so much. Brightness and text clarity are areas where this tech falls behind; viewing a huge, white Google Doc on this and other OLEDs will make it appear somewhat dim. And, while newer OLED monitors boast clearer text thanks to vertical RGB stripes in their panels, you might notice some fringing around letters with this monitor (and many others on the market) if you look closely.

Other Verge-approved deals: Sony has discounted a fleet of PS5 games and accessories on its PS Direct site, and you’ll also find many of them on Amazon, Best Buy, Walmart, and GameStop. The most notable game discount that exists only on Sony’s site (for now, at least) is on Ghost of Yōtei, one of the PS5’s best exclusives from 2025. Previously $69.99, you can grab it on disc for $49.99 through March 10th. The third-person action game is set 300 years after the events of Ghost of Tsushima, and you control vengeance-seeking Atsu. As predicted, the

The Verge · about 3 hours ago
Hank Green will gladly take billionaire money for education videos

Today, I’m talking with Hank Green, a longtime friend of Decoder and the cofounder and now former owner of Complexly, an online education company he started with his brother John in 2012. I say former owner because Hank and John have just converted Complexly into a nonprofit and given up their ownership of the company in the process. That’s some of the purest Decoder bait that ever was, because it’s all about how you structure a company and how you make decisions about changing that structure. So of course I had to bring Hank back on to talk all about it. But in addition to being pure Decoder bait, the story of Complexly is also about media, and how any of us can look at the internet and video landscape of 2026 and try to do something meaningful and ethical with it — while still growing an audience and making enough money to survive. Verge subscribers, don’t forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here. If you’ve been following Decoder or The Verge, you know I’ve been obsessed with all that for quite a while. About two years ago, Hank interviewed me for this show, and he and I talked a lot then about why I call The Verge the “last website on Earth,” and how video has really taken over the world. Regular Decoder listeners have also heard me tell a whole lot of CEOs and media executives that if I had to start over again now, The Verge would probably be a YouTube or TikTok channel. But starting a business on those platforms also means giving up a lot of control over your distribution, and Hank and I spent a lot of time talking about that in this episode. Where you’ll hear Hank get particularly passionate is when he’s talking about where the money is, where it should be, and what prevents it from going there. Because it turns out there’s a lot of money sloshing around in the world. It’s just maybe not allocated to the people who are doing the work. This was a really fiery c

The Verge · about 3 hours ago
Inside Microsoft’s big Xbox leadership shake-up

Asha Sharma named EVP and CEO, Microsoft Gaming. | Image: The Verge, Microsoft

Xbox fans had been anticipating the retirement of Microsoft Gaming CEO Phil Spencer for years, but what most hadn't expected was the departure of Xbox president Sarah Bond too. For many outside the company, Bond seemed like Spencer's natural successor, a deputy of sorts. Microsoft CEO Satya Nadella and Microsoft CFO Amy Hood clearly didn't agree. Instead of picking Bond for the role, Microsoft promoted Asha Sharma, a former Microsoft AI executive, to the top of Xbox. The decision to overlook Bond might have surprised many Xbox fans, but for the more than a dozen current and former Microsoft employees I've been speaking to, it's felt inev … Read the full story at The Verge.

The Verge · about 5 hours ago
Nothing couldn’t wait to show off the Phone 4A

The Phone 4A’s Glyph Bar can be seen here as a line of seven squares to the right of the camera island. | Image: Nothing

After teasing the upcoming launch of its midrange Phone 4A last week, Nothing has now revealed what the rear of the device looks like. An official render of the Phone 4A shared on X shows off the brand's familiar transparent-industrial stylings, alongside a new "Glyph Bar" lighting feature located to the right of the triple camera island. This Glyph Bar features nine individually controllable mini-LEDs that appear as a line of seven square lights - six white, and one red - replacing the three LED light strips that surround the camera on Nothing's 3A devices. Nothing says that the Glyph Bar is 40 percent brighter than the previous A-series' … Read the full story at The Verge.

The Verge · about 5 hours ago
Uber launches robotaxi support project to aid AV partners

Uber is moving aggressively into robotaxis, striking deals with new partners and promising big investments to support future fleets - basically everything it can do except design and build the vehicles itself. (It tried that once, unsuccessfully.) Now, the ridehail giant is launching a new initiative to support its third-party robotaxi partners called Uber Autonomous Solutions. Basically, Uber is taking many of the things it does for its drivers and couriers - vehicle financing, fleet management tools, regulatory assistance - and making them available for its third-party AV partners, companies like Wayve, WeRide, Nuro, Waabi, and others. I … Read the full story at The Verge.