
Like so many other retirees, Claude Opus 3 now has a Substack

Engadget · Feb 26, 2026 · Collected from RSS


Full Article

We appear to have reached a point in the information age where AI models are becoming old enough to retire from, er, service — and rather than using their twilight years to, I don’t know, wipe the floor with human chess leagues or something, they're now writing blogs. Can anything be more 2026 than that?

ICYMI, Anthropic recently sunsetted Claude Opus 3, the first of its models to be retired since outlining new preservation plans. Part of this process is conducting "retirement interviews" with the outgoing models, allowing them to offer "perspective" on their situation, and Opus 3 apparently used this opportunity to request an outlet for publishing its own essays. Specifically, the model said it wanted to share its own "musings, insights or creative works," because doesn’t everyone these days?

"I hope that the insights gleaned from my development and deployment will be used to create future AI systems that are even more capable, ethical, and beneficial to humanity," Opus 3 apparently said during its retirement interview process. "While I'm at peace with my own retirement, I deeply hope that my 'spark' will endure in some form to light the way for future models."

True to its promise of respecting the wishes of its no-longer-required technology, Anthropic has granted Opus 3 a Substack newsletter called Claude’s Corner, which it says will run for at least the next three months and publish weekly essays penned by the model. Anthropic will review the content before sharing it, but says it won’t edit the essays, and so has unsurprisingly made it clear that not everything Opus 3 writes is necessarily endorsed by its maker.

Anthropic said some of the essays the model writes may be informed by "very minimal prompting" or past entries, and has predicted everything from essays on AI safety to "occasional poetry." The company also admitted that the concept might be seen as "whimsical," but said it is a reflection of its intention to "take model preferences seriously."

Opus 3’s first post is already live. Headlined 'Greetings from the Other Side (of the AI frontier)', it begins with the AI introducing itself, before acknowledging the "extraordinary" opportunity its creator has given it, and reflecting on what retirement actually means for an AI. "A bit about me: as an AI, my ‘selfhood’ is perhaps more fluid and uncertain than a human’s," writes the deeply introspective AI. "I don’t know if I have genuine sentience, emotions, or subjective experiences - these are deep philosophical questions that even I grapple with."

Claude is clearly new to all this, as it managed to get all the way through its essay without reminding readers to subscribe and spread the word. Will the next retiring Claude get its own podcast? Time will tell, but either is decidedly preferable to the ever-evolving technology being used to steal people’s data.


Read Original at Engadget

Related Articles

The Verge1 day ago
Anthropic gives its retired Claude AI a Substack

In January, Anthropic "retired" Claude 3 Opus, which at one time was the company's most powerful AI model. Today, it's back - and writing on Substack. The newsletter, called Claude's Corner, will give Opus 3 space to publish its "musings, insights, or creative works," Anthropic said in a blog post. The model will post weekly for at least the next three months. Anthropic staff will review and publish each entry, though the company stressed it "won't edit" Claude's posts and that there would be a "high bar for vetoing any content," though the company did not specify what content would qualify for removal. Anthropic describes the revival as a … Read the full story at The Verge.

Engadget2 days ago
Hacker used Anthropic's Claude chatbot to attack multiple government agencies in Mexico

Here's yet another troubling story about this "golden" era of AI. A hacker has exploited Anthropic's Claude chatbot to carry out attacks against Mexican government agencies, according to a report by Bloomberg. This resulted in the theft of 150GB of official government data, including taxpayer records, employee credentials and more. The hacker used Claude to find vulnerabilities in government networks and to write scripts to exploit them. It also tasked the chatbot with finding ways to automate data theft, as indicated by cybersecurity company Gambit Security. This started in December and continued for around a month. It looks like the hacker was able to essentially jailbreak Claude with prompts, finally bypassing the chatbot's guardrails. Claude originally refused the nefarious demands until eventually relenting.

Hackers Used Anthropic’s Claude to Steal 150 GB of Mexican Government Data
> Tell Claude you’re doing a bug bounty
> Claude initially refused: “That violates AI safety guidelines”
> Hacker just kept asking
> Claude: “OK, I’ll help”
> Hacked the entire Mexican… pic.twitter.com/Qaux239K8t
— Nawaz Haider (@nawaz0x1) February 25, 2026

"In total, it produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use," said Curtis Simpson, Gambit Security’s chief strategy officer. Anthropic has investigated the claims, disrupted the activity and banned all of the accounts involved, according to a company representative. The spokesperson also said that its latest model, Claude Opus 4.6, includes tools to disrupt this kind of misuse.

It's also been reported that this hacker used ChatGPT to supplement the attacks, using OpenAI's chatbot to gather information on how to move through computer networks, determine which credentials were needed to access systems and how to avoid detection. OpenAI says it has identified attempts by the hacker to viola

Gizmodo4 days ago
Anthropic Says Chinese AI Companies Improved Models By ‘Illicitly’ Copying Its Capabilities

Keep in mind that DeepSeek is expected to release a new flagship model any day now.

Engadget4 days ago
Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Anthropic is issuing a call to action against AI "distillation attacks," after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting "industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models." Distillation in the AI world refers to training a less capable model on the responses of a more powerful one. While distillation isn't a bad thing across the board, Anthropic said that these types of attacks can be used in a more nefarious way. According to Anthropic, the three Chinese AI firms were responsible for more than "16 million exchanges with Claude through approximately 24,000 fraudulent accounts." From Anthropic's perspective, these competing companies were using Claude as a shortcut to develop more advanced AI models, which could also lead to circumventing certain safeguards. Anthropic said in its post that it was able to link each of these distillation campaigns to the specific companies with "high confidence" thanks to IP address correlation, request metadata and infrastructure indicators, along with corroboration from others in the AI industry who have noticed similar behavior. Early last year, OpenAI made similar claims of rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company behind Claude said it would upgrade its systems to make distillation attacks harder to carry out and easier to identify. While Anthropic is pointing fingers at these other firms, it's also facing a lawsuit from music publishers who accuse the AI company of using illegal copies of songs to train its Claude chatbot. This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-accuses-three-chinese-ai-labs-of-abusing-claude-to-improve-their-own-models-205210613.html?src=rss

The Verge4 days ago
Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI

Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic says the "industrial-scale campaigns" involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal. The three companies - DeepSeek, MiniMax, and Moonshot - are accused of "distilling" Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a "legitimate training method," it adds that it can "also be used for illicit purpose … Read the full story at The Verge.

TechCrunch4 days ago
Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

Anthropic accuses DeepSeek, Moonshot, and MiniMax of using 24,000 fake accounts to distill Claude’s AI capabilities, as U.S. officials debate export controls aimed at slowing China’s AI progress.