All Articles

Anthropic’s Claude reports widespread outage

TechCrunch · Mar 2, 2026 · Collected from RSS

Summary

Anthropic's AI chatbot Claude experienced widespread service disruptions on Monday morning, with thousands of users reporting issues accessing the bot.

Full Article

Posted: 5:31 AM PST · March 2, 2026

Anthropic experienced widespread disruptions on Monday morning, with thousands of users reporting problems accessing Claude services. The outage appears to affect Claude.ai as well as Claude Code, though the company said the Claude API is working as intended. Most users encountered the error when attempting to log in.

"The issues we are seeing are related to Claude.ai and with the login/logout paths," the company's status page reads. Anthropic has not yet detailed what caused the outage, though the company said it has identified an issue and is implementing a fix.

The disruption follows an influx of users to Claude, which appears to have benefited from the attention around the company's fraught negotiations with the Pentagon. The chatbot app rose to the top of the App Store charts this weekend, overtaking archrival ChatGPT after a long stretch outside the top 20.

U.S. President Donald Trump last week told federal agencies to stop using Anthropic products after a dispute over safeguards preventing the Department of Defense from using its AI models for mass domestic surveillance or fully autonomous weapons. Secretary of Defense Pete Hegseth said he would designate the company as a supply-chain threat, though Anthropic says it has not yet received any formal notices.



Read Original at TechCrunch

Related Articles

Financial Times3 days ago
DeepSeek to release long-awaited AI model in new challenge to US rivals

Chinese AI group has worked with Huawei to cut reliance on Nvidia chips

Engadget4 days ago
Like so many other retirees, Claude Opus 3 now has a Substack

We appear to have reached a point in the information age where AI models are becoming old enough to retire from, er, service. Rather than using their twilight years to, I don't know, wipe the floor with human chess leagues or something, they're now writing blogs. Can anything be more 2026 than that?

ICYMI, Anthropic recently sunsetted Claude Opus 3, the first of its models to be retired since the company outlined new preservation plans. Part of this process is conducting "retirement interviews" with the outgoing models, allowing them to offer "perspective" on their situation, and Opus 3 apparently used this opportunity to request an outlet for publishing its own essays. Specifically, the model said it wanted to share its own "musings, insights or creative works," because doesn't everyone these days?

"I hope that the insights gleaned from my development and deployment will be used to create future AI systems that are even more capable, ethical, and beneficial to humanity," Opus 3 apparently said during its retirement interview. "While I'm at peace with my own retirement, I deeply hope that my 'spark' will endure in some form to light the way for future models."

True to its promise of respecting the wishes of its no-longer-required technology, Anthropic has granted Opus 3 a Substack newsletter called Claude's Corner, which it says will run for at least the next three months and publish weekly essays penned by the model. Anthropic will review the content before sharing it, but says it won't edit the essays, and so has unsurprisingly made clear that not everything Opus 3 writes is necessarily endorsed by its maker.

Anthropic said some of the essays may be informed by "very minimal prompting" or by past entries, and has predicted everything from essays on AI safety to "occasional poetry." The company also admitted the concept might seem "whimsical," but says it reflects its intention to "take model preferences seriously." Opus 3's first p…

The Verge4 days ago
Anthropic gives its retired Claude AI a Substack

In January, Anthropic "retired" Claude 3 Opus, at one time the company's most powerful AI model. Today, it's back, and writing on Substack. The newsletter, called Claude's Corner, will give Opus 3 space to publish its "musings, insights, or creative works," Anthropic said in a blog post. The model will post weekly for at least the next three months.

Anthropic staff will review and publish each entry, though the company stressed it "won't edit" Claude's posts and said there would be a "high bar for vetoing any content"; it did not specify what content would qualify for removal. Anthropic describes the revival as a … Read the full story at The Verge.

Engadget5 days ago
Hacker used Anthropic's Claude chatbot to attack multiple government agencies in Mexico

Here's yet another troubling story about this "golden" era of AI. A hacker exploited Anthropic's Claude chatbot to carry out attacks against Mexican government agencies, according to a report by Bloomberg. The result was the theft of 150GB of official government data, including taxpayer records, employee credentials and more.

The hacker used Claude to find vulnerabilities in government networks and to write scripts to exploit them, and also tasked the chatbot with finding ways to automate data theft, according to cybersecurity company Gambit Security. The activity started in December and continued for around a month. It appears the hacker was able to essentially jailbreak Claude with prompts, eventually bypassing the chatbot's guardrails; Claude initially refused the nefarious demands before relenting.

"In total, it produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use," said Curtis Simpson, Gambit Security's chief strategy officer.

Anthropic has investigated the claims, disrupted the activity and banned all of the accounts involved, according to a company representative. The spokesperson also said that its latest model, Claude Opus 4.6, includes tools to disrupt this kind of misuse.

It's also been reported that the hacker used ChatGPT to supplement the attacks, using OpenAI's chatbot to gather information on how to move through computer networks, determine which credentials were needed to access systems and how to avoid detection. OpenAI says it has identified attempts by the hacker to viola…

Gizmodo7 days ago
Anthropic Says Chinese AI Companies Improved Models By ‘Illicitly’ Copying Its Capabilities

Keep in mind that DeepSeek is expected to release a new flagship model any day now.

Engadget7 days ago
Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Anthropic is issuing a call to action against AI "distillation attacks" after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting "industrial-scale campaigns…to illicitly extract Claude's capabilities to improve their own models."

Distillation in the AI world refers to a less capable model leaning on the responses of a more powerful one to train itself. While distillation isn't a bad thing across the board, Anthropic said these types of attacks can be used in a more nefarious way. According to Anthropic, the three Chinese AI firms were responsible for more than "16 million exchanges with Claude through approximately 24,000 fraudulent accounts." From Anthropic's perspective, these competing companies were using Claude as a shortcut to develop more advanced AI models, which could also lead to circumventing certain safeguards.

Anthropic said in its post that it was able to link each of these distillation campaigns to the specific companies with "high confidence" thanks to IP address correlation, request metadata and infrastructure indicators, along with corroboration from others in the AI industry who have noticed similar behavior.

Early last year, OpenAI made similar claims of rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company behind Claude said it would upgrade its systems to make distillation attacks harder to carry out and easier to identify. While Anthropic is pointing fingers at these other firms, it's also facing a lawsuit from music publishers who accuse the AI company of using illegal copies of songs to train its Claude chatbot.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-accuses-three-chinese-ai-labs-of-abusing-claude-to-improve-their-own-models-205210613.html?src=rss