Anthropic temporarily banned OpenClaw's creator from using Claude.

Posted by qimuai · Views: 25 · First-hand translation


Source: https://techcrunch.com/2026/04/10/anthropic-temporarily-banned-openclaws-creator-from-accessing-claude/

Summary:

Peter Steinberger, creator of the well-known open-source tool OpenClaw, recently disclosed on X that his Anthropic account had been briefly suspended over "suspicious activity," a disclosure that quickly drew attention across the AI developer community.

The incident followed Anthropic's recent change to its third-party access policy: Claude subscriptions no longer cover external harnesses, OpenClaw included, so users must instead pay separately for metered usage through the API. Steinberger said he had complied with the new rule and switched to the API, yet his account was suspended anyway. A few hours later, after the discussion gained traction, the account was reinstated. An Anthropic engineer clarified in the thread that the company has never banned anyone for using OpenClaw and offered to help.

Anthropic explained that the pricing change was made because subscriptions were not built for the heavy usage patterns of tools like OpenClaw, which can run continuous reasoning loops, automatically retry tasks, and connect to many third-party services, producing far more compute load than ordinary prompts or scripts. Steinberger, however, questioned the timing of the change, suggesting that Anthropic moved to restrict open-source tools only after folding popular features into its own agent, Cowork.

Notably, Steinberger is currently employed by Anthropic rival OpenAI. Asked why he still tests against Claude, he explained that this keeps OpenClaw compatible with all model providers, work he says is independent of his product-strategy role at OpenAI. He also acknowledged that because Claude remains widely used among OpenClaw users, compatibility testing is genuinely necessary.

The episode reflects the increasingly tangled mix of competition and cooperation between open-source tool developers and commercial model providers. As of publication, Anthropic had not explained why the account was restored, and Steinberger had not responded to a request for comment.


Original English text:

“Yeah folks, it’s gonna be harder in the future to ensure OpenClaw still works with Anthropic models,” OpenClaw creator Peter Steinberger posted on X early Friday morning, along with a photo of a message from Anthropic saying his account had been suspended over “suspicious” activity.
The ban didn’t last long. A few hours later, after the post went viral, Steinberger said his account had been reinstated. Among hundreds of comments — many of them in conspiracy theory land, given that Steinberger is now employed by Anthropic rival OpenAI — was one by an Anthropic engineer. The engineer told the famed developer that Anthropic has never banned anyone for using OpenClaw and offered to help.
It’s not clear if that was the key that restored the account. (We’ve asked Anthropic about it.) But the whole message string was enlightening on many levels.
To recap the recent history: This ban followed news last week that subscriptions to Anthropic’s Claude would no longer cover “third-party harnesses including OpenClaw,” the AI model company said.
OpenClaw users now have to pay for that usage separately, based on consumption, through Claude’s API. In essence, Anthropic, which offers its own agent, Cowork, is now charging a “claw tax.” Steinberger said he was following this new rule and using his API but was banned anyway.
Anthropic said it instituted the pricing change because subscriptions weren’t built to handle the “usage patterns” of claws. Claws can be more compute-intensive than prompts or simple scripts because they may run continuous reasoning loops, automatically repeat or retry tasks, and tie into a lot of other third-party tools.
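The usage pattern described above can be made concrete with a minimal sketch. This is an illustrative toy loop, not OpenClaw's actual code; `run_agent`, `call_model`, and `call_tool` are hypothetical names introduced here to show why a harness's reasoning loop, automatic retries, and tool hops each translate into additional metered API calls:

```python
def run_agent(task, call_model, call_tool, max_steps=20, max_retries=3):
    """Drive a model in a loop until it declares the task done.

    Every loop iteration (and every retry) is another metered API call,
    which is why a harness can burn far more compute than a single prompt.
    """
    calls = 0
    history = [task]
    for _ in range(max_steps):
        # Automatic retry on transient failures: each attempt is billed.
        for _attempt in range(max_retries):
            calls += 1
            try:
                action = call_model(history)
                break
            except RuntimeError:
                continue
        else:
            raise RuntimeError("model call kept failing")
        if action == "done":
            return calls
        # Hand off to a third-party tool, then feed the result back in.
        history.append(call_tool(action))
    return calls
```

Even in this toy version, a single task that needs one retry and one tool hop costs three model calls; a real harness chaining dozens of steps multiplies that accordingly.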
Steinberger, however, wasn’t buying that excuse. After Anthropic changed the pricing, he posted, “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” Though he didn’t specify, he may have been referring to features added to Claude’s Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign tasks. Dispatch rolled out a couple of weeks before Anthropic changed its OpenClaw pricing policy.
Steinberger’s frustration with Anthropic was again on display Friday.
One person implied that some of this is on him for taking a job at OpenAI instead of Anthropic, posting, “You had the choice, but you went to the wrong one.” To which Steinberger replied: “One welcomed me, one sent legal threats.”
Ouch.
When multiple people asked him why he’s using Claude instead of his employer’s models at all, he explained that he only uses it for testing, to ensure updates to OpenClaw won’t break things for Claude users.
He explained: “You need to separate two things. My work at the OpenClaw Foundation where we wanna make OpenClaw work great for any model provider, and my job at OpenAI to help them with future product strategy.”
Multiple people also pointed out that the need to test Claude is because that model remains a popular choice for OpenClaw users over ChatGPT. He also heard that when Anthropic changed its pricing, to which he replied: “Working on that.” (So, that’s a clue about what his job at OpenAI entails.)
Steinberger did not respond to a request for comment.
