Conflicting Rulings Leave Anthropic in “Supply-Chain Risk” Limbo

Published by qimuai · First-hand compilation

Source: https://www.wired.com/story/anthropic-appeals-court-ruling/

Summary:

A federal appeals court in Washington, DC, ruled on April 8 that the AI company Anthropic “has not satisfied the stringent requirements” to temporarily lift the “supply-chain risk” designation imposed on it by the US Department of Defense. The decision conflicts with a ruling issued last month by a federal district court in San Francisco, and it is not yet clear how the two contradictory preliminary judgments will be reconciled.

The case involves two supply-chain laws with similar effects, with the San Francisco and Washington courts each ruling on only one of them. Anthropic says it is the first US company to be sanctioned under both laws, which are typically used to punish foreign businesses that pose a national security risk.

The three-judge appellate panel described the case as “unprecedented” and wrote that staying the supply-chain-risk designation “would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.” The panel acknowledged that the designation may cause Anthropic financial harm, but stressed that it would not risk “a substantial judicial imposition on military operations” or “lightly override” the military’s judgments on national security.

The San Francisco court had earlier found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company’s limits on how its technology could be used and its public criticism of those restrictions. That court ordered the supply-chain-risk label removed last week, and the Trump administration then restored access to Anthropic’s AI tools inside the Pentagon and across the rest of the federal government.

An Anthropic spokesperson said the company is grateful the Washington court “recognized these issues need to be resolved quickly” and remains confident the courts will ultimately find the supply-chain designations unlawful. The Department of Defense did not immediately respond to a request for comment, but acting attorney general Todd Blanche posted on social media that the ruling was “a resounding victory for military readiness,” adding that “military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”

The cases are testing the limits of the executive branch’s power over the conduct of tech companies. The legal battle between Anthropic and the Trump administration is escalating as the Pentagon deploys AI in its military operations against Iran. The company argues it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for sensitive tasks such as carrying out deadly drone strikes without human supervision.

Legal experts say Anthropic has a strong case against the government, but courts are often reluctant to overrule the White House on matters of national security. Some AI researchers argue that the Pentagon’s actions “chill professional debate” about the performance of AI systems. Anthropic has claimed in court that the designation cost it business, while government lawyers contend the designation bars the Pentagon and its contractors from using Claude AI in military projects.

As long as Trump remains in office, Anthropic may struggle to regain its significant foothold in the federal government. Final decisions in the two lawsuits could be months away; the Washington court is scheduled to hear oral arguments on May 19. So far, neither side has revealed much detail about how exactly the Department of Defense has used Claude, or how much progress it has made in moving staff to alternative AI tools from Google DeepMind, OpenAI, and others. The military says it has taken steps to ensure Anthropic cannot deliberately sabotage its AI tools during the transition.

English source:

Anthropic “has not satisfied the stringent requirements” to temporarily lose the supply-chain-risk designation imposed by the Pentagon, a US appeals court in Washington, DC, ruled on Wednesday. The decision is at odds with one issued last month by a lower court judge in San Francisco, and it wasn’t immediately clear how the conflicting preliminary judgments would be resolved.
The government sanctioned Anthropic under two different supply-chain laws with similar effects, and the San Francisco and Washington, DC, courts are each ruling on only one of them. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security.
“Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the three-judge appellate panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic may suffer financial harm from the ongoing designation, they did not want to risk “a substantial judicial imposition on military operations” or “lightly override” the military’s judgments on national security.
The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company’s proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply-chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government.
Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, DC, court “recognized these issues need to be resolved quickly” and remains confident “the courts will ultimately agree that these supply chain designations were unlawful.”
The Department of Defense did not immediately respond to a request for comment, but acting attorney general Todd Blanche posted a statement on X. “Today’s DC Circuit stay allowing the government to designate Anthropic as a supply-chain risk is a resounding victory for military readiness,” he wrote. “Our position has been clear from the start—our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”
The cases are testing how much power the executive branch has over the conduct of tech companies. The battle between Anthropic and the Trump administration is also playing out as the Pentagon deploys AI in its war against Iran. The company has argued it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision.
Several experts in government contracting and corporate rights have told WIRED that Anthropic has a strong case against the government, but the courts sometimes refuse to overrule the White House on matters related to national security. Some AI researchers have said the Pentagon’s actions against Anthropic “chill professional debate” about the performance of AI systems.
Anthropic has claimed in court that it lost business because of the designation, which government lawyers contend bars the Pentagon and its contractors from using the company's Claude AI as part of military projects. And as long as Trump remains in power, Anthropic may not be able to regain the significant foothold it held in the federal government.
Final decisions in the company’s two lawsuits could be months away. The Washington court is scheduled to hear oral arguments on May 19.
The parties have revealed minimal details so far about how exactly the Department of Defense has used Claude or how much progress it has made in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump calls itself the Department of War, has said it has taken steps to ensure Anthropic can’t purposely try to sabotage its AI tools during the transition.
Update 4/8/26 7:27 EDT: This story has been updated to include a statement from acting attorney general Todd Blanche.
