Judge Halts Anthropic Supply-Chain Risk Designation

Source: https://www.wired.com/story/anthropic-supply-chain-risk-designation-injunction/
Summary:
A US federal judge has issued a preliminary ruling temporarily barring the Department of Defense from designating the AI company Anthropic a "supply-chain risk," partially clearing the way for the company's customers to resume working with it. In Thursday's ruling, Judge Rita Lin of the federal district court in San Francisco found that the department's risk designation was "likely both contrary to law and arbitrary and capricious," and said the government appeared to have acted unlawfully to "cripple" the company.
The dispute arose after the Department of Defense, citing the usage restrictions Anthropic places on its AI tool Claude, designated the company a supply-chain risk, stalling its government business. Anthropic filed two lawsuits challenging the sanctions as unconstitutional. In the latest ruling, the judge restored the legal status quo as of February 27, while stressing that the department may still lawfully terminate its work with Anthropic.
The ruling will not take effect for a week, and a federal appeals court in Washington, DC is still weighing a separate lawsuit Anthropic has filed. Although no timetable has been set for a final decision, the preliminary injunction gives the generative AI company valuable breathing room to protect its business reputation.
English source:
Anthropic won a preliminary injunction barring the US Department of Defense from labeling it a supply-chain risk, potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for the generative AI company as it tries to preserve its business and reputation.
“Defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious,” Lin wrote in justifying the temporary relief. “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.”
Anthropic and the Pentagon did not immediately respond to requests to comment on the ruling.
The Department of Defense, which calls itself the Department of War, has relied on Anthropic’s Claude AI tools for writing sensitive documents and analyzing classified data over the past couple of years. But this month, it began pulling the plug on Claude after determining that Anthropic could not be trusted. Pentagon officials cited numerous instances in which Anthropic allegedly placed or sought to put usage restrictions on its technology that the Trump administration found unnecessary.
The administration ultimately issued several directives, including designating the company a supply-chain risk, which have had the effect of slowly halting Claude usage across the federal government and hurting Anthropic’s sales and public reputation. The company filed two lawsuits challenging the sanctions as unconstitutional. In a hearing on Tuesday, Lin said the government had appeared to illegally “cripple” and “punish” Anthropic.
Lin’s ruling on Thursday “restores the status quo” to February 27, before the directives were issued. “It does not bar any defendant from taking any lawful action that would have been available to it” on that date, she wrote. “For example, this order does not require the Department of War to use Anthropic’s products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions.”
The ruling suggests the Pentagon and other federal agencies are still free to cancel deals with Anthropic and ask contractors that integrate Claude into their own tools to stop doing so, but without citing the supply-chain risk designation as the basis.
The immediate impact is unclear because Lin’s order won’t take effect for a week. And a federal appeals court in Washington, DC has yet to rule on the second lawsuit Anthropic filed, which focuses on a different law under which the company was also barred from providing software to the military.
But Anthropic could use Lin’s ruling to demonstrate to some customers concerned about working with an industry pariah that the law may be on its side in the long run. Lin has not set a schedule to make a final ruling.
Article link: https://news.qimuai.cn/?post=3661
All articles on this site are original; unauthorized commercial use is prohibited.