The attacks on Sam Altman are a warning for the AI world



Source: https://www.theverge.com/ai-artificial-intelligence/911778/ai-violence-sam-altman-home

Summary:

Recent violence aimed at artificial intelligence executives has put the industry on alert to the social risks that come with the technology's development. According to the San Francisco Chronicle, a 20-year-old man had written about his fear that the AI race could drive humanity to extinction before allegedly throwing a Molotov cocktail at the home of OpenAI CEO Sam Altman. Altman's home appeared to be targeted twice within a week, and in Indianapolis a councilman who had supported data center development had shots fired at his home, with a note reading "No Data Centers" left at the scene.

Although the vast majority of criticism and protest against AI has been peaceful, including community resistance to energy-hungry data centers, demonstrations calling for a slowdown in development, and even hunger strikes directed at AI companies, the recent violence suggests the backlash may be escalating. Daniel Schiff, an assistant professor of political science at Purdue University, notes that anxiety over AI-driven job losses and existential risk, compounded by the psychological toll some AI interactions can take, forms a fraught backdrop of public sentiment.

After the attacks, organizations that advocate for AI safety and governance explicitly condemned the violence. Groups such as PauseAI stressed that peaceful rallies and policy advocacy give the public a rational outlet, and warned that without an organized, peaceful movement, extreme acts by isolated individuals become far more dangerous.

Reflecting after the incidents, Altman acknowledged that the industry must take public concern about the technology's risks seriously and called for de-escalating "the rhetoric and tactics" of the debate. White House AI adviser Sriram Krishnan and others argued that some doomer rhetoric may be indirectly inflaming public sentiment.

Experts suggest that AI companies and policymakers should proceed more deliberately and build buffers against social disruption, such as stronger safety nets for workers displaced by the technology. A database maintained by Princeton University's Bridging Divides Initiative shows that threats and harassment aimed at local officials have recurred in recent years; the initiative recommends that community leaders coordinate responses to risks in advance and take part in de-escalation training.

How to ease public anxiety and build space for rational dialogue while pushing the technology forward has become a question the AI industry must confront. As Schiff puts it: "We unleashed Pandora's box. Let's figure out how we're going to open this box more carefully in the future."


Original article:

The attacks on Sam Altman are a warning for the AI world
The vast majority of AI resistance is nonviolent, but recent attacks highlight the risk.
Before allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman’s home, the 20-year-old accused attacker wrote about his fear that the AI race would cause humans to go extinct, the San Francisco Chronicle found. Two days later, Altman’s home appeared to be targeted a second time, according to The San Francisco Standard. Only a week earlier, an Indianapolis councilman reported 13 shots fired at his door, with a note that read “No Data Centers,” after he’d supported a rezoning petition for a data center developer.
These unsettling incidents have set off alarms in and around the AI industry. There’s long been a vocal resistance to the technology, fueled by fears of job displacement, climate impact, and unconstrained development absent of safety guardrails. AI workers themselves have warned about serious risks. The vast majority of critiques and demonstrations against AI have been nonviolent — including local resistance to energy-intensive AI data centers and protests urging a slowdown of the rapidly accelerating technology. Protesters have targeted AI companies directly with tactics like hunger strikes.
Groups that advocate against accelerated AI development explicitly denounced violence following the attacks on Altman’s home. Further investigation will take place to determine the attackers’ motivations. But the limited information made public so far suggests an escalation of the backlash against the technology, and, perhaps, risk to industry players themselves.
Over the past few years, there has been a handful of other notable incidents rising to the level of threats and harassment aimed at local officials, according to a database of reports compiled by Princeton University’s Bridging Divides Initiative. Last year, for example, a community utility authority board member in Ypsilanti, Michigan, reported that masked protesters visited his home to protest a “high performance computing facility,” according to MLive, and one protester allegedly smashed a printer on their lawn.
Shortly after the first attack on Altman’s home, the CEO appeared to partially blame critical media coverage for the violence. Days earlier, The New Yorker had published a lengthy investigation that compiled over a hundred interviews and found that many people who had worked with him distrusted him and found inconsistencies in his actions. “There was an incendiary article about me a few days ago,” Altman wrote on his personal blog. “Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside. Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.” (He later walked back his rhetoric toward the article in response to a critique on X, writing, “That was a bad word choice and i wish i hadn’t used it.”)
Others took up the theme as well. White House AI adviser Sriram Krishnan, for example, wrote on X, “I think the doomers need to take a serious look at what they have helped incite and not just rely on ‘we condemn this and have said this is not the rational response’. This is the logical outcome of ‘If we build it everyone dies’” — a reference to a 2025 book by AI researchers Eliezer Yudkowsky and Nate Soares.
But Altman also recognized the way his industry could fuel highly emotional reactions from the general public. “A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology,” he wrote. “This is quite valid, and we welcome good-faith criticism and debate. … While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.”
OpenAI itself was founded on dire warnings about the technology’s impact. Cofounder Elon Musk warned in 2017 that AI posed “a fundamental risk to the existence of civilization.” Musk later joined an open letter calling for a pause on AI development after the release of ChatGPT, after he’d left OpenAI’s board, before launching his new AI company xAI. Following the attack on Altman’s home, Musk said he agreed on X with a post that said, “This is wrong. I dislike Sam as much as the next guy but violence is unacceptable.”
Even beyond apocalyptic scenarios, AI is reshaping the world’s social fabric in unpredictable ways. Many reports have detailed the psychological spirals that talking to an AI system for days on end can send people down, including allegations of AI-induced psychosis, suicide, and murder. That’s layered on top of real-life experiences of job loss due to AI, plus more existential concern about the world AI will create. “Take any labor movement that has been potentially rightly concerned about disruption and change, and then supercharge that with the AI apocalypse, and then supercharge that with chatbot sycophancy and romantic partners that are telling you to kill your ex-husband or telling you to marry your therapist or whatever it is. It’s not a huge surprise that we’re seeing scary acts like this,” says Purdue University assistant political science professor Daniel Schiff.
Schiff says that while we’d never want to see such violent attacks, he hopes that recent events can serve as “a constructive wake up call” for companies and policymakers to be extra thoughtful in the decisions they make about the technology. “It doesn’t excuse people who are acting poorly, but it does tell you that something is a little bit off, and not just in the heads of the people who are acting in this way,” he says.
A suspect in one of the attacks appeared to have joined the open Discord server of PauseAI, a group that supports a pause on frontier AI development until proven safety guardrails are in place. The organization released a statement saying he had no role in the group and had not attended any events. While PauseAI says it “unequivocally condemns this attack and all forms of violence, intimidation and harassment,” it also called out that “a handful of commentators have seized on this incident to paint the broader movement for AI safety as dangerous or extremist.”
PauseAI organizes protests and town halls and encourages followers to call policymakers about their concerns with AI. Its efforts give people with real concerns for the future a way to act peacefully, it says in its public statement. “The alternative to organised, peaceful movements is not silence,” the group writes. “It is isolated, desperate individuals acting alone, without community, without accountability and without anyone urging restraint or offering peaceful paths for action. That is a far more dangerous world and it is exactly the world we are striving to prevent.”
While not specific to AI-related violence, there are tested ways to build resilience against political violence. The Bridging Divides Initiative recommends community leaders and officials coordinate responses to risks in advance, and take part in deescalation training.
While Schiff doesn’t anticipate extreme rhetoric around AI ending, he suggests trying to turn down the temperature by pursuing positive ways to prepare collectively for the changes AI can bring, such as determining the appropriate social safety nets to deal with job displacement. “We unleashed Pandora’s box,” Schiff says. “Let’s figure out how we’re going to open this box more carefully in the future.”