AI Weekly Issue 483: 100 Years From Now: The Ghost in the Contract

Published by qimuai · Reads: 28 · First-hand translation

Source: https://aiweekly.co/issues/100-years-from-now-the-ghost-in-the-contract

Summary:

The accountability vacuum of the AI era: Big Tech sheds liability while humans hand over their thinking

This week we look a century ahead and examine a quiet worry: as the most powerful systems humanity has ever built, AI systems, seep deeper into everyday life, the accountability behind them is vanishing. The technology is advancing at unprecedented speed, while the companies building it carefully engineer their legal exposure down to a minimum.

The legal "firewall": liability capped at $100
Several leading AI companies explicitly limit their own liability in their terms of service. Anthropic caps total user claims at "the greater of six months of fees or $100"; Microsoft declares Copilot "for entertainment purposes only" and tells users to proceed "at your own risk"; Google's Gemini disclaims all consequential damages and warns users not to rely on it for professional decisions. In effect, these clauses shift the entire risk of use onto users and business customers.

Industry leaders' calls to surrender thinking, and the accountability paradox
At the same time, tech leaders openly encourage people to hand their thinking over to AI. Palantir CEO Alex Karp urges young people to abandon the humanities; Nvidia founder Jensen Huang has repeatedly advised them to "stop learning to code"; OpenAI's Sam Altman preaches a vision of "abundance" that builds trust through a quasi-religious narrative. Yet if people no longer hold core skills such as programming, they cannot effectively check AI output for errors, and any clause holding "the human responsible" becomes legally meaningless. The Linux kernel maintainers recently made it explicit that all code must receive final review from a human who bears legal responsibility, a principle that stands in sharp contrast to the liability avoidance of the major AI companies.

The regulatory contest: industry lobbying suppresses accountability legislation
In 2024, California's SB 1047 would have required safety testing for AI and established liability for catastrophic harms. It passed the legislature by a wide margin and enjoyed public support in every poll, yet the governor vetoed it under intensive lobbying from the tech giants. The industry's political influence is ballooning: OpenAI's federal lobbying spending grew nearly sevenfold in a single year. Capital-backed resistance is, for now, overpowering legislative efforts to build a mandatory accountability regime.

From code to lives: unaccountable military applications sound the alarm
The cost of missing accountability is already visible in warfare. According to reporting, Israel's military operations in Gaza used an AI system called "Lavender" to generate strike lists with an error rate of roughly 10%, while human operators spent only about 20 seconds reviewing each target before authorizing an attack, contributing to heavy civilian casualties. The lesson: when AI assists in life-and-death decisions and the human serves merely as a rubber stamp, the chain of accountability breaks entirely.

A hundred years on: augmentation or servitude?
If current trends continue, 2124 may be a world where AI is everywhere and accountability is nowhere. The "augmented human" could turn out to be the person who signed every waiver and surrendered every judgment, living under systems that cannot be called to account. Real augmentation may consist precisely in keeping the ability to say no, keeping independent judgment, and holding on to legal recourse and core technical competence.

History suggests that capital will not discipline itself out of a systemic accountability gap: it took fifty years of legal compulsion before the tobacco industry accepted even partial responsibility, and no one went to jail. AI's lobbying power already exceeds what tobacco's ever was. Once a technology is raised onto an altar, the throne will not be ceded voluntarily; only human clear-sightedness, and the will to fight for it, can hold civilization's line.

English source:

This is 100 Years From Now, a weekly series. Once a week, we skip ahead a century and imagine ordinary life in a world that's had a hundred years to absorb the things we're only beginning to build. No predictions — just honest speculation about where our choices lead.
This week: what happens when accountability disappears from the most powerful systems ever built.
AI doesn't care about accountability. It can't. It's a system that produces outputs, and to the machine, wrecking a career and saving a life are the same event.
Fine. A hammer doesn't care either.
But the people building this thing are asking us to hand over our thinking to it. Alex Karp, CEO of Palantir, just told the next generation to quit the humanities. He has a PhD in philosophy. Jensen Huang of Nvidia has been telling kids to stop learning to code for two years. Sam Altman talks about "abundance" the way pastors talk about paradise. The pitch is theological: surrender your judgment, trust the oracle, the machine sees farther than you.
Say we do it. What are we handing our thinking to?
An entity that has already written itself out of the legal equation.
Anthropic's Consumer Terms cap total liability at the greater of six months of fees or $100. Claude Code writes bad code, burns your credits, burns more credits fixing what it broke. You pay for both. If it nukes your production database tomorrow — $100.
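
To make the cap concrete, here is the arithmetic as a minimal sketch (illustrative only, not legal advice; the monthly fees below are assumed example values):

```python
def liability_cap(monthly_fee_usd: float) -> float:
    """Anthropic's Consumer Terms: total liability is capped at the
    greater of six months of fees or $100."""
    return max(6 * monthly_fee_usd, 100.0)

print(liability_cap(0))   # free tier: max(0, 100) -> 100.0
print(liability_cap(20))  # assumed $20/month plan: max(120, 100) -> 120.0
```

However severe the failure, recovery is capped at that number.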
Microsoft went further. Copilot's Terms of Use say:
"Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."
Entertainment purposes only. The same language a psychic uses. On a tool Microsoft sells to enterprises for $30 a seat and bakes into Windows.
Gemini's terms ship "as-is," disclaim all consequential damages, and warn: "Don't rely on the Services for medical, legal, financial, or other professional advice." That covers most of what knowledge workers do.
The pattern is identical. Give us your thinking, give us your money, give us your profession. When the thing you trusted is wrong, that's on you.
Now stack Huang on top. Don't learn to code. Okay — if you can't read the output, how do you catch the hallucination? The man telling you to stop learning is selling the hardware making the mistake. The company whose model wrote the mistake owes you $100.
This is god-king logic. Surrender, trust, and if the harvest fails it's because you lacked faith. The god can't be wrong. The god can't be sued.
Against this, one counterexample just landed. The Linux kernel maintainers published a formal policy on AI code. AI can assist, but AI cannot sign off. Every line in the kernel — the OS running most of the internet — has a human name on it, fully legally responsible. "The person using these tools is responsible for the output." That's the whole policy.
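
The mechanism behind that policy is the kernel's Signed-off-by trailer under the Developer's Certificate of Origin: every accepted patch carries the name and address of a human who vouches for it. A toy sketch of the principle (not the kernel's actual tooling; the commit text and names are invented):

```python
import re

# A Signed-off-by trailer per the kernel's Developer's Certificate
# of Origin: a human name plus a reachable email address.
SIGNOFF = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def human_signed_off(commit_message: str) -> bool:
    """Accept a patch only if a named human has signed for it."""
    return bool(SIGNOFF.search(commit_message))

msg = """mm: fix off-by-one in page reclaim

Signed-off-by: Jane Doe <jane@example.org>
"""
print(human_signed_off(msg))                          # True
print(human_signed_off("ai: autogenerated patch\n"))  # False
```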
The AI companies say the opposite: the AI does the work, and the human is liable anyway.
The trap: if Huang wins and nobody learns to code, the Linux policy becomes meaningless. You can't hold someone responsible for approving code they can't read. The signature becomes a ritual. A blood offering to the legal system.
If you think regulation will fix this, look at what happened when an actual liability bill showed up.
In 2024, California passed SB 1047 — safety testing, a kill switch, liability for catastrophic harms. Assembly vote: 41–9. Public support in every poll. OpenAI, Meta, Google, and a16z lobbied hard against it. Newsom vetoed it. The industry won. OpenAI's federal lobbying went from $260K in 2023 to $1.76M in 2024 — a sevenfold jump. Anthropic more than doubled.
You can have a technological singularity without accountability. Nothing in the physics requires the most powerful systems ever built to answer to anyone.
How do you force a private industry to be accountable when every incentive points the other way?
Look at tobacco. Fifty years of suppressed studies, manufactured doubt, doctors on the payroll. Millions of deaths. It took the 1998 Master Settlement — a legal mugging by 46 state attorneys general — to force them to stop. Nobody went to jail. That's what corporate accountability for civilizational harm looks like when capitalism polices itself. Fifty years late, and still no handcuffs.
The AI lobbying is already further along than Big Tobacco's ever got.
And here it stops being about code and starts being about who lives.
In Gaza, an Israeli AI system called Lavender generated a kill list of 37,000 Palestinians with a 10% error rate. Human analysts reviewed each target for about 20 seconds before authorizing a strike. Thousands of civilians died in homes flagged by a machine and rubber-stamped by a human too rushed to decide. The journalist who broke the story, Yuval Abraham, summed it up in one line: "AI-based warfare allows people to escape accountability."
It's the same companies. Palantir runs the Pentagon's Maven AI. Anthropic and OpenAI are chasing defense contracts. The philosopher telling you to stop thinking is the guy selling the targeting system.
Same structure as the $100 liability cap. Just with bodies.
That's where 2124 is headed. AI everywhere. Accountability nowhere. The people who called themselves enhanced will be the ones who signed every waiver, surrendered every judgment, and woke up in a world run by systems nobody can sue — where the code in your IDE and the drone over your head were built by the same company, licensed under the same terms, protected by the same lobbyists.
Enhanced is a marketing word for the opposite of power. The truly enhanced person is the one who kept the ability to say no. Kept the judgment. Kept the lawsuit. Kept the code. Kept the trigger.
The god-king doesn't give up the throne. You have to take it.
Nobody's even reaching.
Quick Poll
Which of these worries you most about the next hundred years?
⚔️ Killer machines, no one responsible
💼 Corporate impunity by design
🧠 A generation that cannot think
🧬 The enhanced divide
👁️ Surveillance without end
⚡ None of the above
Pick one. We'll cover the top answer next week.
