AI Weekly Issue 491: 100 Years From Now: The Last Election

Posted by qimuai · First-hand translation


Source: https://aiweekly.co/issues/491

Summary:

News roundup: a century on, as AI breaks through democracy's foundations, an exhausted humanity may give up the vote

This is a deep-dive written from a future vantage point. This week's "100 Years From Now" column focuses on an unsettling trend: as deepfake technology evolves from "flawed" to "indistinguishable to the naked eye," public trust in the authenticity of information is collapsing systemically, while the tech companies creating the chaos turn around and seize governing power in the posture of saviors.

Information ruins: from "doubt what you watch" to "no one can prove the truth"

The report notes that as early as the 2024 UK general election, over half of voters encountered misleading information about candidates, and a quarter saw deepfake content. By 2026, the telltale glitches in the technology had vanished; anyone with a phone could generate convincing fake video. The consequence is fatal: when a person cannot tell whether a politician's statement is real, lies can no longer be exposed and truth can no longer be proven. The informational foundation on which democracy runs has collapsed.

Shifting opinion: a weary public trusts algorithms over politicians

Stranger still, the public's reaction is not anger but fatigue. A World Economic Forum survey found that a quarter of Europeans would rather let AI run governance than trust politicians. One academic paper goes so far as to argue outright that democracy should be replaced by AI governance, on the grounds that human leadership is riddled with "cognitive biases, susceptibility to misinformation, and slow decision-making." The report predicts: "Once enough people believe 'you're too broken to do this,' everything else is logistics."

The manipulation loop: the companies creating the chaos also sell the "cure"

The report lays out the most unsettling chain of logic: the makers of the deepfake tools and the companies offering "governance solutions" are the same people.

The final choice: not hacked, but simply no longer cared about

The piece calls all of this "the singularity": not a sci-fi machine awakening, but a quiet political moment. "Some Tuesday, people grow tired of sorting real from fake. They cast one last vote, deciding to let something else take over. Nobody storms parliament; people just stop showing up."

The report's ultimate warning: democracy was never perfect, but it let people choose together amid the mess. A hundred years from now, that choice may look like "plowing a field by hand," something people did back before they "knew better." The last election will not be stolen or hacked; it will simply be taken over, quietly, by a system that can never be voted out, once enough people stop caring.

(Note: the above is a compiled summary of the original report; all figures and cases come from the public reports, government filings, and media coverage cited in the piece.)

Full translation:

This is 100 Years From Now. Once a week we skip a century and try to picture what life actually looks like when the stuff we're building now has had time to settle in. This week: the last vote.

Get more from AI Weekly
More signal, less noise — pick your channels.
You're reading the weekly brief. Below are the other ways to follow the story — every channel free, easy to leave.

→ Explore 16 deep dives
Weekly topic-specific newsletters: Generative AI, Machine Learning, AI in Business, Robotics, Frontier Research, Geopolitics, Healthcare, and more. Browse all 16 deep dives →

→ Breaking AI alerts
When something major breaks (a $60B acquisition, a regulator's emergency meeting, a frontier model leak), alert subscribers know within hours. Typically 0-2 emails per day. Get breaking alerts →

→ AI News Today (live)
Live dashboard updated as the scanner finds news: scored stories from the last 48 hours, weekly entity movers, and quarterly trend lines across 113 AI companies, people, and topics. Open AI News Today →

In Ireland in 2025, a deepfake showed the eventual president withdrawing from the race days before the vote. Fake footage of national broadcasters "confirming" it. Spread fast enough to matter.

In the 2024 UK general election, over half of voters said they saw misleading info about candidates. A quarter saw a deepfake. By 2026, the glitches are gone. Anyone with a phone can make one.

Think about what that means. A politician says something on video and you genuinely don't know if it happened. The detection tools are always one step behind. Platforms take hours to pull stuff that travels in minutes. And every time you watch anything now there's this low hum going maybe this isn't real.

Lying didn't disappear. It became impossible to catch. And once you can't catch lies, you can't prove truth either. The whole floor falls out.

Democracy always ran on this assumption that voters, given decent information, could make decent calls. That was never fully true — propaganda is ancient, politicians have always lied. But there was a baseline. You could fact-check a speech. Footage was footage. That baseline is gone and I don't think we're getting it back.

The weird thing is people aren't angry about this. They're tired. A WEF survey found a quarter of Europeans would prefer AI to run governance over politicians. Nobody loves algorithms. People are just done. Politicians lie, deepfakes lie, the media lies, your uncle on Facebook lies. The algorithm at least gives you the same answer twice.

That preference is only going one direction. An academic paper on SSRN already argues that democracy should be replaced by AI governance — human-led democracy is too riddled with "cognitive biases, susceptibility to misinformation, and slow decision-making" to run a complex society. The argument isn't even pro-AI. It's anti-us. You're too broken to do this anymore. Once enough people buy that, everything else is logistics.

I keep thinking about what the singularity actually is. We imagine some dramatic moment — a machine wakes up, alarms go off. I think it's going to be way quieter. Some Tuesday, people are exhausted from trying to figure out what's real, and they vote — maybe for the last time — to let something else handle it. Nobody storms anything. People just stop showing up.

Now I need you to follow a specific thread here, because this is the part that should really bother you.

The tools generating the deepfakes and the companies offering to "fix" governance are the same companies. And I don't mean that vaguely. I mean specifically.

OpenAI's own report admitted it disrupted "more than 20 operations" that used its models to interfere with elections. Deepfakes created with generative AI surged 900% in a single year. During the 2024 US election alone, OpenAI had to reject over 250,000 requests to generate deepfakes of political figures. A quarter million attempts. On one platform.

That's the product.

Now here's the pivot. That same company — the one whose tools are being used to attack elections — signed a $200 million contract with the Pentagon through "OpenAI for Government." It then gave ChatGPT to every federal agency for $1. Sam Altman said it out loud: "One of the best ways to make sure AI works for everyone is to put it in the hands of the people serving our country." This is the same man offering you compute tokens as a replacement for the salary your job used to provide.

Palantir is deeper in. Alex Karp — the philosopher who told you to stop thinking — runs the company that won a $10 billion Army contract, runs Project Maven (the Pentagon's AI surveillance and targeting system), and whose technology the UN Special Rapporteur has linked to surveillance of Palestinians in Gaza and the West Bank. Critics including the economist Yanis Varoufakis have called Karp's vision "technofascist logic" — not because of the rhetoric, but because of the governance model: AI systems that expand state capacity while civil constraints lag behind.

Anthropic and OpenAI are competing for classified defense contracts. OpenAI literally titled its Pentagon announcement "Our agreement with the Department of War."

So trace the full arc. These companies build the tools that generate the deepfakes. The deepfakes destroy public trust in information. The collapse of trust makes people give up on democracy. And the same companies show up offering to run governance instead, backed by military contracts, surveillance infrastructure, and lobbying budgets that dwarf anything Big Tobacco ever spent.

They killed the one accountability bill that passed. They capped their liability at $100. They told you to stop coding so you can't audit the output. And now they want the keys to the state.

This isn't conspiracy. Every link in this piece is a press release, a government filing, a news report, or a peer-reviewed paper. It's all public. That's maybe the worst part.

The singularity, when it comes, is going to be political. A civilization deciding that governing itself is too much work and handing the keys to something that can't be voted out.

Nobody ever voted for a dictator because the trains were running late. They voted for him because they were exhausted and he promised to make the mess go away. Democracy was always about choosing to live with the mess — because the process was yours and the clean answer never really was.

In 100 years that choice might look the way hand-plowing looks to us. Something people used to do before they knew better.

The last election won't be stolen or hacked. It'll be the one where enough people just stop caring. And something that can't be voted out will quietly take the chair.

If you want to go deeper
On deepfakes and elections:

On AI companies entering government and defense:

On AI governance and the end of democracy:

On AI liability, lobbying, and accountability:

On AI, universal basic income, and technocratic dependence:

This week's poll
When you see a political video clip now, what's your first reaction?
Last week, 201 of you voted:
Anthropic dominated the week. What does that mean for the next 12 months?

Thanks for reading AI Weekly. Please forward this to someone who needs to read it.

