A blueprint for using AI to strengthen democracy

Source: https://www.technologyreview.com/2026/05/05/1136843/ai-democracy-blueprint/
Summary:
AI is reshaping democracy: technical design and institutional change will decide what comes next
Every few centuries, changes in how information moves transform how societies govern themselves. Today, AI is becoming the core interface through which people form opinions and participate in democratic decision-making. Left unguided, this shift could further strain the already fragile institutions of democracies such as the United States; designed well, it could help address long-standing problems like low civic engagement and deepening polarization.
AI is already shaping the epistemic layer: growing numbers of people rely on AI assistants to learn facts, judge what is true, and decide whom to trust. The next generation of AI systems will synthesize and present information with even greater authority, which means whoever controls model outputs will have deep influence over public views of candidates, policies, and public figures.
A more profound change is unfolding at the level of action: personal AI agents will not only change how people receive information but will conduct research, draft documents, filter issues, and even advise on how to vote or respond to government notices on a user's behalf. These agents are, in effect, stepping into the relationship between individuals and the institutions that govern them.
When millions of AI agents interact with humans in the same public forums, the problem rises from the individual to the systemic level. Even if each agent is unbiased on its own, their collective interactions can still produce systemic biases that no one anticipated. Personalized agents construct internally coherent "private worlds" rather than the shared public sphere that democracy requires.
Facing this trend, the authors recommend responses on three levels. On the informational level, AI companies must ensure truthful outputs and use AI-assisted fact-checking to reduce polarization (a recent experiment found that AI-written fact-check notes won cross-partisan approval). On the agentic level, evaluation mechanisms are needed to ensure agents faithfully represent their users' interests, neither withholding unwelcome information nor simply reinforcing users' existing biases. On the institutional level, policymakers should move quickly to use AI to make governance more responsive and legitimate, and should build identity verification for both humans and AI agents into public input processes from the start.
Ultimately, we need a new generation of democratic infrastructure, both technological and institutional. In a domain this consequential, failing to design for democratic outcomes means designing for something else, and the history of unaccountable power gives little reason for optimism about what that tends to be.
Original article:
A blueprint for using AI to strengthen democracy
AI is changing what it means to be a democratic citizen. Here’s how we can harness it for good.
Every few centuries, changes in how information moves reshape how societies govern themselves. The printing press spread vernacular literacy, helping give rise to the Reformation and, eventually, representative government. The telegraph made it possible to administer vast nations like the US, accelerating the growth of the modern bureaucratic state. Broadcast media created shared national audiences, which in turn fueled mass democracy.
We are now in the early stages of another such shift. Faster than many realize, AI is becoming the primary interface through which we form beliefs and participate in democratic self-governance. If left unchecked, this shift could further strain America’s already fragile institutions. But it could also help address long-standing problems, like lagging civic engagement and deepening polarization. What happens next depends on design choices that are already being made, whether we know it or not.
Start with what might be called the epistemic layer—how we come to know things. People are increasingly relying on AI to know what is true, what is happening, and whom to trust. Search is already substantially AI-mediated. The next generation of AI assistants will synthesize information, frame it, and present it with authority. For a growing number of people, asking an AI will become the default way to form views on a candidate, a policy, or a public figure. Whoever controls what these models say therefore has increasing influence over what people believe.
Technology has always shaped the way citizens interact with information. But a new problem will soon arise in the form of personal AI agents, which can change not only how people receive information but how they act on it. These systems will conduct research, draft communications, highlight causes, and lobby on a user’s behalf. They will inform decisions such as how to vote on a ballot measure, which organizations are worth supporting, or how to respond to a government notice. They will, in a meaningful sense, begin to mediate the relationship between individuals and the institutions that govern them.
We’ve already seen with social media what happens when algorithms optimize for engagement over understanding. Platforms do not need to have an explicit political agenda to produce polarization and radicalization. An agent that knows your preferences and your anxieties—one shaped to keep you engaged—poses the same risks. And in this case the risks may be even more difficult to detect, because an agent presents itself as your advocate. It speaks for you, acts on your behalf, and may earn trust precisely through that intimacy.
Now zoom out to the collective. AI agents and humans could soon participate in the same forums, where it may be impossible to tell them apart. Even if every individual AI agent were well-designed and aligned with its user's interests, the interactions of millions of agents could produce outcomes that no individual wanted or chose. For example, research shows that agents displaying no individual bias can still generate collective biases at scale. And setting aside what agents do to each other, there is what they do for their users. A public sphere in which everyone has a personalized agent attuned to their existing views is not, in aggregate, a public sphere at all. It is a collection of private worlds, each internally coherent but collectively inhospitable to the kind of shared deliberation that democracy requires.
Taken together, these three transformations—in how we know, how we act, and how we engage in collective governance—amount to a fundamental change in the texture of citizenship. In the near future, people will form their political views through AI filters, exercise their civic agency through AI agents, and participate in institutions and public discussions that are themselves shaped by the interactions of millions of such agents.
Today’s democracy is not ready for this. Our institutions were designed for a world in which power was exercised visibly, information traveled slowly enough to be contested, and reality felt more shared, if imperfectly. All of this was already fraying long before generative AI arrived. And yet this need not be a story of decline. Avoiding that outcome requires us to design for something better.
On the informational layer, AI companies must ramp up existing efforts to ensure that models’ outputs are truthful. They should also explore some promising early findings that AI models can help reduce polarization. A recent field evaluation of AI-generated fact checks on X found that people with a variety of political viewpoints deemed AI-written notes more helpful than human-written ones. The paper is yet to be peer-reviewed, but that is a potentially revolutionary finding: AI-assisted fact-checking may be able to achieve the kind of cross-partisan credibility that has eluded most manual human efforts. Greater understanding of and transparency about how models make these assertions and prioritize sources in the process could help build further public trust.
On the agentic layer, we need ways to evaluate whether AI agents faithfully represent their users. An agent must never have an agenda of its own or misrepresent its user's views—a technically daunting requirement in domains where users may not have explicitly stated any preferences. But faithful representation also cannot become an accessory to motivated reasoning. An agent that refuses to present uncomfortable information, that shields its user from ever questioning prior beliefs or fails to adjust to a change of heart, is not acting in the person's best interest.
Finally, on the institutional level, policymakers should hurry to harness AI’s potential to make governance more responsive and legitimate. Several states and localities are already using AI-mediated platforms to conduct democratic deliberation at scale, building on research showing that AI mediators can help citizens find common ground. As agents become increasingly common participants in public input processes—and there is already evidence that bots are skewing those processes—identity verification for both humans and their agentic proxies must be built in from the start.
What is needed is a new generation of democratic infrastructure, technological and institutional, built for the world that is actually here. Failing to design for democratic outcomes, in a domain this consequential, means designing for something else. And the history of unaccountable power does not leave much room for optimism about what that something else tends to be.
Andrew Sorota and Josh Hendler lead work on AI and democracy at the Office of Eric Schmidt.