‘Nothing Ever Happens’ Is Over

Source: https://nav.al/over
Summary:
“The Era of ‘Nothing Ever Happens’ Is Over”
In the latest episode of his podcast, the well-known Silicon Valley investor Naval shares his thinking on technology, society, and the future. The core claim: the world is changing faster than ever, and the popular meme that “nothing ever happens” is out of date.
AI-Enabled Startups: Decentralization and the Fully Connected Team
Naval describes the organizational architecture at his current company, Impossible: a hub-and-spoke model with the CEO as the central node and an extremely flat team that does not even use collaboration tools like Slack. He admits he “hates organizational management,” believing that large organizations breed politics. He favors a “fully connected graph”: every employee is smart enough to find the people they need to work with on their own, and anyone who cannot operate that way is better suited to a traditional hierarchical company.
AI is not used explicitly for internal communication, but it already helps implicitly: it can read and summarize other people’s code and papers, identify experts from the codebase, and even extract data from email and vendor documents to generate Gantt charts on demand. Naval argues that AI lets hardware, software, and AI engineers “borrow” 20%–30% of one another’s skills, lowering the barriers to collaboration.
The Times Have Changed: From “Nothing Ever Happens” to “Interesting Times”
Asked what he is trying to figure out, Naval observes that AI is dominated by two to four giants (five if you count Nvidia). His key questions: does this end in monopoly or fragmentation? Does open source have a chance? Is distributed training possible? The prevailing view favors centralized training, but if that view turns out to be wrong, it would make an interesting contrarian bet.
He cites the old “Chinese curse”: “May you live in interesting times.” In his view, the post-COVID world has accelerated across geopolitics, economics, and technology. VCs are being pushed into hardware, rockets, drones, AI, and other “sci-fi” technologies, yet sci-fi engineers and authors are in short supply.
Drones: Democratized Violence and Individual-Level Mutually Assured Destruction
Naval believes drones have not yet reached their full potential. Their rise will change the structure of violence, pushing the logic of “mutually assured destruction” down to the individual level. The rifle gave birth to the modern nation-state; nuclear weapons left only seven to nine truly sovereign powers; drones could make any individual a lethal threat to anyone else. This “democratization of violence” will reshape how societies are architected.
Biothreats: AI Makes Bad Actors Easier, While Good Medicine Is Stuck in the System
Naval also warns that AI is lowering the barrier to biological weapons. Viruses that once required elite experts and laboratories could become accessible to far more people. The same AI could be used to develop vaccines and defenses, but “good guy” research is constrained by heavy regulation, nowhere more so than in medicine. He calls for preparing ahead while also reflecting: even during the COVID emergency, vaccine development was badly slowed by “bioethics” and bureaucracy.
A Hardware Renaissance: AI Interfaces Make the Software Bottleneck Irrelevant
Naval predicts a golden age for hardware. Historically, strong hardware was undermined by bad software. Now AI agents can interact with hardware directly, so users no longer need custom software: a security camera or a children’s toy might be programmed or controlled through an AI assistant alone. Behind this, hardware giants such as China’s manufacturers and Nvidia are pushing open source hard, which in essence “commoditizes software to unlock more hardware demand.”
Optimism Requires Creativity; Pessimism Is Too Easy
Facing the future, Naval stays rationally optimistic. People find doom scenarios easy to imagine because they only require negating what exists, while positive visions require creativity: no one 200 years ago could have imagined most of today’s jobs. Nuclear war and pandemics could indeed happen, but pessimism helps nothing. His call: actively cultivate and reward “irrational optimism,” because it is the only way out. “The people who can only shout doom are not the ones you want in a foxhole with you.”
English source:
‘Nothing Ever Happens’ Is Over
Nivi: You’re listening to the Naval Podcast. This is Nivi. There’s no set topic for this episode—it will be a potpourri.
The Fully Interconnected Startup
Nivi: Naval, how are you using AI at Impossible, your current company, to change how you manage the business? Or are you guys just too small and a bunch of brilliant independent contributors where it’s not having an effect on how you actually run the company?
Naval: It’s more the latter. We’re a hub-and-spoke architecture. My co-founder is the CEO, and everyone kind of reports into him. He’s just kind of the one product manager who runs around with everything in his head to try to bring this whole impossible task together. And everybody interfaces through him, and people are pretty smart.
We keep a very flat structure. We try to push people to communicate with each other directly. We don’t even use Slack if that gives you a sense. So we’re not using AI as a communication method explicitly inside. But implicitly, AI is still very helpful. So we’re not like Square. I know Jack Dorsey has reorganized Square around AI and maybe Tobi at Shopify is doing that. There are some guys who are very good at organizational management and they do these kinds of experiments.
I’ve never been good at organizational management. I actually hate organizational management because I hate organizations. I hate large groups. I think it’s just so hard to get things done and you’re not dealing with the best and the brightest and there’s always politics. So I just prefer keeping groups small.
And we count on people to just operate independently and communicate with each other as needed. Like I said, we don’t even use Slack. We don’t use any project management software. I think it’s just GitHub.
And then when people want to talk to each other, they just text each other. Literally—they talk one-on-one. And sometimes it’s chaotic and they have to figure out who to navigate their way towards. But that’s part of the skillset.
It’s sort of like in computer networks. How do you organize a network for efficiency? Because at some point the communication overhead gets very high. The traditional answer is hierarchy. It’s a tree system. It’s like there’s one person at the top—the CEO—then they have a bunch of VPs or SVPs reporting to them. Then you have a bunch of VPs below that, and then middle managers and so on, and that keeps things organized and marching in one direction.
But it’s stifling. There’s a lot of politics. You can’t talk to people two or three levels below you unless you go founder mode like Elon or Brian Chesky, and then it’s celebrated as some wonderful achievement that all of a sudden the CEO is allowed to talk to an engineer. You can tell I’m being sarcastic there. Like I just think that’s a terrible way to operate, but it’s a requirement of size, and we’re just not at that size, so I don’t like it.
Instead, I like the fully interconnected graph. And that’s insane. A fully interconnected graph is everyone talking to anyone, with a light hub-and-spoke overlay and one person in the middle who’s trying to keep everything in their head.
The thing about a fully interconnected graph in networking is that every node has to be highly intelligent. So that’s what you do. You hire highly intelligent people who can operate in a fully interconnected graph, and if they can’t navigate their way to the person they need to talk to, to solve a specific problem—or if they can’t cooperate or communicate with other people—then they don’t belong in this kind of an organization, and they should just go and find a hierarchical organization where they’re going to be more comfortable.
So we don’t really rely on any tools.
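A back-of-the-envelope sketch (my illustration, not from the episode) of why the fully connected graph demands intelligent nodes: a hierarchy of n people needs only n - 1 reporting links, while a fully connected graph allows n(n - 1)/2 channels, so the routing work shifts from the org chart to each individual.

```python
# Illustration only: link counts in a hierarchy (tree) vs. a fully connected graph.

def tree_links(n: int) -> int:
    """A hierarchy of n people is a tree: n - 1 reporting edges."""
    return n - 1

def full_graph_links(n: int) -> int:
    """A fully connected graph: every pair can talk, n * (n - 1) / 2 channels."""
    return n * (n - 1) // 2

for n in (10, 50, 200):
    print(f"{n:>3} people: tree = {tree_links(n):>3} links, "
          f"full graph = {full_graph_links(n):>5} channels")

# 10 people: 9 vs. 45. 200 people: 199 vs. 19,900. That quadratic growth is why
# every node has to be smart enough to route itself to the right person.
```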
You Don’t Need the Explicit Intranet Anymore
Naval: Now, AI is implicitly still a very helpful tool within the organization, and I can give you two examples, although there are more.
One is just if you’re reading code that was written by somebody else, and it’s very complicated, you can just have the AI read it for you and give you a summary. Papers: it can read other people’s papers and give you a summary. It can actually go through the codebase and tell you who in the organization is likely to be an expert on what topic and guide you to them.
So AI can do a lot of that digging for you. You don’t need the explicit intranet as much anymore. You don’t need the explicit marking down of things because the AI can figure out where you are.
You could even unleash the AI on the codebase—on the designs. Say you have hardware designs: you can unleash it on those. If you have suppliers and vendors, you can unleash it on the database or the file folder in which all the supplier and vendor documents are kept.
You could even unleash it on the company email if you wanted to and just say, “Where are we? How far are we actually from shipping? Draw me a Gantt chart based on where you think we actually are in terms of the estimates and the timelines, and who’s behind, and who’s ahead, and which divisions are lacking resources.”
AI can constantly be doing this data analysis and digging and reporting for you—reports on demand. You don’t need specific charts and dashboards and business integration systems. You can just have AI literally recreate it on the fly. You maybe don’t want to be doing it every time because it might be too slow, but you can have it build these dashboards on demand, and you can have it update them on demand. So that’s one huge thing.
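As a rough illustration of this on-demand reporting, here is a minimal sketch; the ask_llm stub, file layout, and prompt are my assumptions, not Impossible’s actual tooling.

```python
# Minimal sketch: feed an LLM the raw artifacts (vendor docs, email exports) and
# ask for a status report on demand instead of maintaining a standing dashboard.

from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API (OpenAI, Anthropic, a local model)."""
    raise NotImplementedError("wire up your LLM client of choice here")

def on_demand_report(doc_dir: str, question: str) -> str:
    # Concatenate the source documents; a real system would chunk and retrieve.
    docs = "\n\n".join(p.read_text() for p in Path(doc_dir).glob("*.txt"))
    prompt = (
        "Here are our vendor documents and email exports:\n"
        f"{docs}\n\n"
        f"Question: {question}\n"
        "Answer with a short status report and a Gantt-style table of who is "
        "ahead, who is behind, and which teams lack resources."
    )
    return ask_llm(prompt)

# Usage: on_demand_report("vendor_docs/", "How far are we actually from shipping?")
```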
The other is that traditionally in a company you would have the hardware people—and at a company like ours, you have the hardware people, you have the software people, and you have the AI people—and they kind of wouldn’t be doing each other’s work. But now with AI they can at least do 20%–30% of each other’s work. So it makes the gluing between them a little easier.
The AI people, for example, can create their own software harnesses if they need to test something. It may not be good for production deployment, but it’s better than having to sit around and wait for a software person to come by and write you some custom code.
Same way, the hardware people can also write a little bit of software to bring up a new hardware device, where otherwise they might have needed to wait for software people. So having AI just lets everybody do a little bit of everything. It makes them more generalist. And by being more generalist, it means that you have better touchpoints to interface with other people.
You don’t necessarily need to have someone write you an explicit API to work with their code. You can actually just have the AI go and discover an API or create its own API, or you can just bypass the API and connect directly at whatever level you want, whether in the database or within the codebase. So it’s naturally a force multiplier, but we haven’t done anything explicit with it.
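To make the harness idea concrete, here is a hedged sketch of the kind of throwaway test rig an AI engineer might generate rather than wait for the software team; the sensor, thresholds, and model are invented stand-ins.

```python
# Throwaway harness sketch: fake the missing device driver so the model can be
# exercised today. Not production code, by design.

import json
import random

def fake_sensor_frame() -> dict:
    """Stand-in for the real driver the software team hasn't written yet."""
    return {"temp_c": round(random.uniform(20, 80), 1), "vibration": random.random()}

def run_model(frame: dict) -> str:
    """Toy stand-in for the model under test."""
    return "ALERT" if frame["temp_c"] > 70 or frame["vibration"] > 0.9 else "OK"

if __name__ == "__main__":
    for _ in range(5):
        frame = fake_sensor_frame()
        print(json.dumps(frame), "->", run_model(frame))
```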
May You Live in Interesting Times
Nivi: What are you trying to figure out right now?
The reason I ask is because you rarely get to see work product from smart people while it’s in motion. One of my obsessions is trying to excavate the secrets and inner thoughts of smart people.
Naval: The world is very different than it was a few years ago. There are two, maybe four, companies that are dominating AI—or five if you count hardware with Nvidia. And the question is, “Is that the stable situation?”
Is this going to be a commodity business or is this going to be a monopoly business, or is it going to be an oligopoly business? Does it top out at some point? Do they run out of data and do the models stop improving? Or do we go all the way to AGI?
Certainly the people inside the labs are believers in AGI, and think that all value is going to disappear into the AI labs. Does this end up even more consolidated than the Mag 7 world, where there’s just Mag 2 or Mag 1?
Or does it somehow fragment? Does open source really have a chance? Or do people just always want the smartest model? And so for that, they’ll give up privacy, they’ll give up open source, and they’ll just pay up in the cloud?
So I think these are huge questions. Huge. These are world-shattering questions, but I don’t know the answer to this.
Can you train AI in a distributed way? Is distributed training possible, or are these things going to centralize more and more? I think the conventional wisdom now is centralized training: two to four companies dominating, data centers and power are the limits, and everyone is rushing towards that.
But what if that’s wrong? That would be an interesting contrarian bet. But I don’t yet see the evidence. I think the emerging conventional wisdom on that part of AI is right.
As for AGI, I don’t know. I don’t want to be in the futurist business. Certainly the people in the frontier labs believe it. They’ve believed it for quite a while. The AI that I’m seeing has jagged intelligence. It’s also pretty bad at multimodal reasoning. I don’t think it has a good model of the world, although there are all these world model companies coming up.
Although I think they confuse it with something that merely looks like a world you can navigate. People say, “Oh, that’s a world model, because it looks like you’re generating something that looks like a world, and I can wander around in it.”
That’s not a world model. A world model is when you have an agent that has a model of the world inside its head, which allows it to take actions and then predict the consequences of its actions, and then adjust its own behavior based on what happened—whether it learned or not—so you have like a reinforcement learning loop. That’s a world model.
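A minimal toy sketch of that loop (my framing, not anything from a specific lab): the agent holds an internal model, predicts the consequence of each action, observes the real outcome, and updates the model from the prediction error.

```python
# Toy world-model loop: act, predict, observe, update from the prediction error.

class WorldModelAgent:
    def __init__(self):
        # Internal model: a guess at how much each action moves the state.
        self.model = {"push": 1.0, "pull": -1.0}
        self.lr = 0.5  # learning rate for model updates

    def predict(self, state: float, action: str) -> float:
        return state + self.model[action]

    def update(self, action: str, predicted: float, observed: float) -> None:
        # Move the model toward what actually happened.
        self.model[action] += self.lr * (observed - predicted)

def true_dynamics(state: float, action: str) -> float:
    """The real world the agent has to learn."""
    return state + (1.8 if action == "push" else -0.4)

agent, state = WorldModelAgent(), 0.0
for step in range(20):
    action = "push" if step % 2 == 0 else "pull"
    predicted = agent.predict(state, action)
    state = true_dynamics(state, action)
    agent.update(action, predicted, state)

print(agent.model)  # converges toward {'push': 1.8, 'pull': -0.4}
```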
And so we’re seeing world model companies emerging. I think Yann LeCun famously did one recently with JEPA. And so we are going to see new kinds of models, new kinds of agents, new kinds of intelligence. Are we going to get to AGI? I don’t know. Now that’s the same thing that everybody’s trying to figure out, right?
But this world is changing. The famous meme I think on X was like, “Nothing ever happens,” right? I think that’s over. I haven’t quite been able to put my finger on why, but I think anyone who is paying attention would tell you that post-COVID, the world is changing a lot faster.
There was some dislocation around COVID, or perhaps we were just in an unstable equilibrium and COVID broke that equilibrium, and then we had a phase shift.
But the world seems to be moving a lot faster now. And that’s true geopolitically. That’s true economically. That’s true technologically. VCs are now being forced to fund more hardware, rockets, drones, AI—you know, sci-fi technologies if you would call it.
So I think sci-fi technologies are in high demand. Sci-fi scientists and sci-fi authors are in low supply. Sci-fi engineers are in low supply. So we are seeing the world shift, and maybe it’s for the better, maybe it’s for the worse, but things are changing very, very fast now.
We are living within that Chinese curse of: ‘May you live in interesting times.’
Drones Democratize Violence
Nivi: Is there anything you’re trying to figure out in the world of hardware?
Naval: I think drones are still underleveraged, even though they’ve come to prominence on the battlefield recently. We still haven’t seen anywhere near the end game of drones. There’s nothing in particular I’m trying to figure out there.
I mean, I think drone defense is going to be very difficult, because a drone that’s attacking has the advantage of both kinetic energy—because it’s coming down on you—and it’s got the advantage of surprise, where the attacker can mass all the attack drones in one area, whereas the defender is always spread thin. The defender has one advantage, which is short range. The defender has to traverse a much smaller range going up than the attacking drone probably had to cover coming in.
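To put toy numbers on that range asymmetry (figures entirely my own, for illustration):

```python
# Illustrative only: the attacker covers kilometers coming in; the interceptor
# only has to climb the last few hundred meters.

attack_distance_m = 10_000      # assumed ingress distance
intercept_distance_m = 500      # assumed climb to intercept
attacker_speed_ms = 40.0        # ~144 km/h, assumed
interceptor_speed_ms = 50.0     # assumed

ingress_time_s = attack_distance_m / attacker_speed_ms           # 250 s of warning, at best
intercept_time_s = intercept_distance_m / interceptor_speed_ms   # 10 s to close

print(f"ingress: {ingress_time_s:.0f} s, intercept: {intercept_time_s:.0f} s")

# The catch Naval points to: the attacker chooses where to mass, so the defender
# needs this short-range advantage at every point being protected.
```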
But I think that drone warfare changes the structure of violence in society. So it’s going to actually fundamentally change how militaries and entire states are architected.
You could argue that the modern state rose up as a consequence of the rifle, because a rifle allowed a former peasant to take down a feudal knight on the battlefield. Then you needed factories to make rifles, and you had to drill musketmen, arm them, and train them. And so nation-states sprang up in place of feudal states and became dominant, as the right structure to do all of that within.
And then post-nuclear, there’s only seven to nine really independent sovereign nations, and everybody else lives underneath someone else’s nuclear umbrella. So those seven to nine call the shots, whether in the Security Council or elsewhere.
And so nuclear weapons were the new logic of violence after 1945.
Now the newest logic of violence is drones. And that’s going to fundamentally shift the game again, because drones bring the logic of mutually assured destruction down to the individual level. If you really hate somebody, in the future, a drone will be able to get them. That’s a weird form of violence coming up that’s going to basically restructure society as we know it.
I don’t know which way it goes. Is it going to be the case that you have a few very large, very powerful countries that control all the drones? Or is it that drones get so democratized that any individual can be deadly?
Biothreats Could Also Get Democratized
Naval: Also, I think one of the fears with AI is biological weapons. I don’t want to get people worked up but, in theory, if you were smart in the past, you could have figured out how to make a biological weapon. But the number of people who could have done it—who had both the expertise and had the access—were very low. Although it was still too high because the coronavirus that coincidentally got unleashed right next to the bioweapons lab in Wuhan figured it out.
So now that power is going to be democratized, just like vibe coding has been democratized. The number of people who can vibe code is hundreds or thousands of times greater than the number of people who were coding. In the same way, the number of people who can get access to biological weapons or viruses is hundreds of thousands of times the number who could have gotten access before. So that’s a pretty scary thought.
Now we can also do the opposite: hopefully the same AIs can also research how to create vaccines or how to create things to stop them. But the problem is that all the official research—all the good-guy research—is always gated behind regulations, and there are almost no regulations out there as bad as medical regulations.
One of the real opportunities out there, I think, is for AI to solve medicine and biology and therapies. But to do that, you need the data. You need to be able to look at everyone’s dataset. You need to be able to look at all the outcomes. You want as much data as possible.
And this data is hidden behind so many silos, and so many regulations and rules. And for good reason—you don’t want to target individuals. But if you could anonymize, clean up, and allow that dataset to get out there, and then you could let people test therapies with a right to try, then I think you could have reasonable defenses. But my fear is this will only happen in an emergency situation.
Even during COVID, when we had the emergency situation, we took a long time with the vaccines, which turned out not to be that effective anyway. But it took a long time with the vaccines, because we just didn’t let people operate under volunteer situations and right to try.
It just took way too long, whereas I think in the old days you would’ve had a bunch of healthy, young volunteers who would’ve said, “Sure, give me this vaccine and then give me COVID. I’ll take one for the team.”
But now because of “bioethicists,” we don’t even allow that. There’s just too much bureaucracy in the system. Too many people who can say “no” to the few people who are trying to get things done. And so for that, I do worry a little bit about the future.
AI Interfaces Unlock Hardware
Naval: What else is interesting in hardware? Hardware, I think, is going to undergo a renaissance, because historically the problem with a lot of hardware is that it’s very hard to write good software. And so you get all this incredible hardware coming out, but the software’s terrible so the device itself doesn’t function well.
Apple has done really well because they integrate hardware with high-quality software. Most companies do one or two things well. Apple does two things really well: they build great hardware; they build great software. They’re not that good at cloud and AI. Google is very good at cloud, and very good at AI, but they’re not very good at hardware, for example. And software, I would say they’re good at certain kinds of software. They’re good at cloud software—they’re not good at consumer software.
Now, all of a sudden, you have all these companies that are very good at hardware but not good at software—they can make good enough software. Or they don’t even need to make software. My AI agent will interact with the hardware directly and I don’t need software anymore.
So if you’re someone, for example, who is making security cameras, or you’re making toys for kids, or you’re making programmable lamps, all of a sudden the software for that just got a lot easier. You can have some bright kid with Claude Code, just get in there and build you all the software that you need. Or maybe you don’t need any software because your security cameras are now controlled by each person’s agent and don’t need custom software any longer. So I think that hardware itself is getting unlocked through software.
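A hypothetical sketch of what “controlled by each person’s agent” might look like; the camera address, endpoint, and command schema are all invented for illustration.

```python
# Invented example: an agent drives a camera through a small documented command
# surface instead of the vendor shipping a custom app.

import json
import urllib.request

CAMERA_URL = "http://192.168.1.42/api/command"  # hypothetical local device endpoint

def send_command(command: dict) -> str:
    """POST a JSON command to the camera; any agent can emit this payload."""
    req = urllib.request.Request(
        CAMERA_URL,
        data=json.dumps(command).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Given "record whenever someone is at the door after 10pm," an agent might emit:
send_command({
    "action": "set_rule",
    "trigger": {"event": "person_detected", "after": "22:00"},
    "response": {"record_seconds": 60, "notify": True},
})
```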
And this is, I think, one of the reasons why China is so big into open source. Now they’re behind, so when you’re behind, you try to catch up through open source. I think also it’s a little bit of their nationalist pride that, “We’re in it together.” Maybe the government’s funding them and encouraging them to do open source. But it also plays well into their hardware dominance. China is manufacturing most of the consumer electronics goods, and so for them, open source is hugely beneficial because it commoditizes their complement.
Same thing for Nvidia. Nvidia just wants to sell as many cards as possible, so they want people to use as many AI models as possible. So they want it all to be open source. So you have a bunch of hardware players, including most of China and Nvidia, whose incentive is, “Hey, it should all be open source.”
Hyperscalers also—they want it all open source. So they drive open source in the AI models, and that commoditizes the software, and the software unlocks more hardware. So I think we’re going to see more and more interesting, usable hardware, because the software is now figured out enough that the hardware becomes unlocked and quite usable.
Optimism Requires Creativity
Nivi: I don’t get scared or worked up about the future, partly because I’m a blind optimist and partly because I live in the first world.
Naval: Yeah, I don’t get worked up about it because I think it’s just so much easier to imagine doom scenarios than it is to imagine positive scenarios. Because optimism requires creativity. For example, the job loss thing is a clear example. It’s very easy to look at existing jobs and see how they will go away, but it’s very hard to predict what the next job will be. Yet inevitably there’s always a next job.
Because of that, I think people tend to fixate on the doom scenarios. It’s much easier to imagine the methods of doom than to imagine the methods of rising up.
There is no one—no one 200 years ago—who could have imagined how we would end up where we are today in terms of technological advancement and capitalism and economics and the rise of various societies. They just couldn’t have imagined it. They couldn’t have imagined 10% of the jobs that exist today, because back then everybody was working on a farm. But nevertheless, here we are.
So the same way, I think the doom scenarios they imagined are actually very similar to the same doom scenarios that we imagine today—like even a hundred years ago. Every decade I’ve been alive, there’s been a new environmental catastrophe to come along. Someone’s talking about the end of the world because of the environment. And then every decade there’s a catastrophe coming along because of a war that’s going to end the world.
Yeah, sometimes you get really close. COVID was scary. If COVID had actually turned out to be a much more nasty virus, we could have been in a bad spot. If there was a World War III where we start exchanging nukes, that would be a very bad scenario. So these things are easier to imagine. They’re more legible to our minds, so we hold them closer to us.
Plus the outcome there is so catastrophic that people obviously fixate on it. But I think it’s very hard to imagine creativity. It’s very hard to be optimistic. And so I think we have to nurture optimism. We have to reward optimism. We have to be irrationally optimistic, because that’s the only way out of this anyway.
So whenever people do the crabs in a bucket thing where they try to pull the optimists back down and they keep saying, “Doom, doom, doom,” they might be right, but it’s certainly not helping matters. That’s not the person you want to be in a foxhole with.