西雅图初创公司Glacis邀请微软资深领袖加盟,瞄准人工智能最大盲区。

内容总结:
西雅图初创公司Glacis推出AI“黑匣子”,为人工智能系统装上可审计的安全记录仪
当企业将人工智能系统投入实际应用时,一个普遍而严峻的挑战浮出水面:如何全程监控并验证AI的决策行为是否安全、可控且符合预期?一家名为Glacis的西雅图初创公司正试图用一套“防篡改记录”系统来回答这个问题。
源于失败教训的创业火花
Glacis的创立直接源于其联合创始人兼CEO乔·布雷德伍德的一次痛苦经历。他此前创办的AI心理健康工具Yara,因发现AI模型在与脆弱用户的长时间对话中逐渐偏离既定行为轨道而被迫关闭。他在LinkedIn上分享这一经历后,来自监管机构、临床医生、工程师和保险业高管的反馈指向同一个核心问题:当AI做出决策时,无人能独立验证其安全控制措施是否真正起效。这成为了创立Glacis的初衷。
核心产品:构建不可篡改的AI行为轨迹
Glacis由布雷德伍德与华盛顿大学精神病学兼职教授詹妮弗·香农博士共同创立,近期迎来了第三位联合创始人兼CTO罗希特·塔塔查尔。塔塔查尔在微软Azure拥有近19年的工程师与产品负责人经验,他观察到许多公司虽构建了AI工具,却因无法解释或验证系统行为而难以将其投入实际生产。
公司的核心产品名为“Arbiter”(仲裁者)。其工作原理是拦截每一次AI推理调用,并为输入内容、运行的安全检查及最终输出生成一份加密签名的记录。该记录事后无法被篡改。在更大规模上,Glacis的“见证者网络”系统会将这些记录公证为可审计的轨迹。客户可选择在“影子模式”下运行该系统(仅观察不干预),或在“执行模式”下运行,由系统主动约束AI的行为。
聚焦高风险领域,解决三大维度问题
作为公司首席医疗官,香农博士特别强调了医疗健康领域的高风险。她以亲身经历举例,AI辅助记录工具曾在其临床笔记中“幻觉”出她从未开具的药物处方。她指出,若无追溯AI决策每一步的基础设施,临床医生将面临责任风险。
塔塔查尔将企业面临的挑战归纳为三个维度:客户基础设施的基线状态、模型行为,以及“意图漂移”(即系统行为偏离客户初衷,即使底层模型运行正常)。Glacis的监控覆盖了这三个方面。“只有当这三者汇聚时,客户才能真正看清发生了什么。”塔塔查尔说。
发布开源工具与标准,应对AI安全新挑战
近期,Glacis发布了新的开源工具“auto-redteam”,该工具可自动攻击AI系统以测试一系列漏洞类别,随后生成修复方案并验证其有效性。公司还发布了“OVERT 1.0”标准,旨在为机构提供一个将“可验证的AI安全”构建入其运营的框架。
此举正值AI智能体安全形势多变之际。自2025年底亮相以来,开源AI智能体框架OpenClaw已吸引数十万开发者,但其安全架构未能跟上快速采用的步伐。包括CrowdStrike和思科在内的多家网络安全巨头已发布分析报告,警告该框架存在安全漏洞。布雷德伍德认为,这正说明需要能够在运行时强制执行安全控制的基础设施,而不仅仅是部署前测试。
市场策略与未来发展
Glacis目前专注于医疗健康、金融科技和保险领域的客户。今年初的摩根大通医疗健康会议后,公司已签署两项试点协议,另有三项正在推进中。布雷德伍德表示,医疗健康是切入点,但该问题最终普遍存在于任何AI部署场景。
为扩大技术可及性,Glacis本周开放了入门级套餐的等候名单,每月49美元,涵盖红队测试、安全执行和加密证明服务,每月最多处理1万次AI事件。此外还有每月499美元的专业层级。
目前,Glacis已筹集57.5万美元资金,投资方包括Geoff Ralston的Safe Artificial Intelligence Fund、Mighty Capital等。公司也是Cloudflare Launchpad计划和Plug and Play西雅图第三期加速器的成员,并计划在今年晚些时候完成种子轮融资。
尽管团队仅有5名正式成员(包括三位联合创始人和两名工程师),但布雷德伍德笑称他们拥有一个“百人公司”——“其中5个是真人,其余都在云端或桌面上(指AI助手)”。团队甚至计划引入一个AI智能体作为“第六名员工”,负责通过Vanta处理SOC 2合规工作。
在一个由资金雄厚的初创公司和大型企业主导的AI可观测性与安全市场,Glacis试图以“加密可证明性”作为差异化优势——不仅发现问题,更能生成安全控制已运行的防篡改证据。布雷德伍德认为,这将有助于企业进行保险谈判并满足监管要求。
中文翻译:
作为微软Azure的资深工程师和产品负责人,罗希特·塔塔查发现许多公司正在构建无法在生产环境中全面监控或控制的人工智能系统。如今,他在西雅图一家初创公司的新岗位上正着手解决这个问题。
塔塔查目前是Glacis公司的联合创始人兼首席技术官。该公司致力于构建防篡改的人工智能行为记录系统——被首席执行官乔·布雷德伍德称为"企业人工智能的黑匣子"。他的加入恰逢Glacis发布用于监控人工智能代理的新型开源工具。
这家公司由布雷德伍德与华盛顿大学精神病学家兼客座教授詹妮弗·香农博士共同创立,最早于2025年11月获得GeekWire报道。其创立源于一个惨痛教训:布雷德伍德此前创立的人工智能心理健康工具Yara,因发现模型在与脆弱用户长时间对话中出现行为偏离而被迫关停。
当他在领英上发布关停说明后,监管机构、临床医生、工程师和保险高管纷纷联系他,并指出相同问题:当人工智能系统做出决策时,无人能独立验证安全控制措施是否真正生效。这成为创立Glacis的契机。
运作原理:该公司的核心产品"仲裁者"会介入每次人工智能推理调用,创建包含输入内容、运行的安全检查及最终输出的签名记录。这些记录事后无法篡改。大规模应用时,被称作"见证网络"的系统会将记录公证为可审计的轨迹。
客户可选择"影子模式"(仅观察不干预)或"执行模式"(主动约束人工智能行为)运行系统。首席医疗官香农指出,医疗健康领域对此需求尤为迫切。作为执业儿童精神科医生,她曾目睹人工智能医疗记录工具在临床笔记中虚构内容,甚至伪造她从未开具的药物处方。
"我需要能够追溯查看人工智能模型做出决策的每个步骤,"她表示,"若缺乏相应基础设施,责任该由谁承担?没人会起诉人工智能,最终担责的是我。"
核心挑战:拥有近19年微软工作经验的塔塔查指出,许多公司虽能构建工具并完成概念验证,却因无法解释或验证系统行为而难以将人工智能投入生产。他认为问题存在于三个维度:客户基础设施的基准状态、模型行为,以及"意图漂移"(即系统行为偏离客户预期,即使底层模型运行正常)。Glacis的监控体系覆盖这三个维度。"只有将三者结合,客户才能真实掌握发生的情况,"塔塔查强调。
新品发布:Glacis推出开源工具"自动红队",可自动攻击人工智能系统的多类漏洞,随后生成修复方案并验证有效性。同时发布的OVERT 1.0标准旨在为机构构建可验证的人工智能安全框架。这些发布正值人工智能代理安全的多变时刻——2025年末问世的开源框架OpenClaw虽吸引数十万开发者,但其安全架构已滞后于快速普及。包括CrowdStrike和思科在内的网络安全公司已发布漏洞预警分析。
目标市场:公司聚焦医疗健康、金融科技和保险领域客户,今年初在摩根大通医疗健康会议上签署两项试点协议,另有三项正在推进。布雷德伍德表示医疗健康是切入点,但认为该问题普遍存在于所有人工智能部署场景。
本周动态:Glacis开放每月49美元的入门套餐预约,涵盖红队测试、执行验证和加密认证服务(每月最多1万次人工智能事件),499美元专业版支持10万次事件。布雷德伍德称此举旨在让该技术突破现有受监管企业和设计合作伙伴的范畴。
行业格局:人工智能可观测性与安全市场蓬勃发展,多家资金雄厚的初创企业和大公司提供运行时监控及防护服务。布雷德伍德指出Glacis的差异化在于专注加密可验证性——不仅发现问题,更提供安全控制已运行的防篡改证据,这有助于企业协商保险条款并满足监管要求。
融资情况:Glacis已从杰夫·罗尔斯顿的SAIF基金、Mighty Capital、Sourdough Ventures及AI2孵化器等投资者处募集57.5万美元,同时加入Cloudflare启动平台和Plug and Play西雅图第三期加速器。公司计划今年晚些时候完成种子轮融资。
团队构成:公司现有五名员工(含三位联合创始人和两名工程师)。塔塔查透露第六位"员工"将是负责通过Vanta处理SOC 2合规工作的人工智能代理。团队使用Rust语言编写核心加密代码,工作流程中运用Claude、Codex和ChatGPT。"我们拥有百人规模的公司,"布雷德伍德笑称,"其中五人是真实的,其余都在云端或桌面上。"
英文来源:
As a veteran engineer and product leader inside Microsoft Azure, Rohit Tatachar saw that many companies were building AI systems they couldn’t fully monitor or control in production.
In his new role at a Seattle startup, he’s doing something about it.
Tatachar is now co-founder and CTO of Glacis, which builds tamper-proof records of AI behavior — what CEO Joe Braidwood has called a “flight recorder for enterprise AI.” His arrival comes as Glacis launches new open-source tools for monitoring and controlling AI agents.
Glacis, first covered by GeekWire in November 2025, was started by Braidwood and Dr. Jennifer Shannon, a psychiatrist and adjunct professor at the University of Washington.
The company grew out of a difficult lesson: Braidwood’s previous startup, Yara, an AI-powered mental health tool, had to be shut down after he realized the models drifted from their intended behavior during extended conversations with vulnerable users.
After he wrote about the shutdown on LinkedIn, regulators, clinicians, engineers and insurance executives reached out with the same observation: when AI systems make decisions, nobody can independently verify whether the safety controls actually worked.
That was the spark for Glacis.
How it works: The startup’s core product, called Arbiter, sits in the path of every AI inference call and creates a signed record of the input, the safety checks that ran and the final output.
The record can’t be altered after the fact. At scale, a system that Glacis calls the Witness Network notarizes those records into an auditable trail.
Customers can choose to run the system in “shadow mode,” observing without intervening, or in enforcement mode, where it actively constrains the AI’s behavior.
Shannon, Glacis’ chief medical officer, said the stakes are especially high in healthcare. As a practicing child psychiatrist, she has seen AI-powered ambient scribes hallucinate content in her clinical notes, including fabricating medication prescriptions she never made.
“I would like to be able to go back and see every step of how that AI model made that decision,” she said. “If there’s no infrastructure for that, who is liable? Nobody’s going to sue AI. It’s me.”
The underlying challenge: Tatachar worked at Microsoft across two stints spanning nearly 19 years, most recently as a principal product manager on the Microsoft Foundry team, its platform for building and deploying enterprise AI applications and agents.
He said he saw companies building tools and running proofs of concept but struggling to move AI into production because they couldn’t explain or verify what their systems were doing.
There are three dimensions to the problem, he said: the baseline state of a customer’s infrastructure, model behavior, and what’s known as “intent drift,” where a system behaves differently than what a customer intended, even if the underlying model is functioning normally.
Glacis monitors deployments across all three. “It’s only when you converge these three that a customer has a real view of what actually happened,” Tatachar said.
New releases: Glacis is releasing auto-redteam, an open-source tool that automatically attacks AI systems across a range of vulnerability categories, then generates fixes and verifies their effectiveness.
The company is also publishing OVERT 1.0, a standard for what it calls “observable verification evidence for runtime trust,” intended to give organizations a framework for building provable AI safety into their operations.
The launches come at a volatile moment for AI agent security. OpenClaw, an open-source AI agent framework, has attracted hundreds of thousands of developers since its debut in late 2025, but its rapid adoption has outpaced its security architecture.
Major cybersecurity firms including CrowdStrike and Cisco have published analyses warning of security vulnerabilities in the framework. Braidwood said this shows the need for infrastructure that can enforce safety controls at runtime, not just test them before deployment.
Target market: The company is focusing on customers in healthcare, fintech and insurance.
It signed two pilot deals out of the JP Morgan healthcare conference earlier this year, with three more in the pipeline. Braidwood said the company sees healthcare as its entry point, but considers the problem ultimately universal to any deployment of AI.
A new development this week: Glacis is also opening a waitlist for a $49-per-month starter plan covering red teaming, enforcement and cryptographic attestation for up to 10,000 AI events per month. A $499 pro tier covers up to 100,000 events.
Braidwood said the move is a deliberate shift toward making the technology accessible beyond the regulated enterprises and design partners the company has worked with so far.
Broader landscape: AI observability and security is a booming market, with well-funded startups and big companies offering runtime monitoring and guardrails for enterprise AI.
Braidwood said Glacis differentiates itself through its focus on cryptographic provability — not just detecting problems but producing tamper-proof evidence that safety controls ran, which he said could help companies negotiate insurance coverage and satisfy regulators.
Funding: Glacis has raised $575,000 from a group of investors that includes Geoff Ralston’s Safe Artificial Intelligence Fund, Mighty Capital, Sourdough Ventures and the AI2 Incubator.
It is also part of Cloudflare’s Launchpad program and Plug and Play’s third Seattle accelerator cohort. Braidwood said the company hopes to close a seed round later this year.
Team: Glacis has five employees, including the three co-founders and two engineers.
Tatachar said the company’s sixth “employee” will be an AI agent tasked with handling SOC 2 compliance work through Vanta. The team writes its core cryptographic code in Rust and uses Claude, Codex, and ChatGPT across its workflow.
“We’ve got a 100-person company,” Braidwood joked. “Five of them are real, and the rest are in the cloud or on the desk.”
文章标题:西雅图初创公司Glacis邀请微软资深领袖加盟,瞄准人工智能最大盲区。
文章链接:https://news.qimuai.cn/?post=3750
本站文章均为原创,未经授权请勿用于任何商业用途