AI needs a strong data fabric to create business value.

Summary:
Enterprise AI adoption hits a data bottleneck; experts call for a "data fabric" to activate business value
As artificial intelligence moves rapidly from experimentation into day-to-day enterprise operations, it is being applied ever more widely across core functions such as finance, supply chains, human resources, and customer operations. Business leaders are discovering, however, that the biggest obstacle to scaling AI is not model performance or computing power but the quality of the data it relies on for decisions and the missing business context behind that data.
Irfan Khan, president and chief product officer of SAP Data & Analytics, notes that AI can generate results quickly, but without an understanding of business context it cannot make sound judgments and may even drive flawed decisions. "Speed without judgment doesn't help; it can actually hurt," he stresses.
Today, many enterprises' traditional data strategies center on consolidating data into warehouses or lakes. That makes reports and insights easier to produce, but the consolidation strips away the meaning that connects data to concrete business processes, policies, and real-world decisions. In the past, human experts could compensate for this missing context, but for AI systems that must act autonomously it creates serious limitations: a system may produce answers that are technically accurate yet operationally wrong.
In response, the industry is turning to an architectural solution known as the "data fabric." Rather than physically centralizing data, it connects information across applications, clouds, and systems through an abstraction layer while preserving the semantic logic that describes how the business works. At its core, it combines three elements: intelligent compute for speed, a knowledge pool for business understanding, and agents that act autonomously on that understanding.
Khan explains that a well-designed data fabric lets AI systems interact with business knowledge rather than raw storage systems, ensuring that automated decisions reflect real business priorities and enabling coordination across systems and agents. This requires data accessibility across environments, a knowledge layer that unifies semantics through technologies such as knowledge graphs, and governance and policy enforcement throughout.
Surveys show that only a minority of enterprises consider their data management highly mature or feel fully prepared for AI integration. Yet, Khan notes, most enterprises already possess much of the knowledge needed to build a data fabric: years of operational data, master data, workflows, and policy logic already live in their business applications. "The key is not to create context from scratch but to activate and connect the context that already exists across the enterprise."
As the era of agentic AI arrives, responsibility for monitoring, analysis, and decision-making is shifting rapidly to software. Without a common knowledge layer connecting scattered data, multiple agents operating across departments may fail to coordinate because of conflicting objectives. Building a data fabric is becoming the key foundation for moving from AI experimentation to true automation and to trusted, consistent decisions.
English source:
Sponsored
AI needs a strong data fabric to deliver business value
A modern data fabric makes it possible to turn existing enterprise knowledge into a trusted foundation for AI.
In partnership with SAP
Artificial intelligence is moving quickly in the enterprise, from experimentation to everyday use. Organizations are deploying copilots, agents, and predictive systems across finance, supply chains, human resources, and customer operations. By the end of 2025, half of companies used AI in at least three business functions, according to a recent survey.
But as AI becomes embedded in core workflows, business leaders are discovering that the biggest obstacle is not model performance or computing power but the quality and the context of the data on which those systems rely. AI essentially introduces a new requirement: Systems must not only access data — they must understand the business context behind it.
Without that context, AI can generate answers quickly but still make the wrong decision, says Irfan Khan, president and chief product officer of SAP Data & Analytics.
"AI is incredibly good at producing results," he says. "It moves fast, but without context it can't exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn't help. It can actually hurt us."
In the emerging era of autonomous systems and intelligent applications, that context layer is becoming essential. To provide context, companies need a well-designed data fabric that does more than just integrate data, Khan says. The right data fabric allows organizations to scale AI safely, coordinate decisions across systems and agents, and ensure that automation reflects real business priorities rather than making decisions in isolation.
Recognizing this, many organizations are rethinking their data architecture. Instead of simply moving data into a single repository, they are looking for ways to connect information across applications, clouds, and operational systems while preserving the semantics that describe how the business works. That shift is driving growing interest in data fabric as a foundation for AI infrastructure.
Losing context is a critical AI problem
Traditional data strategies have largely focused on aggregation. Over the past two decades, organizations have invested heavily in extracting information from operational systems and loading it into centralized warehouses, lakes, and dashboards. This approach makes it easier to run reports, monitor performance, and generate insights across the business, but in the process, much of the meaning attached to that data — how it relates to policies, processes, and real-world decisions — is lost.
Take two companies using AI to manage supply-chain disruptions. If one uses raw signals such as inventory levels, lead times, and supply scores, while the other adds context across business processes, policies, and metadata, both systems will rapidly analyze the data but likely come up with different conclusions.
Information such as which customers are strategic accounts, what tradeoffs are acceptable during shortages, and the status of extended supply chains will allow one AI system to make strategic decisions, while the other will not have the proper context, Khan says.
"Both systems move very quickly, but only one moves in the right direction," he says. "This is the context premium and the advantage you gain when your data foundation preserves context across processes, policies and data by design."
In the past, companies managed the lack of context implicitly because human experts supplied the missing information; with AI, that shortfall creates serious limitations. AI systems do not just display information; they act on it. If a system does not explain why data matters, an AI model may optimize for the wrong outcome. Inventory numbers, payment histories, or demand signals might be accurate, but they do not necessarily reveal which customers must be prioritized, which contractual obligations apply, or which products are strategically important. As a result, the system can produce answers that are technically correct but operationally flawed.
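The "context premium" described above can be made concrete with a small sketch: two allocation policies read the same raw shortage signal, but only one carries business context about which accounts are strategic. All customer names, quantities, and rules below are invented for illustration, not drawn from any SAP product.

```python
# Two orders compete for limited stock during a shortage.
orders = [
    {"customer": "Acme",    "qty": 80},
    {"customer": "Initech", "qty": 80},
]
stock = 100  # not enough to fill both orders

def allocate_raw(orders, stock):
    """Context-free policy: fill orders first come, first served."""
    plan = {}
    for o in orders:
        take = min(o["qty"], stock)
        plan[o["customer"]] = take
        stock -= take
    return plan

def allocate_with_context(orders, stock, strategic):
    """Context-aware policy: strategic accounts are served first."""
    ranked = sorted(orders, key=lambda o: o["customer"] not in strategic)
    return allocate_raw(ranked, stock)

print(allocate_raw(orders, stock))                        # {'Acme': 80, 'Initech': 20}
print(allocate_with_context(orders, stock, {"Initech"}))  # {'Initech': 80, 'Acme': 20}
```

Both policies run equally fast on the same data; only the one that knows which account is strategic moves in the right direction.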
This realization is changing how companies think about AI readiness. Most acknowledge that they do not have the mature data processes and infrastructure in place to trust their data and their AI systems. Only one in five organizations consider their approach to data to be highly mature, and only 9% feel fully prepared to integrate and interoperate with their data systems.
Don't consolidate, integrate
The emerging solution is a data fabric: an abstraction layer that spans infrastructure, architecture, and logical organization. For agentic AI, the fabric becomes the primary interface, allowing agents to interact with business knowledge rather than raw storage systems. Knowledge graphs play a central role, enabling agents to query enterprise data using natural language and business logic.
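As a rough, stdlib-only illustration of the knowledge-graph idea (a sketch, not SAP's implementation), business facts can be held as subject-predicate-object triples that an agent queries by business meaning rather than by scanning raw tables. Every entity and predicate name here is hypothetical.

```python
# A tiny in-memory triple store encoding business context, not just raw values.
triples = {
    ("Acme",    "is_a",         "Customer"),
    ("Acme",    "account_tier", "strategic"),
    ("Initech", "is_a",         "Customer"),
    ("Initech", "account_tier", "standard"),
    ("Acme",    "buys",         "WidgetX"),
    ("WidgetX", "margin_band",  "high"),
}

def query(pred, obj):
    """Return every subject linked to `obj` through predicate `pred`."""
    return sorted(s for s, p, o in triples if p == pred and o == obj)

# The agent's question is phrased in business terms:
# "Which customers are strategic accounts?"
strategic_accounts = query("account_tier", "strategic")
print(strategic_accounts)  # ['Acme']
```

Production knowledge graphs add ontologies, provenance, and query languages such as SPARQL, but the shift is the same: the interface is business meaning, not storage layout.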
The value of the data fabric rests on three components: intelligent compute to provide speed, a knowledge pool to provide business understanding and context, and agents to take autonomous action grounded in that understanding. What makes this powerful is how these capabilities work together, says Khan.
The technology provides the architecture: a foundation that makes agent-to-agent communication and coordination possible. Process defines how business and IT share ownership and establishes governance and a culture people trust enough to adopt it. All three must now work together for a business data fabric to succeed.
"It empowers confident, consistent decisions, and when these elements all come together, AI just doesn't analyze and interpret the data — it drives smarter, faster decisions that really create business impact," he says. "This is the promise of a thoughtfully designed business data fabric, where every part reinforces the other, and every insight is grounded in trust and clarity."
Technically, building a data-fabric layer requires several capabilities. Data must be accessible across multiple environments through federation rather than forced consolidation. A semantic or knowledge layer is needed to harmonize meaning across systems, often supported by knowledge graphs and catalog-driven metadata. Governance and policy enforcement must also operate across the fabric so that AI systems can access data securely and consistently.
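The governance requirement can be sketched as a single policy-enforcing entry point that mediates every read, whichever underlying system holds the data. The systems, roles, and policy rules below are hypothetical examples, not a real product API.

```python
# Hypothetical federated sources: data stays in place, in different systems.
SOURCES = {
    "erp": {"payment_history": [120, 80, 200]},
    "crm": {"customer_notes": ["prefers rail freight"]},
}

# One policy table governs access for every agent, across every system.
POLICY = {
    "planning_agent": {"payment_history"},
    "support_agent":  {"customer_notes"},
}

def fabric_read(role, system, dataset):
    """Single governed entry point: enforce policy before touching any source."""
    if dataset not in POLICY.get(role, set()):
        raise PermissionError(f"{role} may not read {dataset}")
    return SOURCES[system][dataset]

print(fabric_read("planning_agent", "erp", "payment_history"))  # [120, 80, 200]
```

Because every agent goes through `fabric_read`, policy changes apply consistently across systems instead of being re-implemented per data store.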
Together, these elements create a foundation where AI interacts with business knowledge instead of raw storage systems — an essential step for moving from experimentation to real enterprise automation.
Beyond data isolation and dashboards
In the emerging era of agentic AI, the responsibility for monitoring, analyzing, and making decisions based on data increasingly shifts to software. AI agents can monitor events, trigger workflows, and make decisions in real time, often without direct human intervention. That speed creates new opportunities, but it also raises the stakes. When multiple agents operate across finance, supply chain, procurement, or customer operations, they must be guided by the same understanding of business priorities.
Without a common knowledge layer connecting disparate data together, coordination between systems quickly breaks down. One system might optimize for margin, another for liquidity, and another for compliance, each working from a different slice of data.
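A toy example of that breakdown: three agents score the same action against different objectives and would disagree, while a shared priority weighting from a common knowledge layer yields one consistent score. The numbers and objective names are invented for illustration.

```python
# One proposed action, scored against three business dimensions.
action = {"margin": 0.9, "liquidity": 0.2, "compliance": 0.7}

# Each agent sees only its own slice of the picture.
agents = {
    "finance":    lambda a: a["margin"],
    "treasury":   lambda a: a["liquidity"],
    "compliance": lambda a: a["compliance"],
}
isolated = {name: score(action) for name, score in agents.items()}
print(isolated)  # three agents, three conflicting verdicts

# A shared knowledge layer supplies one weighting of business priorities.
PRIORITIES = {"margin": 0.3, "liquidity": 0.3, "compliance": 0.4}
shared_score = sum(action[k] * w for k, w in PRIORITIES.items())
print(round(shared_score, 2))  # 0.61
```

With the shared weighting, every agent ranks candidate actions on the same scale instead of optimizing its own slice in isolation.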
Importantly, most enterprises already possess much of the knowledge needed to make this work, says Khan. Years of operational data, master data, workflows, and policy logic already exist across business applications; companies just need to make it accessible. Companies that deploy data fabrics gain greater trust in their data, with more than two thirds of enterprises reporting improved data accessibility and visibility and greater control over their data.
"The opportunity isn't just inventing context from scratch, it's activating and connecting the context across your business that already exists," he continues, adding that a data fabric is the "architecture that ensures data semantics, business processes and policies are connected as a unified system across all the clouds."
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.