Meta Scales AI Infrastructure with AWS Chip Deal

Published by qimuai · Reads: 9 · First-hand compilation

Source: https://aibusiness.com/agentic-ai/meta-scales-ai-infrastructure-aws-chip-deal

Summary:

Tech giants race to build out AI infrastructure: Meta signs a large-scale chip deal with Amazon

To accelerate its AI compute expansion, Meta has signed a new agreement with Amazon Web Services (AWS). Under the deal, Meta will deploy Amazon's in-house Graviton line of general-purpose chips at scale to support its work on agentic AI. The pact is the latest in a recent string of major chip deals among tech giants.

Amazon says its latest Graviton chips feature a cache five times larger than the previous generation's, enabling faster data processing and greater bandwidth, both critical for the compute-intensive tasks behind agentic AI such as inference, orchestration, and memory management. While large language model training relies heavily on GPUs, the rise of agentic AI is driving a surge in demand for high-performance CPUs.

Santosh Janardhan, Meta's head of infrastructure, said that expanding to Graviton allows the company to run the CPU-intensive workloads behind agentic AI with the performance and efficiency it needs at its scale. He stressed that diversifying compute sources is a strategic imperative for supporting Meta's AI ambitions.

Nafea Bshara, a vice president at Amazon, said in a blog post that the deal is not just about chips but about providing the infrastructure foundation for AI systems that understand, anticipate, and scale efficiently to billions of people worldwide.

AI vendors have recently been racing to lock in next-generation AI infrastructure. Earlier this month, OpenAI and Anthropic each expanded their partnerships with Amazon to ramp up deployment of its in-house Trainium chips. In February, Meta struck a chip deal with AMD worth $100 billion and expanded its deal with Nvidia to use more of its chips. In April, Meta also deepened its partnership with Broadcom to jointly design and develop AI-specific chips.

English source:

The deal is the latest in a spate of major chip pacts as tech giants race to scale up AI compute.
Meta entered into a new agreement to deploy millions of general-purpose chips from Amazon, as part of the social media giant’s AI expansion efforts.
Under the deal, Meta will gain access to AWS’s Graviton line of processors, which are specifically designed for agentic AI.
While tools such as large language models rely on GPUs for training, the rise of agentic AI is increasing demand for high-performance CPUs that support inference and compute-intensive tasks such as orchestration and memory management.
Amazon said its latest Graviton chips feature a cache five times larger than the previous generation, enabling faster data processing and greater bandwidth -- both key to agentic workflows.
The agreement joins growing industry momentum to secure the infrastructure needed to support both current and next-generation AI systems.
“This isn't just about chips; it's about giving customers the infrastructure foundation … to build AI that understands, anticipates and scales efficiently to billions of people worldwide,” Nafea Bshara, vice president at Amazon, said in an April 24 blog post.
“As we scale the infrastructure behind Meta's AI ambitions, diversifying our compute sources is a strategic imperative,” Santosh Janardhan, head of infrastructure at Meta, said in the statement. “Expanding to Graviton allows us to run CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale.”
The deal is one of many signed over the past few months as AI vendors race to secure next-generation AI infrastructure.
Earlier this month, OpenAI and Anthropic both expanded partnerships with Amazon to ramp up deployment of the tech giant’s in-house Trainium chips.
In February, Meta made a chip deal with AMD worth $100bn, as well as an expanded deal with Nvidia to use more of its chips.
In April, the Facebook parent company also expanded its partnership with Broadcom to support the design and development of chips for AI-specific applications.
