06/10/2025, 09:46 PM UTC
Micron Begins Shipping HBM4 Memory for Next-Gen AI
➀ Micron has commenced shipments of HBM4 memory, delivering 2.0 TB/s per stack over a 2048-bit interface, a 60% performance boost over HBM3E (see the back-of-envelope bandwidth check after this list);
➁ Initial 36GB stacks target next-generation AI accelerators, built on Micron's 1-beta (1β) DRAM process with memory built-in self-test (MBIST) for reliability;
➂ Full production ramp is planned for 2026, aligning with next-gen AI hardware releases, while future designs may combine HBM with LPDDR for expanded memory capacity.
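
As a rough sanity check on the figures above, the sketch below (Python, illustrative only) derives per-stack bandwidth from interface width and per-pin data rate. The 9.6 Gb/s and 7.8 Gb/s pin rates are assumptions chosen for illustration, not values from Micron's announcement.

```python
# Back-of-envelope check of the per-stack bandwidth figures quoted above.
# Assumed per-pin data rates (not from the article): top-bin HBM3E runs
# around 9.6 Gb/s per pin on a 1024-bit bus, and a 2048-bit HBM4 stack
# needs roughly 7.8 Gb/s per pin to reach 2.0 TB/s.

def stack_bandwidth_tbps(interface_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in TB/s: (bus width in bits / 8) * per-pin Gb/s."""
    return interface_bits / 8 * pin_rate_gbps / 1000  # GB/s -> TB/s

hbm3e = stack_bandwidth_tbps(interface_bits=1024, pin_rate_gbps=9.6)  # ~1.23 TB/s
hbm4 = stack_bandwidth_tbps(interface_bits=2048, pin_rate_gbps=7.8)   # ~2.00 TB/s

print(f"HBM3E: {hbm3e:.2f} TB/s, HBM4: {hbm4:.2f} TB/s, "
      f"uplift: {(hbm4 / hbm3e - 1) * 100:.1f}%")  # ~62.5%, in line with the quoted 60% gain
```

Under these assumptions, doubling the interface width to 2048 bits is what carries the generational gain, since the per-pin rate implied by 2.0 TB/s is actually lower than typical HBM3E pin speeds.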
---
This article was generated by a large language model (LLM) and is intended to provide readers with expanded knowledge of semiconductor news content (Beta).