Elon Musk's xAI Doubles Colossus AI Supercomputer Power to 200K NVIDIA Hopper AI GPUs
10/31/2024, 03:10 AM UTC
➀ Elon Musk's xAI is upgrading its Colossus AI supercomputer from 100,000 to 200,000 NVIDIA Hopper AI GPUs; ➁ Colossus, the world's largest AI supercomputer, is used for training xAI's Grok LLMs and powers the chatbot for X Premium subscribers; ➂ the Colossus cluster was completed in just 122 days, a feat recognized by NVIDIA CEO Jensen Huang.

Elon Musk's xAI startup is upgrading its Colossus AI supercomputer cluster, doubling its GPU count from 100,000 NVIDIA Hopper AI GPUs to 200,000.
Colossus, recognized as the world's largest AI supercomputer, is instrumental in training xAI's Grok family of large language models (LLMs) and powers the chatbot available to X Premium subscribers.
The facility was completed in just 122 days, significantly faster than the typical timeframe for systems of this scale, an achievement NVIDIA CEO Jensen Huang acknowledged by calling Elon Musk 'superhuman'. NVIDIA has also highlighted its partnership with xAI in building the state-of-the-art supercomputer.
During training of the large Grok model, Colossus has demonstrated strong network performance, sustaining high data throughput and low latency, both critical for AI workloads.
Elon Musk himself praised Colossus as the most powerful training system in the world, and an xAI spokesperson emphasized the role of NVIDIA's Hopper GPUs and Spectrum-X networking in enabling massive-scale AI model training.
---
This article was generated by a large language model (LLM) to provide readers with expanded coverage of semiconductor news (Beta).