- Elon Musk has shared photos of the Dojo D1 supercomputer cluster, which is said to be equivalent to 8,000 Nvidia H100 GPUs for AI training.
- The Dojo D1 uses a system-on-wafer design manufactured by TSMC and is built to handle AI machine learning and video training workloads.
- Musk plans to have 90,000 Nvidia H100 GPUs, 40,000 Tesla AI4 computers, and Dojo D1 wafers running by the end of 2024, underscoring his substantial investment in AI hardware.