The new Google Cloud TPU v6e "Trillium" was showcased at SC24 without its heatsinks, offering a close look at Google's custom AI accelerator and its gains in compute, interconnect, and memory. The v6e doubles HBM capacity from 16GB to 32GB and doubles HBM bandwidth, while INT8 and bfloat16 peak performance improve by roughly 4.6–4.7x over the v5e. Link bandwidth also quadruples, from two 100Gbps links to four 200Gbps links, and overall inter-chip interconnect bandwidth more than doubles.
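The generation-over-generation factors quoted above can be checked directly; this is a minimal sketch using only the numbers stated in this article (link counts and per-link rates), not full chip specifications:

```python
# Figures as quoted in the article.
v5e_links, v5e_gbps_per_link = 2, 100   # two 100Gbps links
v6e_links, v6e_gbps_per_link = 4, 200   # four 200Gbps links
v5e_hbm_gb, v6e_hbm_gb = 16, 32         # HBM capacity per chip

# Aggregate link bandwidth per generation.
v5e_total = v5e_links * v5e_gbps_per_link   # 200 Gbps
v6e_total = v6e_links * v6e_gbps_per_link   # 800 Gbps

print(v6e_total / v5e_total)    # 4.0 -> the "quadrupled" link bandwidth
print(v6e_hbm_gb / v5e_hbm_gb)  # 2.0 -> the doubled HBM capacity
```

Doubling the link count and doubling the per-link rate compound multiplicatively, which is where the overall 4x figure comes from.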