<p>➀ Gelsinger discusses the difference between throughput computing and scalar computing, highlighting NVIDIA's focus on GPU-based computing for AI.</p><p>➁ He argues that GPUs are overpriced for AI inference, suggesting a need for more cost-effective solutions.</p><p>➂ Gelsinger hints at the potential for 'NPUs' as a more efficient alternative for AI inference.</p>