<p>➀ Qualcomm launched the AI200 and AI250 AI inference accelerator cards and racks, targeting generative AI workloads such as large language and multimodal models (LLM/LMM) with rack-scale performance and cost efficiency; </p><p>➁ the AI200 emphasizes large LPDDR memory capacity (768 GB per card) for low total cost of ownership (TCO), while the AI250 adopts a near-memory computing architecture that delivers roughly 10x higher effective memory bandwidth at lower power; </p><p>➂ both solutions feature direct liquid cooling, PCIe scale-up and Ethernet scale-out, confidential-computing security protections, and 160 kW rack-level power consumption, supported by an optimized AI software stack for seamless deployment; the AI200 is expected to be commercially available in 2026 and the AI250 in 2027.</p>