06/10/2025, 08:06 AM UTC
VeriSilicon's Ultra-Low Energy NPU Provides Over 40 TOPS for On-Device LLM Inference in Mobile Applications
➀ VeriSilicon's NPU IP achieves over 40 TOPS for on-device LLM inference;
➁ Targets mobile applications with optimized energy efficiency;
➂ Addresses the growing demand for generative AI in edge devices.
---
This article was generated by a large language model (LLM) to provide readers with extended background on semiconductor news (Beta).