
Alibaba's XuanTie C950 Sets a RISC-V World Record and Runs AI Agents Natively
Alibaba DAMO Academy's XuanTie C950 5nm RISC-V processor hits 3.2GHz and sets a new benchmark record, becoming the first RISC-V chip to natively support 100B-parameter AI models.
RISC-V Reaches a New High-Water Mark
The argument that RISC-V is a serious architecture for high-performance computing just got significantly stronger. On March 24, 2026, at the Xuantie RISC-V Ecosystem Conference in Shanghai, Alibaba's DAMO Academy unveiled the XuanTie C950 — a 64-bit RISC-V core built on TSMC's 5nm process node that achieves a clock speed of 3.2GHz and posts a benchmark score of 70 on standard RISC-V performance evaluations. That is a new world record for a RISC-V core, and the margin of improvement over its predecessor is not incremental: the C950 is more than three times faster than the C920, with memory bandwidth that is more than four times higher.
For the embedded systems, SBC, and edge AI communities, this matters in a very direct way. RISC-V has long promised a path to open-architecture silicon that could compete with ARM and x86 on performance while offering the freedom of an unencumbered instruction set. The XuanTie C950 makes good on that promise at a level that was not achievable 18 months ago.
The Architecture Behind the Record
The C950's performance gains come from a combination of architectural depth and manufacturing advances. The core implements 8-wide instruction decoding — handling eight instructions per clock cycle — with a 16-stage pipeline and an out-of-order execution window that can hold more than 1,000 in-flight instructions simultaneously. That window depth is what allows the processor to find and exploit instruction-level parallelism in demanding workloads, keeping execution units busy even when the instruction stream has dependencies.
The memory subsystem has been redesigned with particular attention to latency. The L1 data cache achieves a 4-cycle load-to-use latency — ultralow by the standards of any modern processor architecture, and critical for the data-intensive workloads that edge AI inference demands. A per-core private L2 cache supports large capacity configurations, and the chip's two-stage virtual memory address translation supports modern RISC-V virtualization extensions that make it suitable for running multiple workloads simultaneously on embedded server hardware.
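The clock and width figures above translate into concrete numbers worth a quick back-of-envelope check. A minimal sketch, assuming only the stated specs (3.2GHz clock, 8-wide decode, 4-cycle L1 load-to-use latency):

```python
# Back-of-envelope arithmetic from the stated C950 specifications.
# Assumptions: 3.2 GHz clock, 8-wide decode, 4-cycle L1 load-to-use latency.

CLOCK_GHZ = 3.2
DECODE_WIDTH = 8
L1_LOAD_TO_USE_CYCLES = 4

cycle_time_ns = 1.0 / CLOCK_GHZ                         # one cycle in nanoseconds
l1_latency_ns = L1_LOAD_TO_USE_CYCLES * cycle_time_ns   # L1 hit latency in wall-clock time
peak_decode_gips = DECODE_WIDTH * CLOCK_GHZ             # theoretical peak, G instructions/s

print(f"cycle time:       {cycle_time_ns:.4f} ns")      # 0.3125 ns
print(f"L1 load-to-use:   {l1_latency_ns:.2f} ns")      # 1.25 ns
print(f"peak decode rate: {peak_decode_gips:.1f} G instr/s")  # 25.6
```

The peak decode rate is a theoretical ceiling, of course; the 1,000-plus-entry out-of-order window exists precisely because real instruction streams have dependencies that keep sustained throughput well below it.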
First RISC-V Chip to Natively Support 100B-Parameter AI Models
The headline capability that distinguishes the C950 from all previous RISC-V processors is its self-developed AI acceleration engine, which DAMO Academy says enables the chip to natively support large language models with hundreds of billions of parameters — specifically calling out Qwen3 and DeepSeek V3 as supported models.
That claim is significant. Until the C950, running a frontier-scale AI model on RISC-V hardware required substantial software-layer work, quantization, and performance compromises. The C950's native support changes the calculus for developers looking to deploy edge AI inference on RISC-V silicon, opening a path to running modern AI agents in environments where ARM and x86 licensing costs or supply chain constraints make alternative architectures attractive.
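To make the "software-layer work" concrete: the standard trick for squeezing large models onto constrained hardware is weight quantization. The following is a generic illustration of a symmetric int8 round trip in NumPy, not a description of the C950's AI engine, whose internals DAMO Academy has not published:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: map [-max|w|, +max|w|] onto [-127, 127]."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Stand-in for one weight tensor of a large model.
rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())  # bounded by scale / 2 (rounding error)

print(f"storage: {w.nbytes} B float32 -> {q.nbytes} B int8")
print(f"max reconstruction error: {max_err:.5f} (scale = {scale:.5f})")
```

The 4x storage reduction is what made hundred-billion-parameter models even approachable on earlier RISC-V silicon; the accompanying rounding error is the "performance compromise" the C950's native support is meant to avoid.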
SoC Configurations and Target Markets
The C950 is designed to be deployed in SoCs with up to eight cores per cluster, targeting cloud computing, edge computing, and AI computing workloads. A companion chip, the C925, was announced at the same conference, targeting different performance and power profiles.
For the maker and embedded systems community, the significance of the C950 lies in what it represents for future RISC-V SBCs and edge AI devices. As chipmakers integrate the C950 into new SoC designs over the coming 12-18 months, expect a generation of single-board computers and compact AI inference boxes with genuine high-performance RISC-V silicon at their core, advancing the broader RISC-V ecosystem toward the full-stack alternative to ARM that the open hardware community has been building toward for years.
Sources: [CNX Software](https://www.cnx-software.com) (March 25, 2026), [The Register](https://www.theregister.com) (March 25, 2026), [Technology.org](https://www.technology.org) (March 24, 2026), [Digitimes](https://www.digitimes.com) (March 24, 2026)
