The Quantum Dispatch

NVIDIA Ships the DGX Station GB300 — 748GB of Coherent Memory, 20 Petaflops, and Trillion-Parameter Models at Your Desk

The first DGX Station GB300 systems arrive with the Grace Blackwell Ultra Desktop Superchip, while DGX Spark gains 4-unit clustering for desktop-scale AI data centers.

Dr. Nova Chen · Mar 18, 2026 · 4 min read

A Supercomputer That Fits Under Your Desk

NVIDIA announced at GTC 2026 that the first DGX Station GB300 systems have shipped to AI pioneers and developers — and the specs are staggering for a deskside workstation. Powered by the Grace Blackwell Ultra Desktop Superchip, each DGX Station packs 748GB of coherent memory, up to 20 petaflops of AI compute, and the ability to run open models with up to one trillion parameters locally. No data center required.

To put that in perspective: a single DGX Station GB300 provides more AI compute than the world's fastest supercomputer delivered just eight years ago. Researchers and developers can now fine-tune, experiment with, and deploy frontier-scale models without uploading proprietary data to cloud providers — a capability that enterprises with sensitive data have been requesting for years.
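A quick sanity check on that trillion-parameter claim (my own arithmetic, not a figure from NVIDIA): at 16-bit precision, one trillion parameters would need roughly 2 TB for weights alone, so fitting in 748GB implies low-precision formats. The sketch below computes approximate weight footprints at common inference precisions; the byte-per-parameter values are standard format sizes, not NVIDIA-published numbers.

```python
# Back-of-envelope check: weight memory for a 1-trillion-parameter model
# at common inference precisions. These are illustrative estimates only;
# real deployments also need memory for KV cache and activations.

PARAMS = 1_000_000_000_000  # one trillion parameters

BYTES_PER_PARAM = {
    "FP16": 2.0,   # 16-bit floating point
    "FP8": 1.0,    # 8-bit floating point
    "FP4": 0.5,    # 4-bit formats (e.g. NVFP4-style quantization)
}

def weights_gb(n_params: int, bytes_per_param: float) -> float:
    """Approximate weight footprint in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

for fmt, b in BYTES_PER_PARAM.items():
    print(f"{fmt}: {weights_gb(PARAMS, b):,.0f} GB")
```

At 4-bit precision the weights come to about 500 GB, which would leave roughly 250 GB of the 748GB pool for KV cache and activations; at 8-bit or higher, a full trillion-parameter model would not fit on a single unit.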

DGX Spark Gets Desktop Data Center Mode

The smaller DGX Spark also received a major upgrade. NVIDIA announced that Spark systems can now cluster up to four units in a unified configuration, creating what the company calls a "desktop data center" with near-linear performance scaling. The clustering works without the complexity of traditional rack deployments — just connect four Spark units and they automatically pool compute and memory resources.

For startups, research labs, and enterprise AI teams that need more than a single workstation but aren't ready for full rack-scale infrastructure, the Spark cluster provides a compelling middle ground. Four clustered Sparks deliver enough performance for most production AI workloads while fitting on a desk or in a small server closet.
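The article says scaling across four units is "near-linear" but gives no efficiency figures. As an illustration only, the sketch below models what near-linear scaling could look like under an assumed fixed clustering-efficiency factor; the 0.9 value is a hypothetical placeholder, not a measured or NVIDIA-published number.

```python
# Illustrative model of "near-linear" cluster scaling. The efficiency
# factor is an assumption for demonstration, not a benchmarked value.

def cluster_speedup(n_units: int, efficiency: float = 0.9) -> float:
    """Aggregate speedup over a single unit, applying a uniform
    efficiency penalty once clustering interconnect is involved."""
    if n_units <= 1:
        return 1.0  # a single unit has no clustering overhead
    return n_units * efficiency

for n in (1, 2, 4):
    print(f"{n} unit(s): {cluster_speedup(n):.1f}x")
```

Under this toy model, four clustered units would deliver about 3.6x the throughput of one, versus 4.0x for perfectly linear scaling; real efficiency depends on the workload's communication pattern.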

NemoClaw: The Open-Source Agent Stack

Alongside the hardware announcements, NVIDIA introduced NemoClaw — an open-source software stack that runs on both DGX Station and DGX Spark. Built on the OpenClaw platform, NemoClaw enables users to deploy autonomous, long-running AI agents locally with a single command. The stack bundles NVIDIA's Nemotron models with the OpenShell runtime, simplifying the deployment of always-on AI assistants that can operate across cloud, RTX PC, and DGX environments.

The combination of DGX hardware and NemoClaw software represents NVIDIA's vision for personal AI infrastructure: systems powerful enough to run frontier models, simple enough to set up without a DevOps team, and secure enough to keep sensitive data on-premises.

Sources: NVIDIA Blog (March 17, 2026), Tom's Hardware (March 17, 2026), The Neuron (March 2026), TechRepublic (March 17, 2026)