
Radxa AICore DX-M1M Delivers 25 TOPS of Edge AI in an M.2 Module That Sips Just 3 Watts
Radxa and DEEPX release a tiny M.2 2242 AI accelerator with 25 TOPS INT8 performance at 3W, compatible with Raspberry Pi 5 and x86 hosts, starting at $85.
Serious AI in a Gumstick Package
The edge AI accelerator market just got a compelling new entrant. Radxa, in partnership with semiconductor startup DEEPX, has released the AICore DX-M1M — an M.2 2242 module that delivers 25 TOPS of INT8 inference performance while drawing just 3 watts of power. At $85, it brings meaningful on-device AI capability to single-board computers and compact systems without requiring a dedicated GPU or a hefty power supply.
The module slots into a standard M.2 M-key socket over a PCIe Gen3 link and includes 1 GB of LPDDR4X memory onboard. It is compatible with both x86 and Arm host platforms, so it works with everything from a Raspberry Pi 5 to Radxa's own ROCK series boards to standard Intel or AMD mini PCs. For the growing community of developers building local AI inference pipelines — whether for computer vision, natural language processing, or sensor fusion — the AICore DX-M1M offers a plug-and-play path to serious performance.
The Performance-Per-Watt Story
What makes this module stand out is not raw TOPS — NVIDIA's Jetson Orin Nano delivers more — but the performance-per-watt ratio at this price point. At 25 TOPS from 3 watts, the AICore DX-M1M achieves roughly 8.3 TOPS per watt, which is competitive with modules costing significantly more. For always-on edge deployments where thermal management and power consumption are real constraints — think security cameras, industrial sensors, or home automation hubs — that efficiency matters more than peak throughput.
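The efficiency math above is simple to check. The snippet below reproduces the 8.3 TOPS/W figure; the Jetson Orin Nano comparison numbers are NVIDIA's published specs (40 sparse INT8 TOPS at a 15 W power mode), not figures from this article:

```python
def tops_per_watt(tops: float, watts: float) -> float:
    """Inference efficiency in TOPS per watt."""
    return tops / watts

# AICore DX-M1M: 25 INT8 TOPS at 3 W (figures quoted in this article)
dx_m1m = tops_per_watt(25, 3)

# Jetson Orin Nano for scale: 40 sparse INT8 TOPS at its 15 W power
# mode, per NVIDIA's published spec (assumption for comparison only)
orin_nano = tops_per_watt(40, 15)

print(f"DX-M1M:    {dx_m1m:.1f} TOPS/W")    # ~8.3
print(f"Orin Nano: {orin_nano:.1f} TOPS/W")  # ~2.7
```

The Orin Nano can of course run at lower power modes, which narrows the gap, but the point stands: at this price and wattage the DX-M1M's efficiency is unusual.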
DEEPX, the chip designer behind the module, uses a proprietary neural processing architecture optimized for transformer and CNN workloads. The company claims the DX-M1 silicon achieves its efficiency through aggressive quantization support and a memory-optimized dataflow engine that minimizes off-chip bandwidth requirements.
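The dataflow engine itself is proprietary, but the quantization side builds on a standard idea: INT8 inference maps floating-point weights and activations onto 8-bit integers plus a scale factor, shrinking memory traffic roughly 4x versus FP32. A minimal symmetric per-tensor sketch (illustrative only — not DEEPX's actual scheme):

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the INT8 representation."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 2.0], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)  # close to the originals, 1/4 the storage
```

The accelerator does its multiply-accumulates in integer arithmetic on `q`, which is where the power savings come from; the small rounding error in `restored` is the accuracy cost that quantization-aware toolchains work to minimize.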
What You Can Build With It
The practical applications are immediately interesting. Pair the AICore DX-M1M with a Raspberry Pi 5 and you have a local AI inference station that can run real-time object detection, speech recognition, or small language model inference without sending data to the cloud. For privacy-conscious smart home builders, that is a significant capability — local processing means your camera feeds and voice commands never leave your network.
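Whatever SDK DEEPX ships (this sketch assumes nothing about it), a real-time detection pipeline on such a setup still ends with host-side post-processing: filtering the accelerator's raw detections by confidence and running non-maximum suppression to drop overlapping boxes. In plain Python:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(dets, score_thresh=0.5, iou_thresh=0.5):
    """dets: list of (score, box). Keep confident, non-overlapping boxes."""
    dets = sorted((d for d in dets if d[0] >= score_thresh),
                  key=lambda d: d[0], reverse=True)
    kept = []
    for score, box in dets:
        if all(iou(box, k[1]) < iou_thresh for k in kept):
            kept.append((score, box))
    return kept

# Example: two overlapping detections of one object, plus a distinct one
dets = [(0.9, (0, 0, 10, 10)), (0.8, (1, 1, 11, 11)), (0.7, (20, 20, 30, 30))]
print(nms(dets))  # the 0.8 box overlaps the 0.9 box and is suppressed
```

This stage runs fine on a Pi 5's CPU; the accelerator handles the heavy convolution work, and nothing ever leaves the device.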
At the commercial edge, the module is sized for embedded systems that need AI inference in space-constrained environments. Robotics platforms, drone controllers, and point-of-sale kiosks can all benefit from 25 TOPS of on-device intelligence without the power or cooling requirements of a full GPU. For makers and professionals alike, the AICore DX-M1M makes a strong case that you no longer need to choose between powerful AI and a tiny power budget.
Sources: CNX Software (March 21, 2026), LinuxGizmos (March 2026)
