| RAM | LPDDR4 |
|---|---|
| Wireless Type | Bluetooth |
OVERVIEW
Additional Information
| Customer Reviews | 3.9 out of 5 stars |
|---|---|
| Best Sellers Rank | |
Product description
The Coral M.2 Accelerator with Dual Edge TPU is an M.2 module (E-key) that includes two Edge TPU ML accelerators, each with its own PCIe Gen2 x1 interface.
The Edge TPU is a small ASIC designed by Google that accelerates TensorFlow Lite models in a power-efficient manner: each one is capable of performing 4 trillion operations per second (4 TOPS) using 2 watts of power, or 2 TOPS per watt. For example, one Edge TPU can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 frames per second. This on-device ML processing reduces latency, increases data privacy, and removes the need for a constant internet connection.
With the two Edge TPUs in this module, you can double the inferences per second (8 TOPS) in several ways, such as by running two models in parallel or pipelining one model across both Edge TPUs.
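As a minimal sketch of the parallel approach, assuming the PyCoral Python library and the Edge TPU runtime are installed, the snippet below enumerates the two Edge TPUs on the module and runs one model on each from separate threads. The model file names and the dummy input are placeholders for illustration, not part of the product documentation.

```python
# Minimal sketch (assumes PyCoral + Edge TPU runtime are installed).
# "model_a_edgetpu.tflite" / "model_b_edgetpu.tflite" are placeholder names
# for any two Edge TPU-compiled image models.
import threading
import numpy as np
from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter
from pycoral.adapters import common

print(list_edge_tpus())  # this module should report two PCIe Edge TPUs

def run_once(model_path, device):
    # Bind the interpreter to one specific Edge TPU (":0" or ":1").
    interpreter = make_interpreter(model_path, device=device)
    interpreter.allocate_tensors()
    # Dummy frame with the model's expected width/height; replace with real input.
    width, height = common.input_size(interpreter)
    common.set_input(interpreter, np.zeros((height, width, 3), dtype=np.uint8))
    interpreter.invoke()
    print(device, "output shape:", common.output_tensor(interpreter, 0).shape)

threads = [
    threading.Thread(target=run_once, args=("model_a_edgetpu.tflite", ":0")),
    threading.Thread(target=run_once, args=("model_b_edgetpu.tflite", ":1")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```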

Requirements
A computer with one of the following operating systems:
Linux: 64-bit version of Debian 10 or Ubuntu 16.04 (or newer), and an x86-64 or ARMv8 system architecture
Windows: 64-bit version of Windows 10, and x86-64 system architecture
All systems require support for MSI-X as defined in the PCI 3.0 specification
At least one available Mini PCIe or M.2 module slot
Python 3.6-3.9
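As an informal aid (not from the vendor's documentation), a few lines of Python can report whether a host roughly matches the requirements above; the checks below are assumptions and do not replace the official setup guide.

```python
# Rough host check against the requirements listed above. This is only a quick
# report; it does not verify MSI-X support or the presence of an M.2/Mini PCIe slot.
import platform
import sys

print("OS:", platform.system(), platform.release())
print("64-bit Python:", sys.maxsize > 2**32)
print("Architecture:", platform.machine())  # expect x86_64/AMD64 or aarch64
py = sys.version_info[:2]
print("Python", platform.python_version(), "in 3.6-3.9 range:", (3, 6) <= py <= (3, 9))
```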

Edge TPU ML accelerator
2x Google Edge TPU ML accelerator
○ 8 TOPS total peak performance (int8)
○ 2 TOPS per watt
Integrated power management
2x PCIe Gen2 x1 interface (one per Edge TPU)
M.2-2230-D3-E module
Size: 22.0 x 30.0 x 2.8 mm
Operating temp: -40 to +85 °C
| Dimensions | 22.00 x 30.00 x 2.80 mm |
|---|---|
| Weight | 2.5 g |
| Hardware interface | M.2 E key (M.2-2230-D3-E) |
| Serial interface | Two PCIe Gen2 x1 |
| DC supply | 3.3 V +/- 10% |
| Operating temperature | -40 to +85 °C |
| Relative humidity | 0 to 90% (non-condensing) |
| Shock | 100 G, 11 ms (persistent); 1000 G, 0.5 ms (stress); 1000 G, 1.0 ms (stress) |
| Vibration (random/sinusoidal) | 0.5 Grms, 5 - 500 Hz (persistent); 3 Grms, 5 - 800 Hz (stress) |
| Countries | 2 |
| ESD | 1 kV HBM, 250 V CDM |

Unit shipped as a component. Final system certification/compliance to be done by the customer.
Performs high-speed ML inferencing: Each Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 2 watts of power. For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 FPS, in a power-efficient manner. With the two Edge TPUs in this module, you can double the inferences per second (8 TOPS) in several ways, such as by running two models in parallel or pipelining one model across both Edge TPUs.
Works with Debian Linux and Windows: Integrates with Debian-based Linux or Windows 10 systems with a compatible card module slot.
Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU (see the sketch after this feature list).
Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
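As a minimal illustration of that workflow, assuming a model has already been compiled with the Edge TPU Compiler and the Edge TPU runtime (libedgetpu) is installed, the sketch below loads it with the plain tflite_runtime API and runs one inference; the model file name is a placeholder.

```python
# Sketch only: load an Edge TPU-compiled TensorFlow Lite model via the Edge TPU
# delegate. "mobilenet_v2_edgetpu.tflite" is a placeholder file name.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",
    # On Linux the delegate library is libedgetpu.so.1 (edgetpu.dll on Windows).
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# Feed zeros shaped like the model's input tensor; replace with real data.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

out = interpreter.get_output_details()[0]
print("output shape:", interpreter.get_tensor(out["index"]).shape)
```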
Customers say
Customers give positive feedback about the accelerator's object detection capabilities, with one noting that the dual Edge TPU handles real-time detection smoothly. However, functionality and the setup experience receive mixed reviews: some find it easy to set up, while others report issues.
REVIEWS
There are no reviews yet.