Based on Houmo.AI's self-developed compute-in-memory (CIM) AI chip, this large-model inference acceleration card is a standard half-height, half-length PCIe card that can be deployed quickly in PCs, all-in-one machines, and servers. It supports both active and passive cooling to ensure stable operation in different environments.

AI Compute: 100 to 256 TOPS @ INT8
Power Consumption: 25 to 60 W
Codec: Supports 16-channel FHD
Interface: PCIe 4.0