Scaling Out Data Centers with 400G InfiniBand Smart Switches
The combination of faster servers, high-performance storage, and increasingly complex computing applications is driving demand for data bandwidth ever upward. As servers built on next-generation processors are deployed, high-performance computing (HPC) environments and enterprise data centers (EDCs) will require all the bandwidth that Mellanox's next-generation NDR InfiniBand high-speed smart switches can provide.
The NVIDIA Quantum-2-based QM9700 and QM9790 switch systems deliver an unprecedented 64 ports of 400Gb/s InfiniBand in a 1U standard chassis design. A single switch carries an aggregate bidirectional throughput of 51.2 terabits per second (Tb/s), with a landmark capacity of more than 66.5 billion packets per second (BPPS), supporting the latest NVIDIA 400Gb/s high-speed interconnect technology.
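The 51.2Tb/s aggregate figure follows directly from the port count: 64 ports, each carrying 400Gb/s in and 400Gb/s out simultaneously. A quick sanity check of that arithmetic (the variable names here are illustrative, not from the datasheet):

```python
# Sanity check of the QM9700/QM9790 aggregate throughput figure.
PORTS = 64              # 400Gb/s NDR InfiniBand ports per switch
PORT_SPEED_GBPS = 400   # per-port line rate in one direction

# Bidirectional aggregate: each port moves 400Gb/s ingress + 400Gb/s egress.
aggregate_tbps = PORTS * PORT_SPEED_GBPS * 2 / 1000
print(aggregate_tbps)  # 51.2
```

The same counting convention (summing both directions of every port) is standard for switch datasheet throughput figures.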
Key specifications:
Port type: 32 OSFP (each twin-port OSFP cage carries two 400Gb/s ports, for 64 ports total)
Throughput: 51.2Tb/s
CPU: x86 Coffee Lake i3
Operating system: MLNX-OS
Chassis height: 1U
Supported rates: 400G NDR / 200G HDR / 100G EDR / 56G FDR / 40G QDR
Airflow: power-side to connector-side (P2C)
System memory: single 8GB DDR4 SO-DIMM, 2,666 megatransfers per second (MT/s)
Management: unmanaged
Dimensions (H x W x D): 1.7" x 17.2" x 26" (43.6mm x 438mm x 660mm)