HPC Reference Architecture


1. HPC (High Performance Computing) – FatTwin with Omni-Path:

            This HPC RA pairs Supermicro's high-density FatTwin servers with Supermicro's Omni-Path switch in a
fat-tree topology. The design can deliver petaflops of computational power and, combined with a large memory
bank, can undertake the most demanding HPC tasks. The premium FatTwin system offers high-density compute
performance, storage capacity, and power efficiency.

HPC Rack - FatTwin

Compute Power:

  • System: SYS-F619P2-RT (4U 8Nodes)
  • CPU: Dual Intel Xeon Scalable Processors, up to 56 cores/node
    (3,584 cores per 42U rack)
  • Memory: Up to 1.5TB ECC 3DS LRDIMM, 1.5TB ECC RDIMM, DDR4
    up to 2666MHz
  • Ultimate Scalability: 64 compute nodes per 42U rack; 48 1U standalone
    switches can support up to 1,536 compute nodes!
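
As a quick sanity check, the headline figures above can be reproduced with a few lines of arithmetic. The sketch below is ours, not Supermicro's: the node, core, and port counts come from the text, but the 32-downlink/16-uplink split per 48-port leaf switch (2:1 oversubscription) is an assumption, offered as one plausible reading of the "48 switches, 1,536 nodes" figure.

```python
# Back-of-the-envelope check of the rack figures quoted above.
# Node, core, and port counts come from the text; the 2:1 fat-tree
# oversubscription split (32 downlinks / 16 uplinks per 48-port leaf)
# is an assumption, not a documented configuration.

CORES_PER_NODE = 56        # dual Intel Xeon Scalable, up to 28 cores each
NODES_PER_RACK = 64        # eight 4U / 8-node FatTwin chassis in a 42U rack

cores_per_rack = CORES_PER_NODE * NODES_PER_RACK
print(cores_per_rack)      # 3584, matching the figure above

LEAF_SWITCHES = 48         # 1U 48-port Omni-Path switches
DOWNLINKS_PER_LEAF = 32    # assumed node-facing ports (16 uplinks remain)

max_nodes = LEAF_SWITCHES * DOWNLINKS_PER_LEAF
print(max_nodes)           # 1536, matching "up to 1,536 compute nodes"
```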

Networking:

  • Switch: Intel 100G 48-port Omni-Path TOR switch with management card.
    High switching capacity of 9.6 Tb/s total fabric bandwidth
  • L2/3 Switch: 1/10Gb Ethernet Superswitch, 48 x 1Gbps and 4 x SFP+
    10Gbps Ethernet ports
  • Cost Efficiency: The FatTwin architecture makes each HPC cluster extremely
    cost efficient!

Cooling:

  • RDHx (Rear Door Heat Exchanger) chiller doors for highly efficient cooling,
    up to 75kW cooling capacity per rack

FatTwin 64 Nodes

2. Reference Architecture – TwinPro with Dragonfly/InfiniBand:

          This design combines the speed and efficiency of the TwinPro with Mellanox InfiniBand networking in a
dragonfly topology. The dragonfly topology minimizes network diameter and maximizes bandwidth, allowing
a fast and robust MPI system to be built on top.
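
The diameter/bandwidth trade-off can be made concrete with the standard balanced-dragonfly sizing rules (Kim et al., ISCA 2008). The sketch below is illustrative only: the port split assumes a generic 36-port router, and is not the actual Mellanox fabric configuration of this rack.

```python
# Minimal sketch of balanced dragonfly sizing (Kim et al., ISCA 2008).
# The port split below is an illustrative reading for a 36-port router;
# it is not the actual Mellanox fabric configuration.

def dragonfly_size(h):
    """Hosts in a balanced dragonfly with h global links per router.

    Balanced rules: p = h hosts per router, a = 2h routers per group,
    g = a*h + 1 groups (one global link between every pair of groups).
    Any minimal route needs at most 3 switch-to-switch hops
    (local, global, local) -- the small network diameter cited above.
    """
    p = h            # hosts (terminals) per router
    a = 2 * h        # routers per local group
    g = a * h + 1    # number of groups
    return p * a * g

# h = 9 fits a 36-port router: 9 hosts + 17 local + 9 global = 35 ports
print(dragonfly_size(9))   # 26406 hosts, at most 3 switch hops apart
```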

HPC Rack - TwinPro

Compute Power:

  • System: SYS-2028TP-HTR (2U 4-Node, 64 nodes total per 42U rack)
  • CPU: Intel Xeon Processor E5-2600 v4/v3 family, up to 44 cores/node
    (2,816 cores per 42U rack)
  • Memory: Up to 2TB† ECC 3DS LRDIMM, 512GB ECC RDIMM, and 64 GB
    DDR4 LRDIMM-2666.

Networking:

  • Switch: Mellanox InfiniBand FDR/EDR/HDR, 1U switch, 36 QSFP28 ports
  • Extremely High Bandwidth: The dragonfly architecture along with InfiniBand
    switches ensures low latency and the maximum effective fabric bandwidth by
    eliminating congestion hot spots!

Cooling:

  • Highly efficient DCLC (Direct Contact Liquid Cooling) options available, 35kW+ cooling
    capacity per rack

TwinPro 64 Nodes