The GWDG operates its local HPC resources, and in particular the Scientific Compute Cluster (SCC), by transparently integrating different systems into a joint operating concept for the basic supply of the Max Planck Institutes and the university. This includes uniform software management, a shared batch management environment, cross-system monitoring and accounting, and cross-system file systems. Synergies are thus achieved by integrating different system generations and special-purpose systems (e.g. GPU clusters). Users find a uniform environment on all HPC systems, while individual application environments are supported at the same time. Nonetheless, the result is a highly heterogeneous cluster, which requires good knowledge of the architectural differences and carefully tuned run scripts.
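In practice, working on such a heterogeneous cluster means selecting the target hardware explicitly in the job scripts submitted to the shared batch system (Slurm). The following is only a minimal sketch: the partition name, node feature, core count, and module name are illustrative assumptions, not the actual SCC configuration, which is described in the documentation linked below.

    #!/bin/bash
    #SBATCH --job-name=example-mpi     # job name shown in the queue
    #SBATCH --partition=medium         # assumed partition name
    #SBATCH --constraint=cascadelake   # assumed node feature to pin one CPU generation
    #SBATCH --nodes=2                  # number of compute nodes
    #SBATCH --ntasks-per-node=48       # MPI ranks per node, tuned to the node's core count
    #SBATCH --time=02:00:00            # wall-clock limit

    # Load the application environment from the uniform software management
    # (the module name is an assumption for illustration)
    module load openmpi

    # Launch the MPI application through the batch system
    srun ./my_application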

The extensive documentation, the FAQ, and the first steps can be found online. If you are using our systems for your research, please also refer to the acknowledgement guidelines.

System Overview

7 Racks

4 racks at the Faßberg are cold-water cooled. The two GPU racks at the MDC are air cooled. One CPU rack at the MDC is warm-water cooled.

410 Compute Nodes

The SCC cluster contains a mixture of Xeon Platinum 9242, Broadwell Xeon E5-2650 v4, Haswell Xeon E5-4620 v3, and Xeon Gold 6252 CPUs.

18,376 CPU Cores

Distributed over all compute and GPU nodes.

100 GBit/s & 56 Gbit/s Interconnect

The interconnect of the Faßberg system runs at 56 Gbit/s FDR InfiniBand, while the MDC system uses 100 Gbit/s Omni-Path.

1.4 TiB GPU RAM

Across all GPU nodes, 1.4 TiB of GPU memory is available.

99 TB RAM

Across all 410 nodes, 99 TB of memory is available.

5.2 PiB Storage

The BeeGFS storage consists of 2 PiB HDD and 100 TiB SSD in the MDC system, plus 130 TiB HDD in the Faßberg system. The StorNext home file system provides around 3 PiB.

22+ PiB Tape Storage

Backup storage is provided by Quantum Scalar tape libraries. To ensure reliable backups, the tapes are stored at two different locations.


Node Architectures

Further details of the hardware can be found in the HPC documentation for the CPU and GPU partitions of the SCC.
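As a rough sketch of how such hardware details can also be inspected directly from a login node, the generic Slurm queries below list the partitions with their node counts, cores per node, memory, and feature tags; the format string is standard Slurm, and the node name is only a placeholder.

    # List partitions with node count, cores per node, memory (MB) and feature tags
    sinfo --format="%P %D %c %m %f"

    # Show the full configuration of a single node (replace the placeholder)
    scontrol show node <nodename>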