DRP Cluster

Specifications

The cluster consists of 64 nodes connected via 56 Gb/s FDR InfiniBand. Each node has two eight-core 2.6 GHz Intel Xeon E5-2650 processors and 256 GB of system memory.

Accessing the System

Note: Not all projects have access to the cluster. Job submissions to Slurm may be rejected even if access to the front-end node is authorized.

Running on the cluster first requires connecting to one of its front-end nodes, drpfen01 or drpfen02. These machines are accessible from the landing pads.
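
For example, from a landing pad (the short hostname is used here; your site may require a fully qualified name):

    ssh drpfen01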

HyperThreading

By default, Slurm will assign 32 processes to each node, twice the number of physical cores, because hyperthreading is enabled. Some applications may benefit from hyperthreading; others will not. Initial testing indicates that running one process per physical core yields the best performance.

Passing the '--bind-to-core' option to OpenMPI's mpirun binds each process to a core. Combined with the Slurm options '-N' (number of nodes) and '-n' (number of processes), this places a single process on each physical core. For example, passing '-N 2 -n 32' to Slurm and '--bind-to-core' to mpirun results in 32 processes running on the 32 physical cores of two nodes.
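
As an illustration, a minimal interactive sketch of that layout, assuming './my_app' stands in for your MPI executable:

    # Request two nodes and 32 processes from Slurm (16 per node),
    # then bind each MPI process to a physical core.
    salloc -N 2 -n 32
    mpirun --bind-to-core ./my_app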

Alternatively, passing '-c 2' to srun assigns two cores to each process, which prevents more than 16 processes from running on each node.
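
For example, the following sketch launches the same layout as above ('./my_app' is again a placeholder):

    # 32 processes across two nodes, two cores per process,
    # so at most 16 processes run on each node.
    srun -N 2 -n 32 -c 2 ./my_app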

Building Executables

MVAPICH2 and OpenMPI compiler wrappers are available via the 'mpi' modules. Please refer to Modules for the use of modules and their interaction with Slurm.
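
As a sketch, assuming module names of the form gcc/&lt;version&gt; and openmpi/&lt;version&gt; drawn from the table below (check 'module avail' for the exact names on the system):

    # Load a compiler and a matching MPI implementation
    module load gcc/4.7.4 openmpi/3.0.0

    # Build an MPI program with the wrapper compiler
    mpicc -O2 -o my_app my_app.c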

Software/Libraries

Compilers+MPI

Name      Supported Versions
GCC       4.7.4
openmpi   1.8.8, 1.10.6, 2.0.2, 2.0.3, 2.1.0, 2.1.1, 3.0.0

Submitting and Managing Jobs

Partitions

Name    Time Limit (hr)    Max Nodes
debug   1                  2
drp     6                  unlimited

Example job submission scripts

Please see Slurm for more info.
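
As a starting point, a minimal sketch of a batch script for this cluster, assuming the drp partition and a hypothetical MPI executable './my_app':

    #!/bin/bash
    #SBATCH -J my_job          # job name
    #SBATCH -p drp             # partition: debug or drp (see table above)
    #SBATCH -N 2               # number of nodes
    #SBATCH -n 32              # one process per physical core
    #SBATCH -t 01:00:00        # wall-clock limit, within the partition's limit

    mpirun --bind-to-core ./my_app

Submit the script with 'sbatch my_job.sh' and monitor it with 'squeue'.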

Notes