SLURM Partitions
Partitions
A partition can be specified via the appropriate sbatch option, e.g.:
#SBATCH --partition=compute
However, on eRI there is generally no need to do so: by default, your job will be assigned to the most suitable partition(s) automatically, based on the resources it requests, in particular its memory/CPU ratio and time limit.
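For example, a minimal job script sketch like the following (the job name, resource values and program are placeholders) can omit the partition entirely and let the scheduler place the job:

#SBATCH --job-name=example      # placeholder job name
#SBATCH --cpus-per-task=4       # placeholder CPU request
#SBATCH --mem-per-cpu=3G        # the memory/CPU ratio influences partition selection
#SBATCH --time=02:00:00         # placeholder time limit
# No --partition option: the job is placed automatically.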
If you do specify a partition and your job is not a good fit for it, you may receive a warning; please do not ignore this. For example:
sbatch: `hugemem` is not the most appropriate partition for this job, which would otherwise default to `compute`. If you believe this is incorrect then contact support and quote the Job ID number.
Partition | Max Walltime | Nodes | CPUs/Node | Available Mem/CPU | Available Mem/Node | Description |
---|---|---|---|---|---|---|
compute | 14 days | 6 | 256 | 3.7 GB | 950 GB | Default partition. |
gpu | 14 days | 1 | 96 | 4.8 GB | 470 GB | A100. |
hugemem | 14 days | 2 | 256 | 14.9 GB | 3800 GB | Very large amounts of memory. |
interactive | 60 days | 3 | 8 | 1.8 GB | 14.5 GB | Partition for interactive jobs. |
vgpu | 60 days | 4 | 32 | 13 GB | 418 GB | Virtual GPUs. |
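As an illustrative sketch (the numbers are placeholders), a job requesting more memory per CPU than the compute partition can provide would normally be routed to hugemem automatically:

#SBATCH --cpus-per-task=8       # placeholder CPU request
#SBATCH --mem-per-cpu=12G       # above compute's 3.7 GB/CPU, within hugemem's 14.9 GB/CPU
#SBATCH --time=1-00:00:00       # placeholder time limit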
Quality of Service
Orthogonal to the partitions, each job has a "Quality of Service" (QoS). The default QoS for a job is determined by the allocation class of its project. Other QoSs can be selected with the `--qos` option:
Interactive
Specifying `--qos=interactive` will give your interactive job a very high priority.
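For example, an interactive session could be started with srun in a sketch like this (the CPU, memory and time values are placeholders):

srun --qos=interactive --cpus-per-task=2 --mem=2G --time=01:00:00 --pty bash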
Requesting GPUs
| GPU code | GPU type |
|---|---|
| A100 (gpu partition) | NVIDIA Tesla A100 PCIe 40GB cards |
| vgpu | NVIDIA A10 GPGPU, PCIe 24GB cards |
The default GPU type is A100. The vgpu partition contains four virtualised compute nodes, each with a single NVIDIA A10 GPGPU (PCIe, 24 GB) card.
To request an A100 GPU:
#SBATCH --partition gpu
#SBATCH --gpus-per-node 1 # GPU resources required per node
To request a vGPU, use instead:
#SBATCH --partition vgpu
#SBATCH --gpus-per-node 1
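Putting these options together, a complete GPU job script might look like the following sketch (the job name, CPU, memory, time and program are placeholders; load whatever modules your software needs before running):

#!/bin/bash
#SBATCH --job-name=gpu_example   # placeholder job name
#SBATCH --partition gpu
#SBATCH --gpus-per-node 1        # one A100 GPU on the node
#SBATCH --cpus-per-task=4        # placeholder CPU request
#SBATCH --mem=16G                # placeholder memory request
#SBATCH --time=04:00:00          # placeholder time limit

nvidia-smi                       # confirm the allocated GPU is visible
srun ./my_gpu_program            # placeholder program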