# SLURM at ABiMS
## Partitions and resource limits
To request access to an on-demand partition, such as the bigmem node or the gpu nodes, please submit a request on the community support platform (support.abims@sb-roscoff.fr), specifying your login and project name.
⚠️ The values below can change. To check the current situation:

```bash
scontrol show partition
sacctmgr list qos format=Name,Priority,MaxTRESPU%20
```
| Partition | Time limit | Max resources / user | Purpose |
|---|---|---|---|
| fast | <= 24 hours | cpu=300, mem=1500GB | Default - Regular jobs |
| long | <= 30 days | cpu=300, mem=1500GB | Long jobs |
| bigmem | <= 60 days | mem=2500GB | On demand - For jobs requiring a lot of RAM |
| clc | <= 30 days | cpu=300, mem=1500GB | On demand - Access to CLC Assembly Cell |
| gpu | <= 10 days | cpu=300, mem=1500GB | On demand - Access to GPU cards |
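For example, a non-default partition is selected with `--partition`. A minimal sketch of a batch script (the script name, job name, and command are illustrative placeholders):

```bash
#!/bin/bash
#SBATCH --partition=long        # non-default partition (default is fast)
#SBATCH --time=3-00:00:00       # 3 days; must stay within the partition's time limit
#SBATCH --job-name=my_long_job  # illustrative job name

# Illustrative placeholder for the actual analysis command
./my_analysis.sh
```

The script is then submitted with `sbatch`; interactive jobs can use the same option, e.g. `srun --partition=long --pty bash`.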
## Default values
| Parameter | Default value |
|---|---|
| `--mem` | 2GB |
| `--cpus-per-task` | 1 |
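A job that requests nothing else therefore runs with 2GB of RAM and a single CPU. As a sketch, these defaults can be overridden per job (the values, tool name, and `--threads` flag below are illustrative):

```bash
#!/bin/bash
#SBATCH --mem=16GB          # override the 2GB default memory
#SBATCH --cpus-per-task=8   # override the default of 1 CPU

# SLURM_CPUS_PER_TASK matches the --cpus-per-task request;
# the tool name and its --threads flag are placeholders.
my_tool --threads "${SLURM_CPUS_PER_TASK}" input.fasta
```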
## ABiMS cluster computing nodes
⚠️ The values below can change. To check the current situation:

```bash
sinfo -Ne --format "%.15N %.4c %.7z %.7m" -S c,m,N | uniq
```
(Last update: 20/11/2023)
### CPU nodes
| Nodes | CPUs (hyper-threads) | RAM (GB) |
|---|---|---|
| 10 | 48 | 250 |
| 7 | 96 | 250 |
| 4 | 256 | 250 |
| 16 | 104 | 1000 |
| 1 | 80 | 1020 |
| 1 | 128 | 2060 |
(Last update: 19/09/2024)
### GPU nodes
| Nodes | GPUs | CPUs | RAM (GB) | GPU type | Disk /tmp | Disk /bank |
|---|---|---|---|---|---|---|
| 2 | 2 | 80 | 512 | NVIDIA L40S | ~64 GB | 3.5 TB |
| 1 | 2 | 40 | 128 | NVIDIA K80 | 32 GB | - |
### GPU instance profiles
⚠️ The values below can change. To check the current situation:

```bash
sinfo -Ne -p gpu --format "%.15N %.4c %.7m %G"
```
| Profile name | GPU memory | Number of instances available |
|---|---|---|
| l40s | 24GB | 4 |
| k80 | 48GB | 2 |
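Once access to the gpu partition has been granted, a GPU is requested with a `--gres` option on that partition. A minimal sketch, assuming the profile names above are exposed as GRES types on this cluster (otherwise `--gres=gpu:1` requests any available GPU); the other resource values are illustrative:

```bash
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:l40s:1   # one l40s instance; gpu:1 would accept any GPU type
#SBATCH --cpus-per-task=8   # illustrative CPU request
#SBATCH --mem=32GB          # illustrative memory request

# Show which GPU(s) were allocated to the job
nvidia-smi
```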