# SLURM at ABiMS

## Partitions and resource limits

To request access to on-demand partitions such as the bigmem node or the gpu nodes, please submit a request on the community support platform (support.abims@sb-roscoff.fr), specifying your login and your project name.

⚠️ The values below can change. To check the current situation:

```bash
scontrol show partition                               # partition time limits and state
sacctmgr list qos format=Name,Priority,MaxTRESPU%20   # per-user resource limits (QOS)
```

| Partition | Time limit  | Max resources / user | Purpose                                      |
|-----------|-------------|----------------------|----------------------------------------------|
| fast      | <= 24 hours | cpu=300, mem=1200GB  | Default - Regular jobs                       |
| long      | <= 30 days  | cpu=300, mem=1200GB  | Long jobs                                    |
| bigmem    | <= 60 days  | mem=2500GB           | On demand - For jobs requiring a lot of RAM  |
| clc       | <= 30 days  | cpu=300, mem=1200GB  | On demand - Access to the CLC Assembly Cell  |
| gpu       | <= 10 days  | cpu=300, mem=1200GB  | On demand - Access to the GPU cards          |
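
For example, a batch script targeting the long partition might look like the sketch below (the script name, command, and resource values are illustrative placeholders, not site recommendations):

```bash
#!/bin/bash
#SBATCH --partition=long      # target partition (fast is the default)
#SBATCH --time=3-00:00:00     # walltime: 3 days, must stay under the partition limit
#SBATCH --cpus-per-task=4     # adjust to your workload
#SBATCH --mem=16GB

./my_analysis.sh              # my_analysis.sh is a hypothetical command
```

Submit it with `sbatch my_job.sh`; without `--partition`, jobs land on the default fast partition.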

## Default values

| Parameter        | Default value |
|------------------|---------------|
| --mem            | 2GB           |
| --cpus-per-task  | 1             |
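
Both defaults can be overridden per job, for instance (`my_tool` is a hypothetical command):

```bash
srun --mem=8GB --cpus-per-task=4 my_tool   # request 8 GB of RAM and 4 CPUs
```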

## ABiMS cluster computing nodes

⚠️ The values below can change. To check the current situation:

```bash
# List nodes with name (%N), CPU count (%c), socket:core:thread layout (%z)
# and memory (%m), sorted by CPUs, memory, and name; uniq collapses identical lines
sinfo -Ne --format "%.15N %.4c %.7z %.7m" -S c,m,N | uniq
```

(Last update: 04/05/2022)

### CPU nodes

| Nodes | CPUs (hyper-threads) | RAM (GB) |
|-------|----------------------|----------|
| 10    | 48                   | 250      |
| 7     | 96                   | 250      |
| 4     | 256                  | 250      |
| 1     | 80                   | 1020     |
| 1     | 128                  | 2060     |

(Last update: 04/05/2022)

### GPU nodes

| Nodes | GPUs | CPUs | RAM (GB) | GPU type   | /tmp disk |
|-------|------|------|----------|------------|-----------|
| 1     | 2    | 40   | 128      | NVIDIA K80 | 32 GB     |

### GPU instance profiles

⚠️ The values below can change. To check the current situation:

```bash
# Show gpu-partition nodes with their generic resources (%G), e.g. gpu:k80:2
sinfo -Ne -p gpu --format "%.15N %.4c %.7m %G"
```

| Profile name | GPU memory | Instances available |
|--------------|------------|---------------------|
| k80          | 24GB       | 2                   |
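
To reserve a GPU, a job must request it as a generic resource on the gpu partition. A minimal sketch (the gres string `gpu:k80:1` assumes the resource is named after the k80 profile above; confirm the exact identifier in the `sinfo` output):

```bash
srun -p gpu --gres=gpu:k80:1 nvidia-smi   # run nvidia-smi on one allocated K80
```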