# Cluster description
## Hardware
### Compute nodes
#### CPU nodes

(Last update: 20/11/2023)
| Nbr of nodes | CPUs (Hyper-Threads) | RAM (GB) |
|---|---|---|
| 10 | 48 | 250 |
| 7 | 96 | 250 |
| 4 | 256 | 250 |
| 16 | 104 | 1000 |
| 1 | 80 | 1020 |
| 1 | 128 | 2060 |
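For example, a job targeting one of the large-memory nodes above can request threads and RAM explicitly. A minimal sketch, assuming the cluster's default partition; `my_analysis` is a hypothetical program name:

```bash
#!/bin/bash
# Minimal Slurm batch script targeting one of the large-memory nodes above.
# The values fit within the 104-thread / 1000 GB nodes; "my_analysis" is a
# hypothetical program, and no partition is set (the cluster default is used).
#SBATCH --job-name=bigmem-job
#SBATCH --cpus-per-task=52
#SBATCH --mem=500G
#SBATCH --time=02:00:00

srun ./my_analysis --threads "$SLURM_CPUS_PER_TASK"
```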
#### GPU nodes

(Last update: 19/09/2024)
| Nbr of nodes | GPUs | CPUs (Hyper-Threads) | RAM (GB) | GPU type | Disk /tmp | Disk /bank |
|---|---|---|---|---|---|---|
| 2 | 2 | 80 | 512 | NVIDIA L40S | ~64 GB | 3.5 TB |
| 1 | 2 | 40 | 128 | NVIDIA K80 | 32 GB | - |
#### GPU Instance Profiles
⚠️ The values below can change. To check the current configuration, run:

```bash
sinfo -Ne -p gpu --format "%.15N %.4c %.7m %G"
```
| Profile Name | GPU Memory | Number of Instances Available |
|---|---|---|
| l40s | 24 GB | 4 |
| k80 | 48 GB | 2 |
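To use one of these GPUs, a job combines the `gpu` partition from the `sinfo` command above with a typed GRES request. A minimal sketch; the GRES names are assumed to match the profile names in the table and should be verified with the `sinfo` command:

```bash
#!/bin/bash
# Request a single GPU on the "gpu" partition (partition name taken from the
# sinfo command above). The GRES name "l40s" is assumed to match the profile
# table; verify it with the sinfo command before relying on it.
#SBATCH --job-name=gpu-job
#SBATCH --partition=gpu
#SBATCH --gres=gpu:l40s:1
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G

nvidia-smi   # print the allocated GPU to confirm the request worked
```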
### Storage
Dell FluidFS:
- 2.5 PB of project storage
- 7 TB of scratch
## Software layer
The cluster is managed by Slurm (version 17.05).
Scientific software and tools are available through Environment Modules and are mainly based on Conda packages or Singularity images.
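A typical session might therefore look like the sketch below; the module and image names are illustrative placeholders, not actual entries on this cluster:

```bash
# List the software published through Environment Modules, then load a tool.
module avail
module load samtools                    # hypothetical module name

# Tools shipped as Singularity images are run through the image instead.
singularity exec /path/to/tool.sif samtools --version   # placeholder path
```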
Operating system: Ubuntu on the cluster nodes, with CentOS on some machines.
Around the cluster, management relies on Zabbix (monitoring), NetBox (inventory), and VMware ESX (virtualization).
Deployment and configuration are powered by Ansible and GitLab.