System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table shows a system as unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users    Load
derecho1     UP          60   5.35 %
derecho2     UP          76   2.26 %
derecho3     UP          38   8.79 %
derecho4     UP          57   3.30 %
derecho5     UP          46   3.23 %
derecho6     UP          37   1.27 %
derecho7     UP          57   1.22 %
derecho8     UP           0   1.03 %
CPU Nodes
  Offline          16   ( 0.6 %)
  Running Jobs   1881   (75.6 %)
  Free            591   (23.8 %)
GPU Nodes
  Offline           2   ( 2.4 %)
  Running Jobs     79   (96.3 %)
  Free              1   ( 1.2 %)
Updated 9:15 pm MST Mon Dec 2 2024
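
Each percentage in the node breakdowns above is that category's share of the total node pool. A quick sanity check of the CPU-node figures (counts copied from the table; the Python snippet is purely illustrative):

```python
# Node counts from the "CPU Nodes" breakdown above
offline, running, free = 16, 1881, 591

total = offline + running + free  # size of the CPU node pool
shares = [round(100 * n / total, 1) for n in (offline, running, free)]

print(total)   # 2488
print(shares)  # [0.6, 75.6, 23.8] -- matches the table's percentages
```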
Queue    Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu               275             0          91         1875     108
gpu                13            13          72           78      11
system              0             0           0            0       -
hybrid              0             0           0            0       -
pcpu                4             0           0            8       1
pgpu                0             0           0            0       -
gpudev              1             0           0            1       1
cpudev              7             0          15            7      13
repair              0             0           0            0       -
jhub                0             0           0            0       -
Updated 9:15 pm MST Mon Dec 2 2024

Casper node status and queue activity

Login Node      Status   Users      Load
casper-login1   UP          79    17.0 %
casper-login2   UP         138   157.3 %
HTC Nodes
  1361 of 2232 CPUs in use (61.0 %)
  Partially Allocated    46   ( 74.2 %)
  Fully Allocated        15   ( 24.2 %)
  Free                    1   (  1.6 %)
Large Memory Nodes
  Partially Allocated     5   ( 62.5 %)
  Free                    3   ( 37.5 %)
GP100 Visualization Nodes
  6 of 48 GPU sessions in use (12.5 %)
  Partially Allocated     2   ( 25.0 %)
  Free                    6   ( 75.0 %)
L40 Visualization Nodes
  0 of 36 GPU sessions in use (0.0 %)
  Free                    6   (100.0 %)
V100 GPU Nodes
  44 of 56 GPUs in use (78.6 %)
  Partially Allocated     8   ( 88.9 %)
  Offline                 1   ( 11.1 %)
A100 GPU Nodes
  23 of 35 GPUs in use (65.7 %)
  Partially Allocated     6   ( 54.5 %)
  Fully Allocated         2   ( 18.2 %)
  Offline                 2   ( 18.2 %)
  Free                    1   (  9.1 %)
H100 GPU Nodes
  0 of 8 GPUs in use (0.0 %)
  Offline                 2   (100.0 %)
RDA Nodes
  Partially Allocated     4   ( 80.0 %)
  Free                    1   ( 20.0 %)
JupyterHub Login Nodes
  Partially Allocated     4   ( 44.4 %)
  Fully Allocated         5   ( 55.6 %)
Updated 9:15 pm MST Mon Dec 2 2024
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  370            11          18        1361      68
vis                    6             0           0          45       5
largemem               9             0           0          31       8
gpgpu                  5             1           0          35       6
rda                   12             0           0          54       5
tdd                    2             0           0          20       1
jhublogin            285             0           0         553     285
system                 0             0           2           0       1
h100                   0             2           0           0       2
l40                    0             0           0           0       -
S3011784               0             0           0           0       -
Updated 9:15 pm MST Mon Dec 2 2024

V100, A100, and H100 node status

V100 Node   State     # GPUs   # Used   # Avail
casper36    free           4        3         1
casper08    free           8        1         7
casper29    full           4        4         0
casper30    full           8        8         0
casper31    full           8        8         0
casper09    offline        4        0         4
casper24    full           8        8         0
casper25    full           4        4         0
casper28    full           8        8         0
Updated 9:15 pm MST Mon Dec 2 2024
Updated 9:15 pm MST Mon Dec 2 2024
A100 Node   State      # GPUs   # Used   # Avail
casper18    full            1        1         0
casper19    offline         1        0         1
casper21    free            1        0         1
casper38    full            4        4         0
casper39    down            4        0         4
casper40    full            4        4         0
casper41    free            4        2         2
casper42    full            4        4         0
casper43    free            4        2         2
casper44    full            4        4         0
casper37    job-busy        4        2         2
Updated 9:15 pm MST Mon Dec 2 2024
H100 Node   State     # GPUs   # Used   # Avail
casper57    offline        4        0         4
casper58    offline        4        0         4
Updated 9:15 pm MST Mon Dec 2 2024

GLADE file spaces and Campaign Storage

Individual files are automatically removed from the /glade/scratch space if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
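
One way to see which of your files are at risk before the purge runs is to compare each file's last access time against the 120-day threshold. A minimal sketch in Python (the `stale_files` helper and the commented example path are illustrative, not an official tool):

```python
import os
import time

PURGE_DAYS = 120  # files unaccessed longer than this are purge candidates

def stale_files(root, days=PURGE_DAYS):
    """Yield paths under `root` whose last access was more than `days` days ago."""
    cutoff = time.time() - days * 86400
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path

# Example usage against your own scratch space (path is illustrative):
# for path in stale_files("/glade/derecho/scratch/" + os.environ["USER"]):
#     print(path)
```

Note that access times depend on how the file system is mounted; treat the result as a hint, not a guarantee of what the purge will remove.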

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                  94            150      63%
/glade/u/apps                   2             10      19%
/glade/work                 1,324          4,096      33%
/glade/derecho/scratch     28,038         55,814      51%
/glade/campaign           113,168        125,240      91%
Updated 9:00 pm MST Mon Dec 2 2024