System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin as well as in Notifier emails.
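
If you would rather query queue activity directly from a login-node terminal, a minimal sketch like the one below prints the scheduler's own per-queue summary, the same information the queue-activity tables on this page condense. It is an illustration, not an official tool, and it assumes the standard PBS `qstat` command is on your PATH, as it is on a PBS-scheduled login node.

```python
import shutil
import subprocess

# Minimal sketch, not an official CISL tool: print the scheduler's own
# per-queue summary. Assumes the standard PBS `qstat` command is
# available on PATH (an assumption about your login environment).
if shutil.which("qstat") is None:
    raise SystemExit("qstat not found; run this from a login node")

# `qstat -Q` prints one row per queue, including queued, running,
# and held job counts.
summary = subprocess.run(["qstat", "-Q"], capture_output=True, text=True, check=True)
print(summary.stdout)
```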

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP          70   6.45 %
derecho2     UP          53   2.22 %
derecho3     UP          80   7.84 %
derecho4     UP          71   5.55 %
derecho5     UP          89   3.99 %
derecho6     UP          62   1.26 %
derecho7     UP          60   2.43 %
derecho8     UP           -   0.00 %
CPU Nodes
    Offline           8   (  0.3 %)
    Running Jobs   2473   ( 99.4 %)
    Free              7   (  0.3 %)
GPU Nodes
    Offline           4   (  4.9 %)
    Running Jobs     77   ( 93.9 %)
    Free              1   (  1.2 %)
Updated 11:40 pm MST Wed Nov 20 2024
Queue    Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu               524           184         151         2472     110
gpu                14            14          87           77       8
system              0             0           0            0       -
hybrid              0             0           0            0       -
pcpu                0             2           0            0       1
pgpu                0             0           0            0       -
gpudev              0             0           0            0       -
cpudev              5           213         308            5      11
repair              0             0           0            0       -
jhub                0             0           0            0       -
Updated 11:40 pm MST Wed Nov 20 2024

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP          66    6.8 %
casper-login2   UP         158   47.4 %
HTC Nodes
    365 of 2232 CPUs in use ( 16.4 %)
    Partially Allocated   37   ( 59.7 %)
    Fully Allocated        1   (  1.6 %)
    Free                  24   ( 38.7 %)
Large Memory Nodes
    Partially Allocated    7   ( 87.5 %)
    Free                   1   ( 12.5 %)
GP100 Visualization Nodes
    4 of 48 GPU sessions in use ( 8.3 %)
    Partially Allocated    1   ( 12.5 %)
    Free                   7   ( 87.5 %)
L40 Visualization Nodes
    2 of 36 GPU sessions in use ( 5.6 %)
    Partially Allocated    1   ( 16.7 %)
    Free                   5   ( 83.3 %)
V100 GPU Nodes
    15 of 56 GPUs in use ( 26.8 %)
    Partially Allocated    5   ( 55.6 %)
    Offline                1   ( 11.1 %)
    Free                   3   ( 33.3 %)
A100 GPU Nodes
    18 of 35 GPUs in use ( 51.4 %)
    Partially Allocated    7   ( 63.6 %)
    Offline                1   (  9.1 %)
    Free                   3   ( 27.3 %)
H100 GPU Nodes
    0 of 8 GPUs in use ( 0.0 %)
    Offline                2   (100.0 %)
RDA Nodes
    Partially Allocated    1   ( 20.0 %)
    Free                   4   ( 80.0 %)
JupyterHub Login Nodes
    Partially Allocated    5   ( 55.6 %)
    Fully Allocated        4   ( 44.4 %)
Updated 11:40 pm MST Wed Nov 20 2024
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  299             0          23         365      51
vis                    4             0           0          11       4
largemem               7             0           0          27       5
gpgpu                  5             0           0          28       3
rda                    5             0           0          23       4
tdd                    0             0           0           0       -
jhublogin            310             0           0         578     310
system                 0             0           2           0       1
h100                   0             0           0           0       -
l40                    2             0           0          40       1
S3011784               0             0           0           0       -
Updated 11:40 pm MST Wed Nov 20 2024

V100, A100, and H100 node status

V100 Node   State     # GPUs   # Used   # Avail
casper36    full           4        4         0
casper08    free           8        1         7
casper29    free           4        1         3
casper30    free           8        1         7
casper31    full           8        8         0
casper09    offline        4        0         4
casper24    free           8        0         8
casper25    free           4        0         4
casper28    free           8        0         8
Updated 11:40 pm MST Wed Nov 20 2024
A100 Node   State     # GPUs   # Used   # Avail
casper18    free           1        0         1
casper19    offline        1        0         1
casper21    free           1        0         1
casper38    free           4        1         3
casper39    free           4        3         1
casper40    free           4        2         2
casper41    full           4        4         0
casper42    free           4        3         1
casper43    free           4        1         3
casper44    free           4        0         4
casper37    full           4        4         0
Updated 11:40 pm MST Wed Nov 20 2024
H100 Node   State     # GPUs   # Used   # Avail
casper57    offline        4        0         4
casper58    offline        4        0         4
Updated 11:40 pm MST Wed Nov 20 2024

GLADE file spaces and Campaign Storage

Individual files are removed from the /glade/derecho/scratch space automatically if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
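
To see which of your own files are approaching that limit, a rough sketch like the following can help. It is illustrative only, not a CISL tool: it walks a directory tree (the path shown is a placeholder for your own scratch directory) and flags files whose last access time (atime) is more than 120 days old. Note that atime is only an approximation of the policy's notion of "accessed" and can be imprecise on file systems mounted with relatime or noatime.

```python
import os
import time

# Threshold from the purge policy above: files not accessed in more
# than 120 days are candidates for automatic removal.
PURGE_DAYS = 120

def stale_files(root, days=PURGE_DAYS):
    """Yield paths under `root` whose last access time (atime) is older
    than `days` days. atime is only a proxy for the policy's notion of
    "accessed" and may be coarse on relatime/noatime mounts."""
    cutoff = time.time() - days * 86400
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it

# Placeholder path: substitute your own scratch directory.
for path in stale_files("/glade/derecho/scratch/your_username"):
    print(path)
```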

File Space                 TiB Used   TiB Capacity   % Used
/glade/u/home                    93            150      62%
/glade/u/apps                     2             10      19%
/glade/work                   1,300          4,096      32%
/glade/derecho/scratch       27,316         55,814      50%
/glade/campaign             113,786        125,240      91%
Updated 11:00 pm MST Wed Nov 20 2024