System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If the tables indicate a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node | Status | Users | Load
derecho1 | UP | 110 | 9.43 %
derecho2 | UP | 84 | 2.23 %
derecho3 | UP | 88 | 11.44 %
derecho4 | UP | 129 | 17.86 %
derecho5 | UP | 82 | 1.02 %
derecho6 | UP | 75 | 4.65 %
derecho7 | UP | 87 | 4.41 %
derecho8 | UP | - | 0.02 %
CPU Nodes
Reserved: 8 (0.3 %)
Offline: 12 (0.5 %)
Running Jobs: 2444 (98.2 %)
Free: 24 (1.0 %)
GPU Nodes
Running Jobs: 54 (65.9 %)
Free: 28 (34.1 %)
Updated 11:00 am MST Fri Feb 21 2025
Queue | Jobs Running | Jobs Queued | Jobs Held | Nodes Used | Users
cpu | 159 | 403 | 244 | 2444 | 136
gpu | 19 | 1 | 18 | 53 | 14
system | 0 | 0 | 0 | 32 | -
hybrid | 0 | 0 | 0 | 0 | -
pcpu | 0 | 0 | 0 | 0 | -
pgpu | 0 | 0 | 0 | 0 | -
gpudev | 2 | 0 | 0 | 2 | 2
cpudev | 11 | 0 | 25 | 11 | 24
repair | 0 | 0 | 0 | 0 | -
jhub | 0 | 0 | 0 | 0 | -
R7808487 | 1 | 0 | 0 | 6 | 1
Updated 11:00 am MST Fri Feb 21 2025
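
These counts are a snapshot; for a live view you can ask the scheduler directly. Derecho (and Casper) schedule jobs with PBS Pro, so running qstat -Q on a login node prints a similar per-queue summary. The Python sketch below shells out to that command and extracts the queued, running, and held counts; it assumes PBS Pro's default qstat -Q column order (Queue, Max, Tot, Ena, Str, Que, Run, Hld, ...), which can vary between PBS versions, so treat the parsing as illustrative rather than authoritative.

    import subprocess

    def queue_counts():
        """Summarize "qstat -Q" output as {queue: (queued, running, held)}.

        Assumes PBS Pro's default column layout:
        Queue  Max  Tot  Ena  Str  Que  Run  Hld  Wat  Trn  Ext  Type
        """
        out = subprocess.run(
            ["qstat", "-Q"], capture_output=True, text=True, check=True
        ).stdout
        counts = {}
        for line in out.splitlines()[2:]:  # skip the header and dashed separator
            fields = line.split()
            if len(fields) >= 8:
                counts[fields[0]] = (int(fields[5]), int(fields[6]), int(fields[7]))
        return counts

    if __name__ == "__main__":
        for queue, (queued, running, held) in queue_counts().items():
            print(f"{queue}: {running} running, {queued} queued, {held} held")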

Casper node status and queue activity

Login Node | Status | Users | Load
casper-login1 | UP | 125 | 13.1 %
casper-login2 | UP | 129 | 22.3 %
HTC Nodes
439 of 2232 CPUs in use (19.7 %)
Partially Allocated: 43 (69.4 %)
Offline: 1 (1.6 %)
Free: 18 (29.0 %)
Large Memory Nodes
Partially Allocated: 5 (62.5 %)
Free: 3 (37.5 %)
GP100 Visualization Nodes
5 of 48 GPU sessions in use (10.4 %)
Partially Allocated: 2 (25.0 %)
Free: 6 (75.0 %)
L40 Visualization Nodes
1 of 36 GPU sessions in use (2.8 %)
Partially Allocated: 1 (16.7 %)
Free: 5 (83.3 %)
V100 GPU Nodes
19 of 56 GPUs in use (33.9 %)
Partially Allocated: 5 (55.6 %)
Free: 4 (44.4 %)
A100 GPU Nodes
17 of 34 GPUs in use (50.0 %)
Partially Allocated: 7 (70.0 %)
Offline: 2 (20.0 %)
Free: 1 (10.0 %)
H100 GPU Nodes
0 of 8 GPUs in use (0.0 %)
Free: 2 (100.0 %)
RDA Nodes
Partially Allocated: 4 (80.0 %)
Free: 1 (20.0 %)
JupyterHub Login Nodes
Partially Allocated: 3 (33.3 %)
Fully Allocated: 6 (66.7 %)
Updated 11:00 am MST Fri Feb 21 2025
Queue | Jobs Running | Jobs Queued | Jobs Held | CPUs Used | Users
htc | 213 | 2 | 87 | 439 | 114
vis | 5 | 0 | 0 | 38 | 5
largemem | 10 | 1 | 0 | 16 | 8
gpgpu | 15 | 0 | 0 | 15 | 3
rda | 41 | 0 | 0 | 124 | 5
tdd | 0 | 0 | 0 | 0 | -
jhublogin | 307 | 0 | 0 | 575 | 307
system | 0 | 0 | 1 | 0 | 1
h100 | 0 | 0 | 0 | 0 | -
l40 | 1 | 0 | 0 | 4 | 1
S3011784 | 0 | 0 | 0 | 0 | -
Updated 11:00 am MST Fri Feb 21 2025

V100, A100, and H100 node status

V100 Node | State | # GPUs | # Used | # Avail
casper36 | full | 4 | 4 | 0
casper08 | free | 8 | 5 | 3
casper29 | free | 4 | 3 | 1
casper30 | free | 8 | 0 | 8
casper31 | free | 8 | 0 | 8
casper09 | free | 4 | 3 | 1
casper24 | free | 8 | 0 | 8
casper25 | full | 4 | 4 | 0
casper28 | free | 8 | 0 | 8
Updated 11:00 am MST Fri Feb 21 2025
A100 Node | State | # GPUs | # Used | # Avail
casper18 | free | 1 | 0 | 1
casper21 | full | 1 | 1 | 0
casper38 | full | 4 | 4 | 0
casper39 | offline | 4 | 0 | 4
casper40 | free | 4 | 3 | 1
casper41 | free | 4 | 2 | 2
casper42 | free | 4 | 2 | 2
casper43 | free | 4 | 1 | 3
casper44 | offline | 4 | 0 | 4
casper37 | full | 4 | 4 | 0
Updated 11:00 am MST Fri Feb 21 2025
H100 Node | State | # GPUs | # Used | # Avail
casper57 | free | 4 | 0 | 4
casper58 | free | 4 | 0 | 4
Updated 11:00 am MST Fri Feb 21 2025
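
The "GPUs in use" summaries in the Casper section above are just these per-node counts rolled up. As a worked check, this short Python sketch reproduces the A100 summary line (17 of 34 GPUs in use, 50.0 %) from the table; the counts are transcribed by hand here, so they go stale as soon as the tables refresh.

    # Per-node (used, total) A100 GPU counts, transcribed from the table above.
    a100_nodes = {
        "casper18": (0, 1), "casper21": (1, 1), "casper37": (4, 4),
        "casper38": (4, 4), "casper39": (0, 4), "casper40": (3, 4),
        "casper41": (2, 4), "casper42": (2, 4), "casper43": (1, 4),
        "casper44": (0, 4),
    }

    used = sum(u for u, _ in a100_nodes.values())
    total = sum(t for _, t in a100_nodes.values())
    print(f"{used} of {total} GPUs in use ({used / total:.1%})")
    # Prints: 17 of 34 GPUs in use (50.0%)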

GLADE file spaces and Campaign Storage

Individual files are removed automatically from the /glade/derecho/scratch space if they have not been accessed (that is, modified, read, or copied) in more than 120 days.
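
To see which of your own files are approaching that 120-day window, you can walk your scratch directory and compare each file's last-access time against the cutoff. A minimal Python sketch follows; it assumes your scratch space lives at /glade/derecho/scratch/$USER as in the table below, and that the filesystem's st_atime approximates what the purge policy counts as an access, so treat its output as an estimate rather than the purger's own list.

    import os
    import stat
    import time
    from pathlib import Path

    CUTOFF_DAYS = 120
    now = time.time()
    cutoff = now - CUTOFF_DAYS * 86400  # 86400 seconds per day

    # Assumed location; adjust if your scratch path differs.
    scratch = Path("/glade/derecho/scratch") / os.environ["USER"]

    for path in scratch.rglob("*"):
        try:
            st = path.lstat()  # lstat: do not follow symlinks
        except OSError:
            continue  # file vanished or is unreadable mid-scan; skip it
        if stat.S_ISREG(st.st_mode) and st.st_atime < cutoff:
            age_days = (now - st.st_atime) / 86400
            print(f"{path}  (last accessed {age_days:.0f} days ago)")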

File Space | TiB Used | TiB Capacity | % Used
/glade/u/home | 94 | 150 | 63%
/glade/u/apps | 2 | 10 | 17%
/glade/work | 1,360 | 4,096 | 34%
/glade/derecho/scratch | 24,025 | 55,814 | 44%
/glade/campaign | 116,202 | 125,240 | 93%
Updated 11:00 am MST Fri Feb 21 2025
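
Note that the % Used column does not always match a direct division of the rounded TiB figures (2 / 10 is 20 %, yet /glade/u/apps shows 17 %), which suggests it is computed from the unrounded underlying numbers. Recomputing approximate percentages from the table is still a one-liner per row, as in the Python sketch below; the values are transcribed by hand and will drift from the live page.

    # (TiB used, TiB capacity) pairs transcribed from the table above.
    spaces = {
        "/glade/u/home": (94, 150),
        "/glade/u/apps": (2, 10),
        "/glade/work": (1_360, 4_096),
        "/glade/derecho/scratch": (24_025, 55_814),
        "/glade/campaign": (116_202, 125_240),
    }

    for name, (used, capacity) in spaces.items():
        # Inputs are rounded, so expect small deviations from the table's own column.
        print(f"{name}: {used / capacity:.0%} used")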