System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If the tables indicate that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP          89   13.68 %
derecho2     UP          86    7.52 %
derecho3     UP          64    2.80 %
derecho4     UP          85    4.64 %
derecho5     UP          64    5.61 %
derecho6     UP          89    6.50 %
derecho7     UP          70    3.30 %
derecho8     UP           3    0.19 %
CPU Nodes
  Offline           8   (  0.3 %)
  Running Jobs   2461   ( 98.9 %)
  Free             19   (  0.8 %)
GPU Nodes
  Offline           5   (  6.1 %)
  Running Jobs     58   ( 70.7 %)
  Free             19   ( 23.2 %)
Updated 7:00 pm MDT Tue Jul 15 2025
Queue      Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                 426            89         151         2461     135
gpu                  25             4           7           58      13
system                0             0           0            0       -
hybrid                0             0           0            0       -
pcpu                  0             0           0            0       -
pgpu                  0             0           0            0       -
gpudev                1             0           1            1       2
cpudev                5             1          65            5      20
repair                0             0           0            0       -
jhub                  0             0           0            0       -
M9921347              0             0           0            0       -
Updated 7:00 pm MDT Tue Jul 15 2025
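
Derecho's workload is managed by PBS Pro, so the queue activity shown above can also be pulled live from a login node. The following is a minimal sketch, not an official tool: it assumes the PBS client commands are on your PATH and that qstat -Q uses its stock column layout (queued/running/held counts in columns 6-8), which is not guaranteed across PBS versions.

```python
#!/usr/bin/env python3
"""Summarize PBS queue activity, similar to the table above.

A minimal sketch for a Derecho or Casper login node. Assumes the
default `qstat -Q` column order: Queue Max Tot Ena Str Que Run Hld ...
"""
import subprocess

out = subprocess.run(["qstat", "-Q"], capture_output=True,
                     text=True, check=True).stdout

for line in out.splitlines()[2:]:      # skip the two header lines
    fields = line.split()
    if len(fields) < 8:
        continue                        # blank or malformed line
    queue, queued, running, held = fields[0], fields[5], fields[6], fields[7]
    print(f"{queue:>12}  {running} running, {queued} queued, {held} held")
```

For more detail on a single queue, qstat -Qf <queue_name> prints the full attribute list rather than just the counts.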

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP         150   25.2 %
casper-login2   UP           -    0.0 %
HTC Nodes
  1584 of 6320 CPUs in use ( 25.1 %)
  Partially Allocated    92   ( 73.0 %)
  Fully Allocated         3   (  2.4 %)
  Offline                 1   (  0.8 %)
  Free                   30   ( 23.8 %)
Large Memory Nodes
  Partially Allocated     1   ( 12.5 %)
  Free                    7   ( 87.5 %)
GP100 Visualization Nodes
  11 of 48 GPU sessions in use ( 22.9 %)
  Partially Allocated     2   ( 25.0 %)
  Free                    6   ( 75.0 %)
L40 Visualization Nodes
  0 of 42 GPU sessions in use ( 0.0 %)
  Free                    6   (100.0 %)
V100 GPU Nodes
  28 of 52 GPUs in use ( 53.8 %)
  Partially Allocated     7   ( 87.5 %)
  Free                    1   ( 12.5 %)
A100 GPU Nodes
  5 of 34 GPUs in use ( 14.7 %)
  Partially Allocated     3   ( 30.0 %)
  Offline                 2   ( 20.0 %)
  Free                    5   ( 50.0 %)
H100 GPU Nodes
  6 of 8 GPUs in use ( 75.0 %)
  Partially Allocated     2   (100.0 %)
RDA Nodes
  Partially Allocated     2   ( 22.2 %)
  Free                    7   ( 77.8 %)
JupyterHub Login Nodes
  Partially Allocated     3   ( 33.3 %)
  Fully Allocated         6   ( 66.7 %)
Updated 7:00 pm MDT Tue Jul 15 2025
Queue        Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                   827           240         154        1584     132
vis                    11             0           0          39      10
largemem                1             0           0           2       1
gpgpu                   5             3           0          35       5
rda                    19             1           1          63       5
tdd                     1             0           0           8       1
jhublogin             317             0           0         585     317
system                  1             0           0          96       1
h100                    6             0          12           6       1
l40                     0             0           0           0       -
S5317010                0             0           0           0       -
M5505230                0             0           0           0       -
rdagen2                 0             1           0           0       1
Updated 7:00 pm MDT Tue Jul 15 2025

V100, A100, and H100 node status

V100 Node   State   # GPUs   # Used   # Avail
casper30    free         8        4         4
casper31    free         8        4         4
casper24    free         8        4         4
casper28    free         8        4         4
casper36    free         4        3         1
casper08    full         8        8         0
casper29    free         4        1         3
casper25    free         4        0         4
Updated 7:00 pm MDT Tue Jul 15 2025
A100 Node   State   # GPUs   # Used   # Avail
casper38    free         4        1         3
casper42    free         4        0         4
casper37    free         4        2         2
casper39    free         4        2         2
casper40    down         4        0         4
casper41    free         4        0         4
casper43    down         4        0         4
casper44    free         4        0         4
casper18    free         1        0         1
casper21    free         1        0         1
Updated 7:00 pm MDT Tue Jul 15 2025
H100 Node   State   # GPUs   # Used   # Avail
casper57    full         4        4         0
casper58    free         4        2         2
Updated 7:00 pm MDT Tue Jul 15 2025
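
The per-node GPU counts in these tables reflect the scheduler's view of each node. If you want the same numbers on demand, the sketch below parses pbsnodes output; it assumes the stock PBS Pro attribute names resources_available.ngpus and resources_assigned.ngpus, and that each node record begins with an unindented node name, which holds for default pbsnodes -a output but is an assumption, not a documented interface.

```python
#!/usr/bin/env python3
"""Per-node GPU availability, similar to the tables above.

A minimal sketch: reads `pbsnodes -a` and reports total/used/available
GPUs for every node that advertises an ngpus resource.
"""
import subprocess

out = subprocess.run(["pbsnodes", "-a"], capture_output=True,
                     text=True, check=True).stdout

node, total, used = None, 0, 0
for line in out.splitlines():
    if line and not line[0].isspace():          # unindented line = node name
        if node and total:
            print(f"{node}: {total} GPUs, {used} used, {total - used} avail")
        node, total, used = line.strip(), 0, 0
    elif "resources_available.ngpus =" in line:
        total = int(line.split("=")[1])
    elif "resources_assigned.ngpus =" in line:
        used = int(line.split("=")[1])
if node and total:                               # flush the final record
    print(f"{node}: {total} GPUs, {used} used, {total - used} avail")
```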

GLADE file spaces and Campaign Storage

Individual files are removed from the /glade/derecho/scratch space automatically if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
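
To see which of your files are approaching that limit, you can scan on last-access time yourself. A minimal sketch, assuming your scratch area sits at /glade/derecho/scratch/$USER; note that on a large tree the walk can take a while, and that some mount options update atime only lazily.

```python
#!/usr/bin/env python3
"""List files whose last-access time exceeds the 120-day purge window."""
import os
import sys
import time

PURGE_DAYS = 120
cutoff = time.time() - PURGE_DAYS * 86400

# Scan the directory given on the command line, else your scratch area
# (the default path below is an assumption based on the table that follows).
root = sys.argv[1] if len(sys.argv) > 1 else os.path.expandvars(
    "/glade/derecho/scratch/$USER")

for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.stat(path).st_atime < cutoff:
                print(path)                 # at risk of being purged
        except OSError:
            pass                            # vanished or unreadable; skip
```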

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                 100            150      67%
/glade/u/apps                   2             10      19%
/glade/work                 1,442          4,096      36%
/glade/derecho/scratch     27,401         55,814      50%
/glade/campaign           117,722        125,240      94%
Updated 7:00 pm MDT Tue Jul 15 2025