System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If the tables indicate that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Last updated: 2:10 am MST Fri Dec 5 2025

Login Node   Status   Users   Load
derecho1     UP          50   5.52 %
derecho2     UP          58   1.15 %
derecho3     UP          67   6.23 %
derecho4     UP          47   3.81 %
derecho5     UP          42   0.59 %
derecho6     UP          43   4.80 %
derecho7     UP          68   3.31 %
derecho8     UP           0   0.01 %
CPU Nodes
Offline 12 ( 0.5 %)
Running Jobs 2391 ( 96.1 %)
Free 85 ( 3.4 %)
GPU Nodes
Running Jobs 37 ( 45.1 %)
Free 45 ( 54.9 %)


Queue     Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                255           430         300         2387   -
gpu                 13             1          22           37   -
system               0             0           0            0   -
hybrid               0             0           0            0   -
pcpu                 0             0           0            0   -
pgpu                 0             0           0            0   -
gpudev               0             0           0            0   -
cpudev              94             0         104           97   -
repair               0             0           0            0   -

Casper node status and queue activity

Last updated: 2:10 am MST Fri Dec 5 2025

Login Node      Status   Users   Load
casper-login1   UP          98   20.4 %
casper-login2   UP          80   41.3 %
HTC Nodes
461 of 5812 CPUs in use ( 7.9 %)
Partially Allocated 35 ( 29.7 %)
Free 83 ( 70.3 %)
Large Memory Nodes
Partially Allocated 3 ( 37.5 %)
Free 5 ( 62.5 %)
GP100 Visualization Nodes
2 of 54 GPU sessions in use ( 3.7 %)
Partially Allocated 2 ( 22.2 %)
Free 7 ( 77.8 %)
L40 Visualization Nodes
2 of 42 GPU sessions in use ( 4.8 %)
Partially Allocated 1 ( 16.7 %)
Free 5 ( 83.3 %)
V100 GPU Nodes
1 of 52 GPUs in use ( 1.9 %)
Partially Allocated 1 ( 12.5 %)
Free 7 ( 87.5 %)
A100 GPU Nodes
13 of 34 GPUs in use ( 38.2 %)
Partially Allocated 3 ( 30.0 %)
Fully Allocated 1 ( 10.0 %)
Offline 1 ( 10.0 %)
Free 5 ( 50.0 %)
H100 GPU Nodes
5 of 8 GPUs in use ( 62.5 %)
Partially Allocated 1 ( 50.0 %)
Fully Allocated 1 ( 50.0 %)
RDA Nodes
Partially Allocated 2 ( 25.0 %)
Free 6 ( 75.0 %)
JupyterHub Login Nodes
Partially Allocated 5 ( 29.4 %)
Fully Allocated 12 ( 70.6 %)


Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  137             1         377         461   -
vis                    4             0           0           4   -
largemem               3             0           0          75   -
rda                    2             0           0          12   -
jhublogin            288             0           0         844   -
system                 0             0           0           0   -
gdex                  25             0           0         101   -
nvgpu                  5             0           0         221   -
amdgpu                 1             0           0          32   -
R416808                0             0           0           0   -

V100, A100, and H100 node status

Last updated: 2:10 am MST Fri Dec 5 2025

V100 Node   State   # GPUs   # Used   # Avail
casper30    free         8        1         7
casper31    free         8        0         8
casper24    free         8        0         8
casper28    free         8        0         8
casper36    free         4        0         4
casper08    free         8        0         8
casper29    free         4        0         4
casper25    free         4        0         4
A100 Node   State     # GPUs   # Used   # Avail
casper38    free           4        0         4
casper42    free           4        0         4
casper37    offline        4        0         4
casper39    free           4        0         4
casper40    full           4        4         0
casper43    free           4        0         4
casper18    full           1        1         0
casper21    free           1        0         1
casper41    full           4        4         0
casper44    full           4        4         0
H100 Node   State   # GPUs   # Used   # Avail
casper57    free         4        1         3
casper58    full         4        4         0

GLADE file spaces and Campaign Storage

Individual files are automatically removed from the /glade/scratch space if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
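One way to see which of your files are past the purge window is to compare each file's last-access time against the 120-day cutoff. The sketch below is a minimal, unofficial example (not a CISL-provided tool); `files_at_risk` is a hypothetical helper, and the directory you point it at is up to you:

```python
import os
import time

PURGE_DAYS = 120  # scratch purge window described above
CUTOFF = time.time() - PURGE_DAYS * 86400  # seconds

def files_at_risk(top):
    """Return files under `top` whose last-access time is older than the cutoff."""
    at_risk = []
    for dirpath, _dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < CUTOFF:
                    at_risk.append(path)
            except OSError:
                pass  # file removed or unreadable mid-scan; skip it
    return at_risk
```

For example, `files_at_risk("/glade/derecho/scratch/" + os.environ["USER"])` would list files in your scratch directory that have not been accessed within the window. Note that access times can be affected by backup or indexing tools, so treat the result as an estimate.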

Last updated: 2:00 am MST Fri Dec 5 2025

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                 106            150      71%
/glade/u/apps                   3             10      24%
/glade/work                 1,546          4,096      38%
/glade/derecho/scratch     30,160         55,814      55%
/glade/campaign           125,119        135,030      93%