System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Last updated: 4:30 am MST Sat Feb 7 2026

Login Node   Status   Users   Load
derecho1     UP          47   5.85 %
derecho2     UP          36   2.02 %
derecho3     UP          29   0.38 %
derecho4     UP          29   2.15 %
derecho5     UP          52   1.48 %
derecho6     UP          42   1.65 %
derecho7     UP          48   2.30 %
derecho8     UP           0   0.98 %
CPU Nodes
  Running Jobs   2456   ( 98.7 %)
  Free             32   (  1.3 %)
GPU Nodes
  Offline           1   (  1.2 %)
  Running Jobs     25   ( 30.5 %)
  Free             56   ( 68.3 %)

Last updated: 4:30 am MST Sat Feb 7 2026

Queue     Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                229           257         565         2477       -
gpu                 19             1           0           25       -
system               0             0           0            0       -
hybrid               0             0           0            0       -
pcpu                 0             0           0            0       -
pgpu                 0             0           0            0       -
gpudev               0             0           0            0       -
cpudev               4             3          28            4       -
repair               0             0           0            0       -
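
If you are logged in, you can query the same queue activity directly from PBS. The minimal sketch below simply shells out to `qstat -Q`, the PBS Pro per-queue summary command available on Derecho and Casper login nodes; the exact column layout of its output can vary by PBS version, so it surfaces the raw text rather than parsing it.

```python
import subprocess

# A minimal sketch, assuming the PBS Pro client tools are on PATH, as
# they are on Derecho and Casper login nodes. `qstat -Q` prints one
# summary line per queue (running, queued, and held job counts); its
# column layout can vary by PBS version, so we print the raw output.

def queue_summary() -> str:
    """Return the raw `qstat -Q` queue summary."""
    result = subprocess.run(
        ["qstat", "-Q"], capture_output=True, text=True, check=True
    )
    return result.stdout

if __name__ == "__main__":
    print(queue_summary())
```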

Casper node status and queue activity

Last updated: 4:25 am MST Sat Feb 7 2026

Login Node      Status   Users   Load
casper-login1   UP           -   0.0 %
casper-login2   UP           -   0.0 %
HTC Nodes (453 of 5812 CPUs in use, 7.8 %)
  Partially Allocated   100   ( 84.7 %)
  Free                   18   ( 15.3 %)
Large Memory Nodes
  Partially Allocated     1   ( 12.5 %)
  Free                    7   ( 87.5 %)
GP100 Visualization Nodes (0 of 54 GPU sessions in use, 0.0 %)
  Free                    9   (100.0 %)
L40 Visualization Nodes (1 of 42 GPU sessions in use, 2.4 %)
  Partially Allocated     1   ( 16.7 %)
  Free                    5   ( 83.3 %)
V100 GPU Nodes (41 of 52 GPUs in use, 78.8 %)
  Partially Allocated     7   ( 87.5 %)
  Offline                 1   ( 12.5 %)
A100 GPU Nodes (13 of 34 GPUs in use, 38.2 %)
  Partially Allocated     6   ( 60.0 %)
  Offline                 1   ( 10.0 %)
  Free                    3   ( 30.0 %)
H100 GPU Nodes (6 of 8 GPUs in use, 75.0 %)
  Partially Allocated     2   (100.0 %)
RDA Nodes
  Partially Allocated     3   ( 37.5 %)
  Free                    5   ( 62.5 %)
JupyterHub Login Nodes
  Partially Allocated     7   ( 41.2 %)
  Fully Allocated         8   ( 47.1 %)
  Offline                 2   ( 11.8 %)

Last updated: 4:25 am MST Sat Feb 7 2026

Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  260          1991          18         453       -
vis                    1             0           0          18       -
largemem               1             0           1          14       -
rda                    0             0           0           0       -
jhublogin            224             0           0         448       -
system                 0             0           0           0       -
gdex                  51             0           0         156       -
nvgpu                 38             2           0         125       -
amdgpu                 0             0           0           0       -
R2024160               0             0           0           0       -

V100, A100, and H100 node status

Last updated: 4:25 am MST Sat Feb 7 2026

V100 Node   State     # GPUs   # Used   # Avail
casper30    free           8        5         3
casper31    full           8        8         0
casper24    offline        8        0         8
casper28    full           8        8         0
casper36    full           4        4         0
casper08    full           8        8         0
casper29    full           4        4         0
casper25    full           4        4         0

A100 Node   State     # GPUs   # Used   # Avail
casper38    free           4        1         3
casper42    free           4        2         2
casper37    offline        4        0         4
casper39    free           4        1         3
casper40    free           4        0         4
casper43    free           4        0         4
casper18    full           1        1         0
casper21    free           1        0         1
casper41    full           4        4         0
casper44    full           4        4         0

H100 Node   State     # GPUs   # Used   # Avail
casper57    full           4        4         0
casper58    free           4        2         2
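
The per-node GPU counts above can also be reconstructed from PBS itself. The sketch below assumes PBS Pro's JSON output format for `pbsnodes -a -F json` and assumes each GPU node advertises an `ngpus` resource under `resources_available` and `resources_assigned`, as Casper's nodes do; verify both assumptions on your system before relying on it.

```python
import json
import subprocess

# A hedged sketch: rebuild a per-node GPU availability table from PBS.
# Assumes `pbsnodes -a -F json` is supported (PBS Pro) and that GPU
# nodes report an `ngpus` resource; both are assumptions to verify.

def gpu_availability() -> None:
    raw = subprocess.run(
        ["pbsnodes", "-a", "-F", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    nodes = json.loads(raw)["nodes"]
    for name, info in sorted(nodes.items()):
        total = int(info.get("resources_available", {}).get("ngpus", 0))
        if total == 0:
            continue  # skip CPU-only nodes
        used = int(info.get("resources_assigned", {}).get("ngpus", 0))
        state = info.get("state", "unknown")
        print(f"{name:10} {state:10} {total:2d} GPUs   "
              f"{used} used   {total - used} avail")

if __name__ == "__main__":
    gpu_availability()
```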

GLADE file spaces and Campaign Storage

Individual files are removed from the /glade/scratch space automatically if they have not been accessed (for example: modified, read, or copied) in more than 120 days.
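
As a convenience, the minimal sketch below shows one way to list files in your own scratch directory that have gone unaccessed for more than 120 days and are therefore candidates for removal. The scratch path is an example only, and `st_atime` reflects whatever the filesystem records, which may not match the purge policy's definition of "accessed" exactly.

```python
import os
import time
from pathlib import Path

# A minimal sketch: list files whose recorded access time is older
# than the 120-day purge window. The scratch path below is an example;
# substitute your own directory. st_atime is only as accurate as the
# filesystem's access-time tracking, so treat results as advisory.

PURGE_DAYS = 120

def files_at_risk(root: str):
    cutoff = time.time() - PURGE_DAYS * 86400
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            yield path

if __name__ == "__main__":
    scratch = os.path.expandvars("/glade/derecho/scratch/$USER")
    for f in files_at_risk(scratch):
        print(f)
```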

Last updated: 4:00 am MST Sat Feb 7 2026

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                 106            150      71%
/glade/u/apps                   3             10      25%
/glade/work                 1,542          4,096      38%
/glade/derecho/scratch     31,004         55,814      57%
/glade/campaign           126,211        135,030      94%
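
To spot-check the current usage of a file space yourself, a minimal sketch using Python's `shutil.disk_usage` is shown below. It reports what the mount point exposes, which mirrors the TiB Used / TiB Capacity / % Used columns above; quota-managed spaces such as home directories may require the site's own quota tooling instead, so treat the numbers as approximate.

```python
import shutil

# A minimal sketch: report used vs. total space for a mounted file
# space. shutil.disk_usage shows what the mount exposes; per-user
# quotas are not reflected here.

TIB = 1024 ** 4

def report(path: str) -> None:
    usage = shutil.disk_usage(path)
    pct = 100 * usage.used / usage.total
    print(f"{path}: {usage.used / TIB:,.0f} of "
          f"{usage.total / TIB:,.0f} TiB used ({pct:.0f}%)")

if __name__ == "__main__":
    # Example paths from the table above; adjust to spaces you can see.
    for space in ["/glade/work", "/glade/campaign"]:
        report(space)
```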