System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Last updated: 7:45 am MST Thu Jan 22 2026

Login Node | Status | Users | Load
derecho1 | UP | 63 | 5.29 %
derecho2 | UP | 62 | 4.62 %
derecho3 | UP | 36 | 4.54 %
derecho4 | UP | 68 | 0.73 %
derecho5 | UP | 23 | 1.30 %
derecho6 | UP | 61 | 3.86 %
derecho7 | UP | 49 | 5.40 %
derecho8 | UP | - | 1.01 %
CPU Nodes
Offline 22 ( 0.9 %)
Running Jobs 2460 ( 98.9 %)
Free 6 ( 0.2 %)
GPU Nodes
Running Jobs 82 (100.0 %)

Last updated: 7:45 am MST Thu Jan 22 2026

Queue | Jobs Running | Jobs Queued | Jobs Held | Nodes Used | Users
cpu | 271 | 364 | 548 | 2458 | -
gpu | 27 | 1 | 2 | 81 | -
system | 0 | 0 | 0 | 0 | -
hybrid | 0 | 0 | 0 | 0 | -
pcpu | 0 | 1 | 0 | 0 | -
pgpu | 0 | 0 | 0 | 0 | -
gpudev | 1 | 0 | 0 | 1 | -
cpudev | 6 | 0 | 54 | 6 | -
repair | 0 | 0 | 0 | 0 | -

Casper node status and queue activity

Last updated: 7:45 am MST Thu Jan 22 2026

Login Node | Status | Users | Load
casper-login1 | UP | - | 0.0 %
casper-login2 | UP | - | 0.0 %
HTC Nodes
528 of 5812 CPUs in use ( 9.1 %)
Partially Allocated 55 ( 46.6 %)
Down 1 ( 0.8 %)
Offline 1 ( 0.8 %)
Free 61 ( 51.7 %)
Large Memory Nodes
Partially Allocated 1 ( 12.5 %)
Free 7 ( 87.5 %)
GP100 Visualization Nodes
0 of 54 GPU sessions in use ( 0.0 %)
Down 1 ( 11.1 %)
Free 8 ( 88.9 %)
L40 Visualization Nodes
0 of 42 GPU sessions in use ( 0.0 %)
Free 6 (100.0 %)
V100 GPU Nodes
28 of 52 GPUs in use ( 53.8 %)
Partially Allocated 5 ( 62.5 %)
Down 1 ( 12.5 %)
Free 2 ( 25.0 %)
A100 GPU Nodes
10 of 34 GPUs in use ( 29.4 %)
Partially Allocated 3 ( 30.0 %)
Offline 2 ( 20.0 %)
Free 5 ( 50.0 %)
H100 GPU Nodes
0 of 8 GPUs in use ( 0.0 %)
Free 2 (100.0 %)
RDA Nodes
Partially Allocated 2 ( 25.0 %)
Free 6 ( 75.0 %)
JupyterHub Login Nodes
Partially Allocated 2 ( 11.8 %)
Fully Allocated 14 ( 82.4 %)
Offline 1 ( 5.9 %)

Last updated: 7:45 am MST Thu Jan 22 2026

Queue | Jobs Running | Jobs Queued | Jobs Held | CPUs Used | Users
htc | 185 | 2278 | 63 | 528 | -
vis | 0 | 1 | 0 | 0 | -
largemem | 1 | 1 | 1 | 1 | -
rda | 0 | 0 | 0 | 0 | -
jhublogin | 288 | 1 | 0 | 576 | -
system | 0 | 0 | 0 | 0 | -
gdex | 15 | 1 | 0 | 60 | -
nvgpu | 20 | 3 | 0 | 177 | -
amdgpu | 0 | 0 | 0 | 0 | -
R1078008 | 0 | 0 | 0 | 0 | -

V100 node status

Last updated: 7:45 am MST Thu Jan 22 2026

V100 Node | State | # GPUs | # Used | # Avail
casper30 | full | 8 | 8 | 0
casper31 | free | 8 | 0 | 8
casper24 | full | 8 | 8 | 0
casper28 | free | 8 | 7 | 1
casper36 | free | 4 | 1 | 3
casper08 | free | 8 | 0 | 8
casper29 | full | 4 | 4 | 0
casper25 | down | 4 | 0 | 4

A100 Node | State | # GPUs | # Used | # Avail
casper38 | full | 4 | 4 | 0
casper42 | free | 4 | 2 | 2
casper37 | down | 4 | 0 | 4
casper39 | free | 4 | 0 | 4
casper40 | full | 4 | 4 | 0
casper43 | free | 4 | 0 | 4
casper18 | free | 1 | 0 | 1
casper21 | free | 1 | 0 | 1
casper41 | offline | 4 | 0 | 4
casper44 | free | 4 | 0 | 4

H100 Node | State | # GPUs | # Used | # Avail
casper57 | free | 4 | 0 | 4
casper58 | free | 4 | 0 | 4

GLADE file spaces and Campaign Storage

Individual files are automatically removed from the /glade/scratch space if they have not been accessed (that is, modified, read, or copied) in more than 120 days.
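As a sketch of how that purge criterion works, the snippet below walks a directory tree and flags files whose last access time is more than 120 days old. This is only an illustration of the rule, not the purge tool itself; the scratch path in the usage comment is a hypothetical example, so substitute your own directory.

```python
import os
import time

PURGE_AGE_DAYS = 120  # files untouched longer than this are purge candidates


def purge_candidates(root: str) -> list[str]:
    """Return files under `root` whose last access is older than PURGE_AGE_DAYS."""
    cutoff = time.time() - PURGE_AGE_DAYS * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
            except FileNotFoundError:
                pass  # a file may vanish mid-scan on a busy file system
    return stale


# Usage (hypothetical path; point it at your own scratch space):
# purge_candidates("/glade/derecho/scratch/" + os.environ["USER"])
```

Note that access times can be updated by reads as well as writes, which is why merely copying a file counts as "accessing" it for purge purposes.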

Last updated: 7:00 am MST Thu Jan 22 2026

File Space | TiB Used | TiB Capacity | % Used
/glade/u/home | 107 | 150 | 72%
/glade/u/apps | 3 | 10 | 25%
/glade/work | 1,568 | 4,096 | 39%
/glade/derecho/scratch | 30,758 | 55,814 | 56%
/glade/campaign | 127,008 | 135,030 | 95%