System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node    Status   Users     Load
derecho1      UP         111   6.73 %
derecho2      UP          74   3.93 %
derecho3      UP          79   6.58 %
derecho4      UP          67   7.73 %
derecho5      UP          84   3.87 %
derecho6      UP          72   3.59 %
derecho7      UP          59   3.65 %
derecho8      UP           0   1.03 %
CPU Nodes
  Offline          12   ( 0.5 %)
  Running Jobs   1773   ( 71.3 %)
  Free            703   ( 28.3 %)
GPU Nodes
  Running Jobs     80   ( 97.6 %)
  Free              2   ( 2.4 %)
Updated 8:30 pm MDT Wed Mar 26 2025
Queue       Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                  235             0         119         1773     117
gpu                   69            14           5           80      13
system                 0             0           0            0       -
hybrid                 0             0           0            0       -
pcpu                   0             0           0            0       -
pgpu                   0             0           0            0       -
gpudev                 2             0           0            2       2
cpudev                 3             0          23            3      14
repair                 0             0           0            0       -
jhub                   0             0           0            0       -
M8873043               0             0           0            0       -
Updated 8:30 pm MDT Wed Mar 26 2025
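
These queue counts are refreshed every few minutes along with the rest of this page. For live numbers on Derecho or Casper, you can query the PBS scheduler directly from a login node. The Python sketch below simply shells out to the standard PBS command qstat -Q, which prints one line per queue with its queued, running, and held job counts; the queue names passed in are just examples, and any queue from the tables on this page works.

    import subprocess

    def show_queues(*queues: str) -> None:
        """Print the scheduler's live per-queue summary."""
        # "qstat -Q" lists each queue with its Que (queued), Run (running),
        # and Hld (held) job counts -- the same figures shown in the table.
        out = subprocess.run(["qstat", "-Q", *queues],
                             capture_output=True, text=True, check=True)
        print(out.stdout, end="")

    if __name__ == "__main__":
        show_queues("cpu", "gpu")  # example: the two main Derecho queues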

Casper node status and queue activity

Login Node       Status   Users     Load
casper-login1    UP         117   14.7 %
casper-login2    UP         112   27.5 %
HTC Nodes
  823 of 6200 CPUs in use ( 13.3 %)
  Partially Allocated    50   ( 39.7 %)
  Offline                 1   ( 0.8 %)
  Free                   75   ( 59.5 %)
Large Memory Nodes
  Partially Allocated     7   ( 87.5 %)
  Down                    1   ( 12.5 %)
GP100 Visualization Nodes
  4 of 48 GPU sessions in use ( 8.3 %)
  Partially Allocated     2   ( 25.0 %)
  Free                    6   ( 75.0 %)
L40 Visualization Nodes
  0 of 36 GPU sessions in use ( 0.0 %)
  Free                    6   (100.0 %)
V100 GPU Nodes
  12 of 52 GPUs in use ( 23.1 %)
  Partially Allocated     3   ( 37.5 %)
  Free                    5   ( 62.5 %)
A100 GPU Nodes
  12 of 34 GPUs in use ( 35.3 %)
  Partially Allocated     1   ( 10.0 %)
  Fully Allocated         5   ( 50.0 %)
  Offline                 2   ( 20.0 %)
  Free                    2   ( 20.0 %)
H100 GPU Nodes
  4 of 8 GPUs in use ( 50.0 %)
  Partially Allocated     1   ( 50.0 %)
  Free                    1   ( 50.0 %)
RDA Nodes
  Partially Allocated     2   ( 40.0 %)
  Free                    3   ( 60.0 %)
JupyterHub Login Nodes
  Partially Allocated     3   ( 33.3 %)
  Fully Allocated         6   ( 66.7 %)
Updated 8:30 pm MDT Wed Mar 26 2025
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  589            20           0         823      70
vis                    4             1           0          21       5
largemem              10             2           0          32       7
gpgpu                  6             0           0          19       5
rda                    8             0           0          41       2
tdd                    0             0           0           0       -
jhublogin            227             0           0         495     227
system                 0             0           0           0       -
h100                   1             0           0          32       1
l40                    0             0           0           0       -
S3011784               0             0           0           0       -
Updated 8:30 pm MDT Wed Mar 26 2025

V100, A100, and H100 node status

V100 Node   State   # GPUs   # Used   # Avail
casper36    free         4        2         2
casper08    full         8        8         0
casper29    free         4        2         2
casper30    free         8        0         8
casper31    free         8        0         8
casper24    free         8        0         8
casper25    free         4        0         4
casper28    free         8        0         8
Updated 8:30 pm MDT Wed Mar 26 2025
A100 Node   State      # GPUs   # Used   # Avail
casper18    free            1        0         1
casper21    free            1        0         1
casper38    job-busy        4        2         2
casper39    offline         4        0         4
casper40    job-busy        4        2         2
casper41    free            4        2         2
casper42    job-busy        4        2         2
casper43    job-busy        4        2         2
casper44    offline         4        0         4
casper37    job-busy        4        2         2
Updated 8:30 pm MDT Wed Mar 26 2025
H100 Node   State   # GPUs   # Used   # Avail
casper57    full         4        4         0
casper58    free         4        0         4
Updated 8:30 pm MDT Wed Mar 26 2025

GLADE file spaces and Campaign Storage

Individual files are removed from the /glade/derecho/scratch space automatically if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
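
If you want a head start on that purge, you can scan your own scratch directory for files whose last access falls outside the 120-day window. The Python sketch below is a minimal illustration, not an official tool: the scratch path is taken from the table below, the per-user subdirectory layout is an assumption, and walking a large scratch tree can take a long time.

    import os
    import time
    from pathlib import Path

    PURGE_DAYS = 120  # retention window from the policy above
    ROOT = Path("/glade/derecho/scratch") / os.environ["USER"]  # assumed layout

    def at_risk_files(root: Path, days: int = PURGE_DAYS):
        """Yield (path, days since last access) for files beyond the window."""
        now = time.time()
        cutoff = now - days * 86400
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = Path(dirpath) / name
                try:
                    atime = path.stat().st_atime  # last-access timestamp
                except OSError:
                    continue  # file vanished or is unreadable; skip it
                if atime < cutoff:
                    yield path, int((now - atime) // 86400)

    if __name__ == "__main__":
        for path, age in at_risk_files(ROOT):
            print(f"{age:4d} days  {path}")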

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                  95            150      63%
/glade/u/apps                   2             10      17%
/glade/work                 1,371          4,096      34%
/glade/derecho/scratch     24,372         55,814      45%
/glade/campaign           117,161        125,240      94%
Updated 8:00 pm MDT Wed Mar 26 2025
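
The % Used column is simply used capacity over total capacity; for /glade/campaign, 117,161 / 125,240 is about 93.5 %, which the table rounds to 94 %. For a quick check from a login node, the sketch below reports the same three columns with Python's shutil.disk_usage; numbers from statvfs can differ somewhat from the GPFS accounting behind this table, so treat it as an approximation.

    import shutil

    SPACES = [
        "/glade/u/home",
        "/glade/u/apps",
        "/glade/work",
        "/glade/derecho/scratch",
        "/glade/campaign",
    ]
    TIB = 1024 ** 4  # bytes per tebibyte

    for space in SPACES:
        du = shutil.disk_usage(space)  # total, used, free in bytes
        print(f"{space:<24} {du.used / TIB:>10,.0f} {du.total / TIB:>14,.0f}"
              f" {100 * du.used / du.total:>5.0f}%")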