System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP          67   2.95 %
derecho2     UP          84   0.22 %
derecho3     UP          56   7.39 %
derecho4     UP          39   1.05 %
derecho5     UP          53   1.55 %
derecho6     UP          58   8.84 %
derecho7     UP          61   4.64 %
derecho8     UP           -   0.02 %
CPU Nodes
  Offline         18   ( 0.7 %)
  Running Jobs  1843   (74.1 %)
  Free           627   (25.2 %)
GPU Nodes
  Offline          2   ( 2.4 %)
  Running Jobs    80   (97.6 %)
Updated 12:30 am MST Tue Jan 21 2025
Queue      Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                 224             0         100         1762      83
gpu                  17            10          10           80       6
system                0             0           0            0       -
hybrid                0             0           0            0       -
pcpu                  2             0           1           80       1
pgpu                  0             0           0            0       -
gpudev                0             0           0            0       -
cpudev                4             0          18            5      13
repair                0             0           0            0       -
jhub                  0             0           0            0       -
M7491254              0             0           0            0       -
Updated 12:30 am MST Tue Jan 21 2025

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP          90    9.2 %
casper-login2   UP          78   71.7 %
HTC Nodes
  326 of 2232 CPUs in use ( 14.6 %)
  Partially Allocated   23   ( 37.1 %)
  Fully Allocated        2   (  3.2 %)
  Free                  37   ( 59.7 %)
Large Memory Nodes
  Partially Allocated    5   ( 62.5 %)
  Free                   3   ( 37.5 %)
GP100 Visualization Nodes
  4 of 48 GPU sessions in use ( 8.3 %)
  Partially Allocated    1   ( 12.5 %)
  Offline                1   ( 12.5 %)
  Free                   6   ( 75.0 %)
L40 Visualization Nodes
  1 of 36 GPU sessions in use ( 2.8 %)
  Partially Allocated    1   ( 16.7 %)
  Free                   5   ( 83.3 %)
V100 GPU Nodes
  9 of 56 GPUs in use ( 16.1 %)
  Partially Allocated    2   ( 22.2 %)
  Offline                1   ( 11.1 %)
  Free                   6   ( 66.7 %)
A100 GPU Nodes
  2 of 34 GPUs in use ( 5.9 %)
  Partially Allocated    2   ( 20.0 %)
  Free                   8   ( 80.0 %)
H100 GPU Nodes
  0 of 8 GPUs in use ( 0.0 %)
  Free                   2   (100.0 %)
RDA Nodes
  Partially Allocated    1   ( 20.0 %)
  Free                   4   ( 80.0 %)
JupyterHub Login Nodes
  Partially Allocated    3   ( 33.3 %)
  Fully Allocated        5   ( 55.6 %)
  Free                   1   ( 11.1 %)
Updated 12:30 am MST Tue Jan 21 2025
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  116           168          17         326      33
vis                    4             0           0           4       4
largemem               5             0           0           9       3
gpgpu                  2             0           0           9       2
rda                   28             0           0          43       4
tdd                    0             0           0           0       -
jhublogin            268             0           0         536     268
system                 0             0           2           0       1
h100                   0             0           0           0       -
l40                    1             0           0          33       1
S3011784               0             0           0           0       -
M3532675               0             0           0           0       -
Updated 12:30 am MST Tue Jan 21 2025

V100 node status

V100 Node   State   # GPUs   # Used   # Avail
casper36    free         4        1         3
casper08    full         8        8         0
casper29    free         4        0         4
casper30    free         8        0         8
casper31    free         8        0         8
casper09    down         4        0         4
casper24    free         8        0         8
casper25    free         4        0         4
casper28    free         8        0         8
Updated 12:30 am MST Tue Jan 21 2025
A100 Node   State   # GPUs   # Used   # Avail
casper18    full         1        1         0
casper21    free         1        0         1
casper38    free         4        1         3
casper39    free         4        0         4
casper40    free         4        0         4
casper41    free         4        0         4
casper42    free         4        0         4
casper43    free         4        0         4
casper44    free         4        0         4
casper37    free         4        0         4
Updated 12:30 am MST Tue Jan 21 2025
H100 Node   State   # GPUs   # Used   # Avail
casper57    free         4        0         4
casper58    free         4        0         4
Updated 12:30 am MST Tue Jan 21 2025

GLADE file spaces and Campaign Storage

Individual files are automatically removed from the /glade/scratch space if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
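Because the purge is driven by each file's last access time, you can preview which of your files are at risk before the purge runs. The sketch below is a minimal, unofficial example (it assumes a POSIX filesystem that records access times, and the scratch path shown is only a placeholder for your own directory): it walks a directory tree and lists files whose access time is older than 120 days.

```python
#!/usr/bin/env python3
"""Sketch: list files not accessed in more than 120 days,
i.e. candidates for an access-time-based scratch purge."""

import os
import time

PURGE_DAYS = 120
CUTOFF = time.time() - PURGE_DAYS * 24 * 60 * 60


def stale_files(root):
    """Yield paths under `root` whose last access time (st_atime)
    is older than the 120-day cutoff."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < CUTOFF:
                    yield path
            except OSError:
                # File vanished or is unreadable; skip it.
                continue


if __name__ == "__main__":
    import sys
    # Placeholder path; point this at your own scratch directory.
    root = sys.argv[1] if len(sys.argv) > 1 else "/glade/derecho/scratch"
    for path in stale_files(root):
        print(path)
```

Note that merely listing files this way reads metadata, not file contents, so it does not itself reset access times.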

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                  93            150      62%
/glade/u/apps                   3             10      23%
/glade/work                 1,340          4,096      33%
/glade/derecho/scratch     28,166         55,814      51%
/glade/campaign           117,350        125,240      94%
Updated 12:00 am MST Tue Jan 21 2025