System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, along with other specialized nodes and systems. If a table shows that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP          74   0.20 %
derecho2     UP          69   2.85 %
derecho3     UP          89   2.02 %
derecho4     UP          98   2.02 %
derecho5     UP          54   1.13 %
derecho6     UP          68   3.19 %
derecho7     UP           -   0.95 %
derecho8     UP           -   0.00 %
CPU Nodes
  Reserved         2   (  0.1 %)
  Offline         61   (  2.5 %)
  Running Jobs  1408   ( 56.6 %)
  Free          1017   ( 40.9 %)
GPU Nodes
  Offline          1   (  1.2 %)
  Running Jobs    30   ( 36.6 %)
  Free            51   ( 62.2 %)
Updated 1:05 am MST Sat Feb 24 2024
Queue      Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                 188            70         340         1263      72
gpu                   7             0           0           14       3
system                0             0           0            0       -
hybrid                0             0           0            0       -
pcpu                  8             0           7          146       2
pgpu                  2             0           0           16       1
gpudev                0             0           0            0       -
cpudev                8             2           0            8       1
repair                0             0           0            0       -
jhub                  0             0           0            0       -
S2855391              0             0           0            0       -
R3011531              0             0           0            0       -
M3198918              0             0           0            0       -
Updated 1:05 am MST Sat Feb 24 2024

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP          96   18.5 %
casper-login2   UP         120   16.0 %
HTC Nodes
  312 of 2340 CPUs in use ( 13.3 %)
  Partially Allocated  27   ( 41.5 %)
  Fully Allocated       3   (  4.6 %)
  Offline               2   (  3.1 %)
  Free                 33   ( 50.8 %)
Large Memory Nodes
  Partially Allocated   1   ( 50.0 %)
  Free                  1   ( 50.0 %)
GP100 Visualization Nodes
  3 of 48 GPU sessions in use ( 6.2 %)
  Partially Allocated   1   ( 12.5 %)
  Free                  7   ( 87.5 %)
V100 GPU Nodes
  16 of 64 GPUs in use ( 25.0 %)
  Partially Allocated   4   ( 40.0 %)
  Free                  6   ( 60.0 %)
A100 GPU Nodes
  0 of 35 GPUs in use ( 0.0 %)
  Free                 11   (100.0 %)
RDA Nodes
  Partially Allocated   2   ( 50.0 %)
  Fully Allocated       2   ( 50.0 %)
JupyterHub Login Nodes
  Partially Allocated   3   ( 42.9 %)
  Fully Allocated       4   ( 57.1 %)
Updated 1:05 am MST Sat Feb 24 2024
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                   86             0          22         312      40
vis                    3             0           0          18       3
largemem               1             0           0           1       1
gpgpu                  8             0           0          35       7
rda                   50             0           0          86       3
tdd                    0             0           0           0       -
jhublogin            240             0           0         240     240
system                 0             0           0           0       -
S9227431               0             0           0           0       -
Updated 1:05 am MST Sat Feb 24 2024

V100 node status

V100 Node   State   # GPUs   # Used   # Avail
casper36    full         4        4         0
casper08    full         8        8         0
casper29    free         4        1         3
casper30    free         8        0         8
casper31    free         8        0         8
casper09    free         4        3         1
casper24    free         8        0         8
casper25    free         4        0         4
casper28    free         8        0         8
casper27    free         8        0         8
Updated 1:05 am MST Sat Feb 24 2024
A100 Node   State   # GPUs   # Used   # Avail
casper18    free         1        0         1
casper19    free         1        0         1
casper21    free         1        0         1
casper38    free         4        0         4
casper39    free         4        0         4
casper40    free         4        0         4
casper41    free         4        0         4
casper42    free         4        0         4
casper43    free         4        0         4
casper44    free         4        0         4
casper37    free         4        0         4
Updated 1:05 am MST Sat Feb 24 2024

GLADE file spaces and Campaign Storage

Individual files are removed automatically from the /glade/scratch space if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
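As an unofficial illustration of that purge criterion, GNU find's `-atime +120` test matches files whose access time is more than 120 days old. The sketch below runs in a throwaway temporary directory with made-up filenames; on the real system you would point find at your own scratch directory instead. It assumes GNU coreutils/findutils (for `touch -d` with a relative date).

```shell
# Sketch: preview which files would meet the 120-day purge criterion.
# Demo in a temporary directory; paths and filenames are illustrative.
tmp=$(mktemp -d)
touch "$tmp/recent.nc"                      # accessed just now
touch -a -d "130 days ago" "$tmp/old.nc"    # backdate only the access time
find "$tmp" -type f -atime +120             # lists only old.nc
rm -rf "$tmp"
```

Because `-atime` compares access time (not modification time), a file you merely read recently is safe, which matches the policy's definition of "accessed" above.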

File Space                TiB Used   TiB Capacity   % Used
/glade/u/home                   79            150      53%
/glade/u/apps                    3             10      22%
/glade/work                  1,032          4,096      26%
/glade/derecho/scratch      19,612         55,814      36%
/glade/cheyenne/scratch          1              1     100%
/glade/campaign            103,939        123,593      85%
GLADE                       18,818         28,598      66%
Updated 1:00 am MST Sat Feb 24 2024