System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin as well as in Notifier emails.

Derecho node status and queue activity

Login Node Status Users Load
derecho1 UP 66 0.19%
derecho2 UP 57 1.13%
derecho3 UP 65 1.42%
derecho4 UP 57 0.36%
derecho5 UP 73 5.42%
derecho6 UP 66 4.23%
derecho7 UP 74 1.42%
derecho8 UP 0 0.00%
CPU Nodes
Reserved 303 (12.2%)
Offline 15 (0.6%)
Running Jobs 1827 (73.4%)
Free 343 (13.8%)
GPU Nodes
Offline 4 (4.9%)
Running Jobs 51 (62.2%)
Free 27 (32.9%)
Updated 1:55 am MDT Sat Apr 27 2024
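The percentages in the node-summary blocks are each state's share of the total node count. As a minimal sketch of that derivation, using the Derecho CPU node counts from the table above:

```python
# Derecho CPU node counts taken from the table above.
cpu_nodes = {"Reserved": 303, "Offline": 15, "Running Jobs": 1827, "Free": 343}

def utilization(counts):
    """Return each state's share of all nodes as a percentage, rounded to 0.1."""
    total = sum(counts.values())  # 2488 nodes for the counts above
    return {state: round(100 * n / total, 1) for state, n in counts.items()}

print(utilization(cpu_nodes))
# Running Jobs -> 73.4 (1827 of 2488 nodes), matching the table
```

The same arithmetic reproduces the GPU-node and Casper percentages from their respective counts.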
Queue Jobs Running Jobs Queued Jobs Held Nodes Used Users
cpu 365 0 119 1826 92
gpu 8 0 1 18 6
system 0 0 0 0 -
hybrid 0 0 0 0 -
pcpu 0 0 0 0 -
pgpu 2 0 3 33 2
gpudev 0 0 0 0 -
cpudev 3 0 0 3 2
repair 0 0 0 0 -
jhub 0 0 0 0 -
S4186932 10 0 0 120 1
S4186959 5 0 0 180 1
Updated 1:55 am MDT Sat Apr 27 2024

Casper node status and queue activity

Login Node Status Users Load
casper-login1 UP 76 12.0%
casper-login2 UP 134 30.9%
HTC Nodes
315 of 2268 CPUs in use (13.9%)
Partially Allocated 47 (74.6%)
Offline 1 (1.6%)
Free 15 (23.8%)
Large Memory Nodes
Partially Allocated 1 (50.0%)
Free 1 (50.0%)
GP100 Visualization Nodes
4 of 48 GPU sessions in use (8.3%)
Partially Allocated 1 (12.5%)
Free 7 (87.5%)
V100 GPU Nodes
6 of 64 GPUs in use (9.4%)
Partially Allocated 2 (20.0%)
Down 1 (10.0%)
Fully Allocated 1 (10.0%)
Offline 1 (10.0%)
Free 5 (50.0%)
A100 GPU Nodes
2 of 35 GPUs in use (5.7%)
Partially Allocated 2 (18.2%)
Free 9 (81.8%)
RDA Nodes
Partially Allocated 2 (40.0%)
Free 3 (60.0%)
JupyterHub Login Nodes
Partially Allocated 7 (87.5%)
Fully Allocated 1 (12.5%)
Updated 1:55 am MDT Sat Apr 27 2024
Queue Jobs Running Jobs Queued Jobs Held CPUs Used Users
htc 245 9 160 310 45
vis 4 0 0 4 4
largemem 1 0 0 1 1
gpgpu 6 0 0 13 3
rda 28 0 2 40 4
tdd 0 0 0 0 -
jhublogin 278 0 0 278 278
system 0 0 0 0 -
S9227431 0 0 0 0 -
S194026 35 8 0 35 1
Updated 1:55 am MDT Sat Apr 27 2024

V100 node status

V100 Node State # GPUs # Used # Avail
casper36 job-busy,resv-exclusive 4 0 4
casper08 free 8 5 3
casper29 free 4 1 3
casper30 free 8 0 8
casper31 free 8 0 8
casper09 down 4 0 4
casper24 free 8 0 8
casper25 free 4 0 4
casper28 free 8 0 8
casper27 down 8 0 8
Updated 1:55 am MDT Sat Apr 27 2024
A100 Node State # GPUs # Used # Avail
casper18 full 1 1 0
casper19 free 1 0 1
casper21 full 1 1 0
casper38 free 4 0 4
casper39 free 4 0 4
casper40 free 4 0 4
casper41 free 4 0 4
casper42 free 4 0 4
casper43 free 4 0 4
casper44 free 4 0 4
casper37 free 4 0 4
Updated 1:55 am MDT Sat Apr 27 2024

GLADE file spaces and Campaign Storage

Individual files are automatically removed from the /glade/scratch space if they have not been accessed (for example, modified, read, or copied) in more than 120 days.
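To estimate which of your files are approaching that limit, a small sketch of the check is shown below. It assumes the purge decision is based on a file's access time, which `os.stat` exposes as `st_atime`; the 120-day threshold comes from the policy above.

```python
import os
import time

PURGE_DAYS = 120  # scratch retention window from the policy above

def purge_eligible(path, now=None):
    """Return True if `path` has not been accessed in more than PURGE_DAYS days."""
    now = time.time() if now is None else now
    age_days = (now - os.stat(path).st_atime) / 86400  # seconds per day
    return age_days > PURGE_DAYS
```

Walking a directory tree with `os.walk` and printing the paths for which `purge_eligible` returns True gives a quick list of files at risk. Note that some filesystems defer or suppress atime updates, so treat the result as an estimate rather than a guarantee.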

File Space TiB Used TiB Capacity % Used
/glade/u/home 82 150 55%
/glade/u/apps 2 10 18%
/glade/work 1,088 4,096 27%
/glade/derecho/scratch 24,353 55,814 45%
/glade/campaign 109,204 123,593 89%
Updated 1:00 am MDT Sat Apr 27 2024