System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table indicates that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP         106   10.93 %
derecho2     UP         113    5.12 %
derecho3     UP          91   10.69 %
derecho4     UP         123   13.65 %
derecho5     UP         124    6.35 %
derecho6     UP         112    7.69 %
derecho7     UP         106    9.68 %
derecho8     UP           0    0.04 %
CPU Nodes
Offline 5 ( 0.2 %)
Running Jobs 2466 ( 99.1 %)
Free 17 ( 0.7 %)
GPU Nodes
Offline 4 ( 4.9 %)
Running Jobs 77 ( 93.9 %)
Free 1 ( 1.2 %)
Updated 1:25 pm MDT Tue Oct 22 2024
Queue     Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu                384          1215         382         2472     162
gpu                 21            30          42           77      11
system               0             0           0            0       -
hybrid               0             0           0            0       -
pcpu                 0             0           0            0       -
pgpu                 0             0           0            0       -
gpudev               0             0           0            0       -
cpudev              13             7          19           13      16
repair               0             0           0            0       -
jhub                 0             0           0            0       -
Updated 1:25 pm MDT Tue Oct 22 2024
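
The queue summaries on this page refresh only every few minutes. For a point-in-time view from a Derecho or Casper login node, the PBS Pro command qstat -Q reports similar per-queue counts. The short Python sketch below simply wraps that command and prints its output; it assumes the PBS client tools are on your path, as they normally are wherever batch jobs can be submitted.

    import subprocess

    # Print the PBS per-queue summary (queued, running, and held job counts).
    # Assumes the PBS Professional `qstat` client is available on this node.
    result = subprocess.run(["qstat", "-Q"], capture_output=True, text=True, check=True)
    print(result.stdout)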

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP         142   32.7 %
casper-login2   UP           -    0.0 %
HTC Nodes
681 of 2232 CPUs in use ( 30.5 %)
Partially Allocated 41 ( 66.1 %)
Fully Allocated 2 ( 3.2 %)
Offline 1 ( 1.6 %)
Free 18 ( 29.0 %)
Large Memory Nodes
Partially Allocated 3 ( 37.5 %)
Free 5 ( 62.5 %)
GP100 Visualization Nodes
6 of 48 GPU sessions in use ( 12.5 %)
Partially Allocated 2 ( 25.0 %)
Free 6 ( 75.0 %)
L40 Visualization Nodes
0 of 36 GPU sessions in use ( 0.0 %)
Free 6 (100.0 %)
V100 GPU Nodes
22 of 56 GPUs in use ( 39.3 %)
Partially Allocated 5 ( 55.6 %)
Offline 1 ( 11.1 %)
Free 3 ( 33.3 %)
A100 GPU Nodes
12 of 35 GPUs in use ( 34.3 %)
Partially Allocated 3 ( 27.3 %)
Fully Allocated 2 ( 18.2 %)
Offline 1 ( 9.1 %)
Free 5 ( 45.5 %)
H100 GPU Nodes
4 of 8 GPUs in use ( 50.0 %)
Partially Allocated 1 ( 50.0 %)
Offline 1 ( 50.0 %)
RDA Nodes
Partially Allocated 3 ( 60.0 %)
Free 2 ( 40.0 %)
JupyterHub Login Nodes
Partially Allocated 4 ( 44.4 %)
Fully Allocated 4 ( 44.4 %)
Offline 1 ( 11.1 %)
Updated 1:25 pm MDT Tue Oct 22 2024
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  297             0          56         537     115
vis                    6             0           0          21       6
largemem               4             0           0           5       4
gpgpu                  6             0           0          18       4
rda                   12             0           0          55       3
tdd                    1             0           0           8       1
jhublogin            305             0           0         573     305
system                 0             0           0           0       -
h100                   1             0           0           4       1
l40                    0             0           0           0       -
S2712702               0             0           0           0       -
Updated 1:25 pm MDT Tue Oct 22 2024

V100 node status

V100 Node   State   # GPUs   # Used   # Avail
casper36    full         4        4         0
casper08    full         8        8         0
casper29    full         4        4         0
casper30    free         8        2         6
casper31    free         8        0         8
casper09    down         4        0         4
casper24    free         8        0         8
casper25    full         4        4         0
casper28    free         8        0         8
Updated 1:25 pm MDT Tue Oct 22 2024
A100 Node   State           # GPUs   # Used   # Avail
casper18    full                 1        1         0
casper19    offline              1        0         1
casper21    free                 1        0         1
casper38    free                 4        2         2
casper39    full                 4        4         0
casper40    full                 4        4         0
casper41    job-exclusive        4        1         3
casper42    free                 4        0         4
casper43    free                 4        0         4
casper44    free                 4        0         4
casper37    free                 4        0         4
Updated 1:25 pm MDT Tue Oct 22 2024
H100 Node   State   # GPUs   # Used   # Avail
casper57    down         4        0         4
casper58    full         4        4         0
Updated 1:25 pm MDT Tue Oct 22 2024

GLADE file spaces and Campaign Storage

Individual files are removed automatically from the /glade/derecho/scratch space if they have not been accessed (for example, modified, read, or copied) in more than 120 days.
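
If you want to see which of your files are approaching that window, a short Python sketch like the one below walks a directory tree and reports files whose last access time is older than a chosen number of days. The starting path is illustrative (substitute your own scratch directory), and the script only reports files; the purge itself is performed by the system.

    import os
    import time
    from pathlib import Path

    DAYS = 120                                  # purge window described above
    cutoff = time.time() - DAYS * 86400

    # Illustrative starting point; substitute your own scratch directory.
    scratch = Path(os.path.expandvars("/glade/derecho/scratch/$USER"))

    # Walk the tree and report files whose last access time predates the cutoff.
    for path in scratch.rglob("*"):
        try:
            st = path.stat()
        except OSError:
            continue                            # unreadable or vanished; skip it
        if path.is_file() and st.st_atime < cutoff:
            age_days = int((time.time() - st.st_atime) // 86400)
            print(f"{path}  last accessed {age_days} days ago")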

File Space               Used (TiB)   Capacity (TiB)   Used (%)
/glade/u/home                    91              150        61%
/glade/u/apps                     2               10        19%
/glade/work                   1,269            4,096        31%
/glade/derecho/scratch       27,534           55,814        50%
/glade/campaign             113,080          125,240        91%
Updated 1:00 pm MDT Tue Oct 22 2024
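
The Used (%) column is simply used space divided by capacity. For a quick check of a mounted file space from a login node, Python's shutil.disk_usage reports the same kind of figures; note that it reflects the underlying file system as a whole, which may differ from the per-space quotas shown above. The path below is illustrative.

    import shutil

    # Report used and total space, in TiB, plus percent used for one file space.
    # The path is illustrative; substitute any GLADE space you can access.
    usage = shutil.disk_usage("/glade/work")
    tib = 1024 ** 4
    print(f"{usage.used / tib:,.0f} TiB used of {usage.total / tib:,.0f} TiB "
          f"({100 * usage.used / usage.total:.0f}% used)")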