System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If the tables indicate that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP          64   1.09 %
derecho2     UP          46   2.65 %
derecho3     UP          41   1.84 %
derecho4     UP          52   3.13 %
derecho5     UP          43   0.44 %
derecho6     UP          49   1.98 %
derecho7     UP          35   0.16 %
derecho8     UP           -   0.01 %
CPU Nodes
  Offline          38   ( 1.5 %)
  Running Jobs   1277   (51.3 %)
  Free           1173   (47.1 %)
GPU Nodes
  Offline           2   ( 2.4 %)
  Running Jobs     21   (25.6 %)
  Free             59   (72.0 %)
Updated 8:50 pm MDT Sun Jun 16 2024
Queue    Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu               176             0         268          917      71
gpu                 9             0          30            9       4
system              0             0           0            0       -
hybrid              0             0           0            0       -
pcpu                3             0           0          365       2
pgpu                5             0           0           14       3
gpudev              0             0           0            0       -
cpudev              1             0           3            1       4
repair              0             0           0            0       -
jhub                0             0           0            0       -
Updated 8:50 pm MDT Sun Jun 16 2024

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP         105   34.9 %
casper-login2   UP           -    0.0 %
HTC Nodes
  606 of 2232 CPUs in use (27.2 %)
  Partially Allocated   25   (40.3 %)
  Offline                1   ( 1.6 %)
  Free                  36   (58.1 %)
Large Memory Nodes
  Partially Allocated    1   (50.0 %)
  Free                   1   (50.0 %)
GP100 Visualization Nodes
  5 of 48 GPU sessions in use (10.4 %)
  Partially Allocated    2   (25.0 %)
  Free                   6   (75.0 %)
V100 GPU Nodes
  14 of 64 GPUs in use (21.9 %)
  Partially Allocated    4   (40.0 %)
  Offline                1   (10.0 %)
  Free                   5   (50.0 %)
A100 GPU Nodes
  4 of 35 GPUs in use (11.4 %)
  Partially Allocated    1   ( 9.1 %)
  Offline                2   (18.2 %)
  Free                   8   (72.7 %)
RDA Nodes
  Partially Allocated    3   (60.0 %)
  Fully Allocated        1   (20.0 %)
  Free                   1   (20.0 %)
JupyterHub Login Nodes
  Partially Allocated    6   (66.7 %)
  Fully Allocated        3   (33.3 %)
Updated 8:45 pm MDT Sun Jun 16 2024
Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                   97            11         101         606      56
vis                    5             9           0          12       4
largemem               1             0           0           2       1
gpgpu                  8             0           1          40       5
rda                   63             0           7          79       5
tdd                    0             0           0           0       -
jhublogin            306             0           0         574     306
system                 0             0           0           0       -
S9227431               0             0           0           0       -
h100                   0             0           0           0       -
Updated 8:45 pm MDT Sun Jun 16 2024

V100 node status

V100 Node   State   # GPUs   # Used   # Avail
casper36    free         4        2         2
casper08    free         8        5         3
casper29    free         4        3         1
casper30    free         8        0         8
casper31    free         8        0         8
casper09    full         4        4         0
casper24    free         8        0         8
casper25    free         4        0         4
casper28    free         8        0         8
casper27    down         8        0         8
Updated 8:45 pm MDT Sun Jun 16 2024
A100 Node   State   # GPUs   # Used   # Avail
casper18    free         1        0         1
casper19    down         1        0         1
casper21    free         1        0         1
casper38    full         4        4         0
casper39    free         4        0         4
casper40    free         4        0         4
casper41    free         4        0         4
casper42    free         4        0         4
casper43    free         4        0         4
casper44    down         4        0         4
casper37    free         4        0         4
Updated 8:45 pm MDT Sun Jun 16 2024
Updated 8:45 pm MDT Sun Jun 16 2024

GLADE file spaces and Campaign Storage

Individual files are removed from the /glade/scratch space automatically if they have not been accessed (that is, modified, read, or copied) in more than 120 days.
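To see which of your files are approaching that 120-day purge window, you can check their last access time yourself. The minimal sketch below walks a directory tree and reports files whose access time (`st_atime`) is older than a cutoff; it only reports candidates (the purge itself is done by the system), and the directory you scan would be your own scratch path.

```python
import os
import time

PURGE_DAYS = 120  # purge window for the scratch space, as stated above

def stale_files(root, max_age_days=PURGE_DAYS, now=None):
    """Return paths under *root* whose last access time is older than
    max_age_days. Uses st_atime, which the purge policy is based on."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # seconds per day
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
            except OSError:
                continue  # file removed or unreadable mid-scan; skip it
    return stale

if __name__ == "__main__":
    # Hypothetical scratch path; substitute your own directory.
    for path in stale_files(os.path.expanduser("~")):
        print(path)
```

Note that reading a file to check on it also updates its access time, so a script like this relies on `os.stat`, which does not touch `st_atime`.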

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                  85            150      57%
/glade/u/apps                   2             10      18%
/glade/work                 1,158          4,096      29%
/glade/derecho/scratch     26,502         55,814      48%
/glade/campaign           108,644        123,593      88%
Updated 8:00 pm MDT Sun Jun 16 2024