System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If the tables indicate that a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin as well as in Notifier emails.

Derecho node status and queue activity

Login Node   Status   Users   Load
derecho1     UP       81      2.73 %
derecho2     UP       83      9.27 %
derecho3     UP       73      5.86 %
derecho4     UP       92      7.55 %
derecho5     UP       84      5.02 %
derecho6     UP       79      4.54 %
derecho7     UP       95      8.97 %
derecho8     UP       0       0.01 %
CPU Nodes
  Reserved: 3 (0.1 %)
  Offline: 20 (0.8 %)
  Running Jobs: 2423 (97.4 %)
  Free: 42 (1.7 %)
GPU Nodes
  Offline: 3 (3.7 %)
  Running Jobs: 37 (45.1 %)
  Free: 42 (51.2 %)
Updated 10:35 am MDT Wed May 14 2025
Queue      Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu        743            975           716         2424         176
gpu        10             0             21          37           8
system     0              0             0           0            -
hybrid     0              0             0           0            -
pcpu       0              16            0           0            1
pgpu       0              0             0           0            -
gpudev     0              0             0           0            -
cpudev     6              0             25          7            18
repair     0              0             0           0            -
jhub       0              0             0           0            -
R9234219   1              0             0           3            1
S9239109   0              0             0           0            -
S9239112   0              0             0           0            -
Updated 10:35 am MDT Wed May 14 2025
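
A per-queue summary like the one above can be approximated directly from the scheduler. The following is a minimal sketch, assuming a PBS Pro qstat that supports JSON output (qstat -f -F json); the queue and job_state attributes it reads are standard PBS job fields, though what your account can see depends on site configuration.

    # Minimal sketch: tally running/queued/held jobs per queue from PBS qstat.
    # Assumes a qstat that supports JSON output (`qstat -f -F json`).
    import json
    import subprocess
    from collections import defaultdict

    def queue_summary():
        """Return {queue: {"R": n, "Q": n, "H": n}} from the scheduler."""
        out = subprocess.run(["qstat", "-f", "-F", "json"], capture_output=True,
                             text=True, check=True).stdout
        jobs = json.loads(out).get("Jobs", {})
        counts = defaultdict(lambda: {"R": 0, "Q": 0, "H": 0})
        for attrs in jobs.values():
            state = attrs.get("job_state")
            if state in ("R", "Q", "H"):
                counts[attrs.get("queue", "?")][state] += 1
        return counts

    if __name__ == "__main__":
        for queue, c in sorted(queue_summary().items()):
            print(f"{queue:12s} running={c['R']:5d} queued={c['Q']:5d} held={c['H']:5d}")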

Casper node status and queue activity

Login Node      Status   Users   Load
casper-login1   UP       145     59.3 %
casper-login2   UP       -       0.0 %
HTC Nodes
  1245 of 6202 CPUs in use (20.1 %)
  Partially Allocated: 43 (34.1 %)
  Reserved: 7 (5.6 %)
  Fully Allocated: 1 (0.8 %)
  Offline: 2 (1.6 %)
  Free: 73 (57.9 %)
Large Memory Nodes
  Partially Allocated: 3 (37.5 %)
  Reserved: 3 (37.5 %)
  Free: 2 (25.0 %)
GP100 Visualization Nodes
  7 of 48 GPU sessions in use (14.6 %)
  Partially Allocated: 2 (25.0 %)
  Free: 6 (75.0 %)
L40 Visualization Nodes
  0 of 42 GPU sessions in use (0.0 %)
  Free: 6 (100.0 %)
V100 GPU Nodes
  5 of 52 GPUs in use (9.6 %)
  Partially Allocated: 2 (25.0 %)
  Reserved: 2 (25.0 %)
  Fully Allocated: 1 (12.5 %)
  Free: 3 (37.5 %)
A100 GPU Nodes
  6 of 34 GPUs in use (17.6 %)
  Partially Allocated: 2 (20.0 %)
  Reserved: 4 (40.0 %)
  Offline: 2 (20.0 %)
  Free: 2 (20.0 %)
H100 GPU Nodes
  0 of 8 GPUs in use (0.0 %)
  Reserved: 1 (50.0 %)
  Free: 1 (50.0 %)
RDA Nodes
  Partially Allocated: 3 (60.0 %)
  Fully Allocated: 1 (20.0 %)
  Free: 1 (20.0 %)
JupyterHub Login Nodes
  Partially Allocated: 1 (11.1 %)
  Fully Allocated: 8 (88.9 %)
Updated 10:35 am MDT Wed May 14 2025
Queue      Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc        698            35            9           993         126
vis        7              0             0           20          6
largemem   3              0             0           15          3
gpgpu      0              3             0           0           1
rda        17             0             0           79          6
tdd        1              0             0           8           1
jhublogin  319            0             0           587         319
system     0              0             0           0           -
h100       0              0             0           0           -
l40        0              0             0           0           -
S3011784   0              0             0           0           -
S4456256   0              0             0           0           -
S4989040   11             0             0           77          11
R4991181   0              0             0           0           -
Updated 10:35 am MDT Wed May 14 2025

V100 node status

V100 Node   State                      # GPUs   # Used   # Avail
casper36    job-busy,resv-exclusive     4        0        4
casper08    resv-exclusive              8        0        8
casper29    resv-exclusive              4        0        4
casper30    free                        8        1        7
casper31    free                        8        0        8
casper24    free                        8        0        8
casper25    full                        4        4        0
casper28    free                        8        0        8
Updated 10:35 am MDT Wed May 14 2025
A100 Node   State             # GPUs   # Used   # Avail
casper18    free              1        0        1
casper21    free              1        0        1
casper38    free              4        3        1
casper39    offline           4        0        4
casper40    down              4        0        4
casper41    free              4        3        1
casper42    resv-exclusive    4        0        4
casper43    resv-exclusive    4        0        4
casper44    resv-exclusive    4        0        4
casper37    resv-exclusive    4        0        4
Updated 10:35 am MDT Wed May 14 2025
H100 Node   State             # GPUs   # Used   # Avail
casper57    free              4        0        4
casper58    resv-exclusive    4        0        4
Updated 10:35 am MDT Wed May 14 2025
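
The per-node GPU tables above pair a node's scheduler state with its GPU counts. The sketch below shows one rough way to assemble the same columns by parsing pbsnodes -av output; it assumes GPUs are tracked through an ngpus resource (resources_available.ngpus and resources_assigned.ngpus), which is a common but site-specific convention.

    # Minimal sketch: rebuild "Node / State / # GPUs / # Used / # Avail" rows
    # from `pbsnodes -av` text output (PBS Pro). The ngpus resource names are
    # a common site convention, not a guarantee.
    import subprocess

    def parse_pbsnodes(text):
        """Return {node_name: {attribute: value}} from `pbsnodes -av` output."""
        nodes, current = {}, None
        for line in text.splitlines():
            if not line.strip():
                continue
            if not line.startswith((" ", "\t")):   # unindented line = node name
                current = line.strip()
                nodes[current] = {}
            elif current is not None and "=" in line:
                key, _, value = line.partition("=")
                nodes[current][key.strip()] = value.strip()
        return nodes

    if __name__ == "__main__":
        out = subprocess.run(["pbsnodes", "-av"], capture_output=True,
                             text=True, check=True).stdout
        for name, attrs in parse_pbsnodes(out).items():
            total = int(attrs.get("resources_available.ngpus", 0))
            used = int(attrs.get("resources_assigned.ngpus", 0))
            if total:   # show only nodes that advertise GPUs
                state = attrs.get("state", "?")
                print(f"{name:12s} {state:26s} {total:3d} {used:3d} {total - used:3d}")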

GLADE file spaces and Campaign Storage

Individual files are removed automatically from the /glade/scratch space if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
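
To see which of your own files fall outside that window, compare each file's last-access time against a 120-day cutoff. The sketch below is a minimal standard-library example; the scratch path is an assumed per-user layout, and access times are only as accurate as the filesystem's atime updates.

    # Minimal sketch: list files whose last access time is older than the
    # 120-day scratch purge window described above.
    import os
    import time
    from pathlib import Path

    PURGE_DAYS = 120
    CUTOFF = time.time() - PURGE_DAYS * 86400

    def files_at_risk(root: Path):
        """Yield (path, age_in_days) for files not accessed within the window."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = Path(dirpath) / name
                try:
                    st = path.stat()
                except OSError:
                    continue  # vanished or unreadable; skip
                if st.st_atime < CUTOFF:
                    yield path, (time.time() - st.st_atime) / 86400

    if __name__ == "__main__":
        # Assumed per-user layout; adjust to your own scratch directory.
        scratch = Path(os.path.expandvars("/glade/derecho/scratch/$USER"))
        for path, age in files_at_risk(scratch):
            print(f"{age:6.0f} days since last access  {path}")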

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home            97         150            65%
/glade/u/apps            2          10             18%
/glade/work              1,411      4,096          35%
/glade/derecho/scratch   25,372     55,814         46%
/glade/campaign          116,342    125,240        93%
Updated 10:00 am MDT Wed May 14 2025
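
The % Used column is simply used space divided by capacity (for /glade/campaign, 116,342 / 125,240 ≈ 93 %). A minimal standard-library sketch of the same calculation follows; note that shutil.disk_usage reports whatever totals the operating system exposes for each mount point, which may differ from the per-space quotas behind the table above.

    # Minimal sketch: report used space, capacity, and percent used for the
    # GLADE file spaces listed in the table above.
    import shutil

    TIB = 1024 ** 4
    FILE_SPACES = [
        "/glade/u/home",
        "/glade/u/apps",
        "/glade/work",
        "/glade/derecho/scratch",
        "/glade/campaign",
    ]

    print(f"{'File Space':<24} {'TiB Used':>10} {'TiB Capacity':>14} {'% Used':>8}")
    for space in FILE_SPACES:
        try:
            usage = shutil.disk_usage(space)   # (total, used, free) in bytes
        except OSError:
            print(f"{space:<24} unavailable")
            continue
        pct = 100 * usage.used / usage.total if usage.total else 0.0
        print(f"{space:<24} {usage.used / TIB:>10,.0f} {usage.total / TIB:>14,.0f} "
              f"{pct:>7.1f}%")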