System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If a table shows a system as unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin and in Notifier emails.

Derecho node status and queue activity

Last updated: 12:50 am MDT Thu Apr 30 2026

Login Node   Status   Users   Load
derecho1     UP          67   6.51 %
derecho2     UP          76   5.87 %
derecho3     UP          56   5.12 %
derecho4     UP          61   1.96 %
derecho5     UP          69   7.35 %
derecho6     UP          57   1.20 %
derecho7     UP          36   5.51 %
derecho8     UP           -   1.12 %
CPU Nodes
Offline 5 ( 0.2 %)
Running Jobs 2112 ( 84.9 %)
Free 371 ( 14.9 %)
GPU Nodes
Running Jobs 81 ( 98.8 %)
Free 1 ( 1.2 %)
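The utilization percentages in the node tables above are each category's share of the section's total node count. A minimal sketch of that arithmetic, with the counts copied from the CPU and GPU node tables (the helper name is ours, not part of the status service):

```python
# Node counts as shown in the Derecho status tables above.
cpu_nodes = {"Offline": 5, "Running Jobs": 2112, "Free": 371}
gpu_nodes = {"Running Jobs": 81, "Free": 1}

def percentages(counts):
    """Return each category's share of the total, rounded to one decimal place."""
    total = sum(counts.values())
    return {name: round(100 * n / total, 1) for name, n in counts.items()}

print(percentages(cpu_nodes))  # {'Offline': 0.2, 'Running Jobs': 84.9, 'Free': 14.9}
print(percentages(gpu_nodes))  # {'Running Jobs': 98.8, 'Free': 1.2}
```

The recomputed shares match the displayed values, so each percentage is taken over that section's total (offline plus busy plus free), not over the machine as a whole.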

Last updated: 12:50 am MDT Thu Apr 30 2026

Queue    Jobs Running   Jobs Queued   Jobs Held   Nodes Used   Users
cpu               127          2001         809         2100       -
gpu                44            16          14           79       -
system              0             0           0            0       -
hybrid              0             0           0            0       -
pcpu                0             2           0            0       -
pgpu                2             0           0            2       -
gpudev              1             0           0            1       -
cpudev             36             0          77           48       -
repair              0             0           0            0       -

Casper node status and queue activity

Last updated: 12:50 am MDT Thu Apr 30 2026

Login Node   Status   Users   Load
crlogin1     UP          61   14.2 %
crlogin2     UP          59   18.2 %
crlogin3     UP          56   21.7 %
HTC Nodes
844 of 5890 CPUs in use ( 14.3 %)
Partially Allocated 66 ( 55.9 %)
Free 52 ( 44.1 %)
Large Memory Nodes
Partially Allocated 1 ( 12.5 %)
Free 7 ( 87.5 %)
GP100 Visualization Nodes
3 of 54 GPU sessions in use ( 5.6 %)
Partially Allocated 3 ( 33.3 %)
Free 6 ( 66.7 %)
L40 Visualization Nodes
3 of 42 GPU sessions in use ( 7.1 %)
Partially Allocated 2 ( 33.3 %)
Free 4 ( 66.7 %)
V100 GPU Nodes
28 of 52 GPUs in use ( 53.8 %)
Partially Allocated 4 ( 50.0 %)
Fully Allocated 2 ( 25.0 %)
Free 2 ( 25.0 %)
A100 GPU Nodes
24 of 32 GPUs in use ( 75.0 %)
Partially Allocated 6 ( 75.0 %)
Offline 2 ( 25.0 %)
H100 GPU Nodes
5 of 8 GPUs in use ( 62.5 %)
Partially Allocated 2 (100.0 %)
RDA Nodes
Partially Allocated 8 (100.0 %)
JupyterHub Login Nodes
Partially Allocated 10 ( 58.8 %)
Fully Allocated 7 ( 41.2 %)

Last updated: 12:50 am MDT Thu Apr 30 2026

Queue       Jobs Running   Jobs Queued   Jobs Held   CPUs Used   Users
htc                  320            17           3         916       -
vis                    6             0           0          34       -
largemem               1             0           0          21       -
jhublogin            270             2           0         540       -
system                 0             0           0           0       -
gdex                  64             0           0         172       -
nvgpu                 33            20           0         247       -
amdgpu                 0             0           0           0       -

V100 node status

Last updated: 12:50 am MDT Thu Apr 30 2026

V100 Node   State     # GPUs   # Used   # Avail
casper30    free           8        4         4
casper31    full           8        8         0
casper24    full           8        8         0
casper28    free           8        5         3
casper36    free           4        2         2
casper08    free           8        0         8
casper29    free           4        1         3
casper25    free           4        0         4
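The per-node table above is consistent with the "28 of 52 GPUs in use" summary in the Casper section: available GPUs are total minus used on each node, and the summary sums the per-node counts. A small cross-check, with the rows copied from the V100 table:

```python
# (gpus, used, avail) per node, copied from the V100 status table above.
v100_nodes = {
    "casper30": (8, 4, 4), "casper31": (8, 8, 0), "casper24": (8, 8, 0),
    "casper28": (8, 5, 3), "casper36": (4, 2, 2), "casper08": (8, 0, 8),
    "casper29": (4, 1, 3), "casper25": (4, 0, 4),
}

total = sum(g for g, _, _ in v100_nodes.values())
used = sum(u for _, u, _ in v100_nodes.values())

# Every row should satisfy avail = gpus - used.
assert all(a == g - u for g, u, a in v100_nodes.values())

print(f"{used} of {total} GPUs in use ({100 * used / total:.1f} %)")
# → 28 of 52 GPUs in use (53.8 %)
```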
A100 Node   State     # GPUs   # Used   # Avail
casper38    full           4        4         0
casper42    full           4        4         0
casper37    offline        4        0         4
casper39    offline        4        0         4
casper40    full           4        4         0
casper43    full           4        4         0
casper41    full           4        4         0
casper44    full           4        4         0
H100 Node   State   # GPUs   # Used   # Avail
casper57    free         4        1         3
casper58    full         4        4         0

GLADE file spaces and Campaign Storage

Individual files are automatically removed from the /glade/scratch space if they have not been accessed (that is, read, copied, or modified) in more than 120 days.
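The purge criterion above can be sketched as a simple check against a file's last-access timestamp. This is an illustration of the 120-day rule as described, not the actual purge tooling; the function name is ours:

```python
import os
import time

# Threshold from the policy text above: files unaccessed for more
# than 120 days are purge candidates.
PURGE_AFTER_DAYS = 120

def is_purge_candidate(path, now=None):
    """True if the file's last-access time (atime) is more than 120 days old."""
    now = time.time() if now is None else now
    age_days = (now - os.stat(path).st_atime) / 86400  # seconds per day
    return age_days > PURGE_AFTER_DAYS
```

Note that reading or copying a file updates its access time, so any of those operations restarts the 120-day clock.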

Last updated: 12:00 am MDT Thu Apr 30 2026

File Space               TiB Used   TiB Capacity   % Used
/glade/u/home                 110            150      74%
/glade/u/apps                   3             10      27%
/glade/work                 1,600          4,096      40%
/glade/derecho/scratch     33,303         55,814      61%
/glade/campaign           119,968        135,030      89%