System Status

The tables below are updated every few minutes to show the current status of our supercomputing, data analysis, and visualization resources, as well as other specialized nodes and systems. If the tables indicate a system is unavailable, check your email for timely updates from our Notifier service. Scheduled maintenance downtimes are announced in advance in the Daily Bulletin as well as in Notifier emails.

Derecho node status and queue activity

Last updated: 11:00 am MDT Thu Apr 9 2026

Login Node | Status | Users | Load
derecho1 | UP | 91 | 4.62 %
derecho2 | UP | 72 | 9.39 %
derecho3 | UP | 62 | 3.25 %
derecho4 | UP | 72 | 5.23 %
derecho5 | UP | 80 | 13.43 %
derecho6 | UP | 55 | 3.59 %
derecho7 | UP | 91 | 7.02 %
derecho8 | UP | 0 | 0.00 %
CPU Nodes
  Running Jobs: 2478 (99.6 %)
  Free: 10 (0.4 %)
GPU Nodes
  Running Jobs: 70 (85.4 %)
  Free: 12 (14.6 %)

Last updated: 11:00 am MDT Thu Apr 9 2026

Queue | Jobs Running | Jobs Queued | Jobs Held | Nodes Used | Users
cpu | 771 | 360 | 1307 | 2475 | -
gpu | 32 | 2 | 27 | 65 | -
system | 0 | 0 | 0 | 0 | -
hybrid | 0 | 0 | 0 | 0 | -
pcpu | 0 | 0 | 0 | 0 | -
pgpu | 0 | 0 | 0 | 0 | -
gpudev | 4 | 0 | 0 | 6 | -
cpudev | 16 | 3 | 27 | 19 | -
repair | 0 | 0 | 0 | 0 | -

Casper node status and queue activity

Last updated: 11:00 am MDT Thu Apr 9 2026

Login Node | Status | Users | Load
crlogin1 | UP | 88 | 27.7 %
crlogin2 | UP | 103 | 12.4 %
crlogin3 | UP | 116 | 32.1 %
HTC Nodes
  1528 of 5812 CPUs in use (26.3 %)
  Partially Allocated: 69 (58.5 %)
  Fully Allocated: 3 (2.5 %)
  Offline: 2 (1.7 %)
  Free: 44 (37.3 %)
Large Memory Nodes
  Partially Allocated: 6 (75.0 %)
  Free: 2 (25.0 %)
GP100 Visualization Nodes
  1 of 54 GPU sessions in use (1.9 %)
  Partially Allocated: 1 (11.1 %)
  Free: 8 (88.9 %)
L40 Visualization Nodes
  3 of 42 GPU sessions in use (7.1 %)
  Partially Allocated: 1 (16.7 %)
  Free: 5 (83.3 %)
V100 GPU Nodes
  10 of 52 GPUs in use (19.2 %)
  Partially Allocated: 4 (50.0 %)
  Free: 4 (50.0 %)
A100 GPU Nodes
  24 of 32 GPUs in use (75.0 %)
  Partially Allocated: 8 (100.0 %)
H100 GPU Nodes
  6 of 8 GPUs in use (75.0 %)
  Partially Allocated: 1 (50.0 %)
  Fully Allocated: 1 (50.0 %)
RDA Nodes
  Partially Allocated: 4 (50.0 %)
  Fully Allocated: 1 (12.5 %)
  Free: 3 (37.5 %)
JupyterHub Login Nodes
  Partially Allocated: 3 (17.6 %)
  Fully Allocated: 14 (82.4 %)

Last updated: 11:00 am MDT Thu Apr 9 2026

Queue | Jobs Running | Jobs Queued | Jobs Held | CPUs Used | Users
htc | 850 | 8 | 27 | 1536 | -
vis | 4 | 0 | 0 | 11 | -
largemem | 6 | 0 | 0 | 21 | -
jhublogin | 286 | 1 | 0 | 572 | -
system | 0 | 0 | 0 | 0 | -
gdex | 33 | 1 | 0 | 106 | -
nvgpu | 28 | 16 | 0 | 265 | -
amdgpu | 0 | 0 | 0 | 0 | -

V100 node status

Last updated: 11:00 am MDT Thu Apr 9 2026

V100 Node | State | # GPUs | # Used | # Avail
casper30 | free | 8 | 1 | 7
casper31 | free | 8 | 0 | 8
casper24 | free | 8 | 0 | 8
casper28 | free | 8 | 0 | 8
casper36 | free | 4 | 1 | 3
casper08 | free | 8 | 0 | 8
casper29 | full | 4 | 4 | 0
casper25 | full | 4 | 4 | 0

A100 Node | State | # GPUs | # Used | # Avail
casper38 | free | 4 | 1 | 3
casper42 | full | 4 | 4 | 0
casper37 | free | 4 | 2 | 2
casper39 | full | 4 | 4 | 0
casper40 | full | 4 | 4 | 0
casper43 | full | 4 | 4 | 0
casper41 | full | 4 | 4 | 0
casper44 | free | 4 | 1 | 3

H100 Node | State | # GPUs | # Used | # Avail
casper57 | full | 4 | 4 | 0
casper58 | free | 4 | 2 | 2

GLADE file spaces and Campaign Storage

Individual files are removed from the /glade/scratch space automatically if they have not been accessed (modified, read, or copied, for example) in more than 120 days.
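The purge criterion above can be approximated with `find` and its `-atime` test, which matches files by days since last access. A minimal sketch for spotting files approaching the purge window — the scratch path and the 100-day warning threshold here are illustrative assumptions, not an official tool:

```shell
#!/bin/sh
# Sketch: list regular files in a scratch directory whose last access
# (read, copy, or modification) was more than 100 days ago, i.e. files
# nearing the 120-day purge threshold described above.
# SCRATCH_DIR is an assumed default; point it at your own scratch space.
SCRATCH_DIR="${SCRATCH_DIR:-/glade/derecho/scratch/$USER}"

if [ -d "$SCRATCH_DIR" ]; then
    # -atime +100 matches files last accessed more than 100 days ago.
    find "$SCRATCH_DIR" -type f -atime +100 -ls
else
    echo "Directory not found: $SCRATCH_DIR" >&2
fi
```

Note that `-atime` is only an approximation of the purge policy: file systems mounted with relaxed access-time updates may report older access times than the purge scanner actually sees.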

Last updated: 11:00 am MDT Thu Apr 9 2026

File Space | TiB Used | TiB Capacity | % Used
/glade/u/home | 108 | 150 | 72%
/glade/u/apps | 3 | 10 | 27%
/glade/work | 1,588 | 4,096 | 39%
/glade/derecho/scratch | 32,877 | 55,814 | 60%
/glade/campaign | 120,214 | 135,030 | 90%