The Daily Bulletin delivers news about CISL’s high-performance computing environment, training opportunities, scheduled maintenance, and other events of interest to the user community. To get the Daily Bulletin by email, use this link to subscribe.
None planned for Cheyenne, Casper, GLADE, Campaign Storage, Quasar, Stratus, or JupyterHub.
HPC user documentation on NCAR’s Advanced Research Computing (ARC) portal is temporarily accessible only from UCAR networks, including the UCAR VPN for offsite access. This also applies to the related knowledge base and to wiki.ucar.edu. Due to an undisclosed security vulnerability with Atlassian's Confluence systems, the vendor recommended restricting access until the issue can be addressed. W...
New software releases have been installed on Cheyenne and Casper, including the following compilers and tools, listed with their associated modules:
• Arm Forge – arm-forge/22.0.2
• GCC – gnu/12.1.0
• NVIDIA HPC SDK – nvhpc/22.05
• Open MPI – openmpi/4.1.4
Standard libraries such as netCDF, pnetCDF, FFTW, and PIO have been compiled with the new compiler and MPI versions. For GPU users, the new nvhpc mo...
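As a sketch only, loading the new GCC and Open MPI releases in a Cheyenne or Casper session might look like the following; the module names come from the announcement above, but the load order, the `module purge` step, and the `ncarenv` prerequisite are assumptions about a typical environment-modules setup:

```
# Hedged sketch: swap in the newly installed compiler and MPI modules.
# gnu/12.1.0 and openmpi/4.1.4 are from the announcement; the purge step
# and the ncarenv prerequisite are assumptions about the site setup.
module purge
module load ncarenv          # site environment module (assumed prerequisite)
module load gnu/12.1.0       # new GCC release
module load openmpi/4.1.4    # new Open MPI built with it
module list                  # confirm the loaded set
```

Library modules such as netCDF built against these versions should then resolve automatically when loaded afterward.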
Update: PBS was returned to service before 9:30 a.m. MDT. The PBS scheduling system for submitting Cheyenne jobs halted at approximately 7 a.m. MDT today. System engineers are troubleshooting the issue. As of 9 a.m. MDT, no estimate is available for when the system might be returned to service. Updates will be sent via our Notifier system when more information is available and when system fun...
CISL recommends running small jobs that use only CPUs on the Casper cluster’s high-throughput computing (HTC) nodes. Casper has 64 HTC nodes specifically for running small batch jobs. It also has:
• More shared resources than the Cheyenne share queue.
• A far higher concurrent-use limit for CPU cores than Cheyenne: 468 vs. 36.
• More available memory, plus NVMe swap for overflow.
See Starting ...
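For illustration, a minimal PBS batch script for a small CPU-only job on Casper's HTC nodes might look like this; the queue name, project code, and resource amounts are assumptions, not values from the bulletin, so consult the ARC documentation for current settings:

```
#!/bin/bash
#PBS -N htc_example                 # job name
#PBS -A PROJECT0001                 # placeholder project code (assumption)
#PBS -q casper                      # Casper submission queue (assumed name)
#PBS -l select=1:ncpus=4:mem=16GB   # small CPU-only request suited to an HTC node
#PBS -l walltime=01:00:00
#PBS -j oe                          # merge stdout and stderr

# Run a small serial or lightly threaded analysis task.
./my_analysis                       # hypothetical executable
```

A request this size fits well under the 468-core concurrent-use limit noted above, which is the point of routing small jobs to Casper rather than the Cheyenne share queue.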