The Daily Bulletin delivers news about CISL’s high-performance computing environment, training opportunities, scheduled maintenance, and other events of interest to the user community. To get the Daily Bulletin by email, use this link to subscribe.
Two members of the Computational and Information Systems Lab (CISL) staff are featured in a series of Women’s History Month profiles titled “SuperComputing 2023 Celebrates Thirty-One Women in HPC.” Sheri Mickelson, a software engineer and group manager in CISL’s Technology Development Division, and Summer Wasson, education and multimedia specialist in the High Performance Computing Division, we...
The UCAR Software Engineering Assembly (SEA) will hold a general meeting of members and prospective members from 10 to 11 a.m. MDT on Thursday, April 6. The meeting will be virtual and all UCAR/NCAR staff members who are interested in scientific software engineering are welcome. You do not need to be classified as a “software engineer” to be welcome in the SEA. In the meeting, the current execu...
The Cheyenne cluster is back online after a regional power incident disrupted operations at the NCAR-Wyoming Supercomputing Center on Saturday, March 18, from approximately 10 a.m. to 7:30 p.m. MDT. Cheyenne is operating with one switch offline and one switch in a degraded state as a result of the power disruption. CISL engineers have discovered that certain applications may experience reduced ...
During the week of March 13, the NWSC-3 project team continued resolving myriad hardware and software issues encountered during the initial benchmark runs and the NCAR team’s health checks. Later this week, the Spectrum Scale file system will be mounted on Derecho, allowing the consulting team to start building the software stack that will be used during the Acceptance Test Plan (ATP) ...
Stratus will be down from 7 a.m. to 7 p.m. Tuesday, March 21, for an upgrade. No downtime is planned for Cheyenne, Casper, GLADE, Campaign Storage, Quasar, or JupyterHub.
CISL recommends running small jobs that use only CPUs on the Casper cluster’s high-throughput computing (HTC) nodes. Casper has 64 HTC nodes specifically for running small batch jobs. It also has:
• More shared resources than the Cheyenne share queue.
• A far higher concurrent-use limit for CPU cores than Cheyenne: 468 vs. 36.
• More available memory, plus NVMe swap for overflow and local stora...
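As a sketch, a small CPU-only batch job aimed at Casper’s HTC nodes might look like the following PBS script. The project code, resource request, and executable name are placeholders, and the exact queue and select-statement syntax should be confirmed against the CISL documentation:

```shell
#!/bin/bash
#PBS -N htc_example                 # job name
#PBS -A PROJECT0001                 # placeholder project/allocation code
#PBS -q casper                      # Casper queue name (assumed)
#PBS -l select=1:ncpus=1:mem=4GB    # one core and modest memory suit the HTC nodes
#PBS -l walltime=00:30:00
#PBS -j oe                          # merge stdout and stderr into one file

# Small serial CPU-only workload; replace with your own executable
./my_analysis --input data.nc
```

A script like this would be submitted with `qsub`, e.g. `qsub job.pbs`; requesting only the cores and memory the job actually needs helps it start quickly on the shared HTC nodes.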