The Daily Bulletin delivers news about CISL’s high-performance computing environment, training opportunities, scheduled maintenance, and other events of interest to the user community. To get the Daily Bulletin by email, use this link to subscribe.
Recent upgrades to PBS Pro are bringing changes and new capabilities to the scheduler, most prominently peer scheduling between Cheyenne and Casper (coming October 20). Brian Vanderwende of CISL's Consulting Services Group will present a tutorial at 11 a.m. MST on Tuesday, November 9, to introduce new users to the scheduler and to update experienced users on these recent up...
None planned for Cheyenne, Casper, Glade, HPSS, Campaign Storage, Quasar, Stratus, or JupyterHub.
The new peer-scheduling capability, which enables users to submit jobs from Cheyenne to Casper and from Casper to Cheyenne, will be available on Wednesday, October 20. Users will also be able to define job-dependency rules between jobs on the two systems. Documentation on how to use the new features will be available by Wednesday. Also as of October 20, as announced last month, CISL is changing the Cheye...
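As a rough sketch of how peer scheduling and cross-system dependencies might look in practice, assuming a peer destination named casper@casper-pbs and placeholder script names (these are illustrative assumptions, not confirmed settings; consult the forthcoming CISL documentation for the actual queue and server names):

    # Sketch only: "casper@casper-pbs" and the script names below are
    # assumptions for illustration, not confirmed NCAR settings.

    # From a Cheyenne login node, submit a job to run on Casper and
    # capture its job ID:
    JOBID=$(qsub -q casper@casper-pbs postprocess.pbs)

    # Standard PBS Pro dependency syntax: start a Cheyenne job only
    # after the Casper job completes successfully.
    qsub -W depend=afterok:${JOBID} next_step.pbs

The -W depend=afterok syntax is standard PBS Pro; only the peer-queue naming above is an assumption.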
HPE has made a container image of its Cray Programming Environment (CPE) available to help users prepare for the 2022 delivery of the new Derecho supercomputer. Cheyenne and Casper users can launch CPE by running the crayenv command from either system's login nodes, then build and run applications within the containerized environment as described in this new CISL documentation. To build large a...
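A minimal sketch of that workflow, using the crayenv command named above; the compiler invocation and source file are illustrative (CPE's standard wrappers are cc, CC, and ftn, but check the linked documentation for specifics):

    # Enter the containerized Cray Programming Environment from a
    # Cheyenne or Casper login node:
    crayenv

    # Inside the container, build with a CPE compiler wrapper.
    # The source file and flags here are illustrative only.
    ftn -O2 -o hello hello.f90
    ./hello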
With the upcoming deployment of Derecho in 2022, CISL continues to work with NVIDIA to encourage and support users who want to learn more about GPU programming and computing. As mentioned in previous announcements, NVIDIA regularly partners with institutions to conduct intensive code development and improvement events called GPU Hackathons. One such upcoming event hosted by NERSC in December wi...
CISL recommends running small jobs that use only CPUs on the Casper cluster’s high-throughput computing (HTC) nodes. Casper has 64 HTC nodes specifically for running small batch jobs. It also has:

• More shared resources than the Cheyenne share queue
• A far higher concurrent-use limit for CPU cores than Cheyenne: 468 vs. 36
• More available memory, plus NVMe swap for overflow

Here’s an example.
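The following is a sketch of such a small CPU-only job script, assuming a Casper routing queue named casper and a placeholder project code (the queue name, project code, and resource values are illustrative assumptions; adjust them to your allocation):

    #!/bin/bash
    # Sketch of a small CPU-only batch job aimed at Casper's HTC nodes.
    # Queue name, project code, and resource requests are assumptions.
    #PBS -N htc_example
    #PBS -A PROJECT0001
    #PBS -q casper
    #PBS -l select=1:ncpus=4:mem=20GB
    #PBS -l walltime=01:00:00
    #PBS -j oe

    ./my_small_analysis --threads 4

Submit it with qsub htc_example.pbs from a Casper login node, or, once peer scheduling is live on October 20, from a Cheyenne login node as described above.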