The Daily Bulletin delivers news about CISL’s high-performance computing environment, training opportunities, scheduled maintenance, and other events of interest to the user community. To get the Daily Bulletin by email, use this link to subscribe.
Cheyenne, Casper, GLADE, Campaign Storage, JupyterHub, HPSS, Stratus, and Quasar will be down from 06:00 MST on March 9 until 21:00 MST on March 10 in support of network upgrades.
CISL documentation for compiling code to run on the Cheyenne and Casper systems has been updated to reflect changes to the PGI compiler, which has been rebranded as the NVIDIA HPC (nvhpc) compiler; all future versions will be released under that name. The updated Compiling code documentation advises PGI users to transition to the nvhpc compiler, which has no license limitations, although the PGI compiler remains ...
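For orientation, NVIDIA's rebranding replaces the PGI compiler drivers one-for-one: pgcc becomes nvc, pgc++ becomes nvc++, and pgfortran becomes nvfortran. The sketch below shows a trivial C program with hedged compile lines in the comments; the exact module names on Cheyenne are assumptions here, so consult the Compiling code documentation for the supported invocations.

    /* hello.c -- minimal program for verifying the nvhpc toolchain.
     * Hypothetical compile lines (module names may differ on Cheyenne):
     *   module load nvhpc         (formerly: module load pgi)
     *   nvc -O2 -o hello hello.c  (formerly: pgcc -O2 -o hello hello.c)
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Compiled with the nvhpc toolchain\n");
        return 0;
    }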
New CISL documentation is now available to help users leverage MATLAB’s Parallel Computing Toolbox and Parallel Server to speed up their computations on the Cheyenne system. The toolbox enables single-node parallelism, while the server enables parallelism across nodes. Some setup is required to correctly and optimally use these capabilities on Cheyenne. The documentation shows how to configur...
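As a minimal sketch of the single-node case (the pool size, problem size, and the "CheyennePBS" profile name below are illustrative assumptions, not taken from the CISL documentation), a Parallel Computing Toolbox workflow typically opens a pool of workers on one node and distributes loop iterations with parfor:

    % Sketch: single-node parallelism with the Parallel Computing Toolbox.
    % Pool size and profile names are illustrative assumptions.
    pool = parpool('local', 8);    % start 8 workers on one node

    n = 1e6;
    partial = zeros(1, 8);
    parfor i = 1:8                 % iterations run concurrently on the workers
        partial(i) = sum(rand(1, n));
    end
    total = sum(partial);

    delete(pool);                  % release the workers

    % Multi-node runs use MATLAB Parallel Server instead, selected through a
    % cluster profile (name hypothetical): parpool('CheyennePBS', 72)

The key distinction is the cluster profile passed to parpool: 'local' confines the workers to a single node, while a Parallel Server profile lets the pool span nodes. The documentation covers the Cheyenne-specific setup.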
CISL is now accepting large-scale allocation requests from university-based researchers for the 5.34-petaflops Cheyenne supercomputer and the Casper data analysis and visualization cluster. Requests are due March 22. Cheyenne will continue in operation through mid-2022. Allocations for NCAR's recently announced next-generation supercomputer, which will become operational in early 2022, will ...
NCAR researchers and computational scientists are encouraged to submit requests for NCAR Strategic Capability (NSC) projects to be run on the Cheyenne system. Requests will be accepted through March 24. NSC allocations target large-scale projects lasting one year to a few years that align with NCAR’s scientific priorities and strategic plans. Cheyenne will continue in operation through mid-2022...
The previously announced transition of the Casper cluster’s scheduler from Slurm to the PBS Pro workload manager will begin with the addition of 64 new high-throughput computing (HTC) nodes during the March 9-10 scheduled maintenance downtime. The HTC nodes will be accessible only through PBS Pro. The other Casper nodes will be transitioned to PBS Pro over the next several weeks according to th...