The Daily Bulletin delivers news about CISL’s high-performance computing environment, training opportunities, scheduled maintenance, and other events of interest to the user community. To get the Daily Bulletin by email, use the subscription link.
Video and slides from the “Starting Casper Jobs with PBS Pro” tutorial are now available in the CISL training library. The CISL Consulting Services Group presented this tutorial in March 2021 to help Casper users transition from the Slurm scheduler to PBS Pro. Users who have experience with PBS on Cheyenne were also encouraged to attend because there are a few differences between the PBS d...
CISL staff this morning finished converting Casper nodes from the Slurm scheduler to the PBS Pro workload manager for starting jobs. With that transition complete, users now submit jobs exclusively via PBS, including jobs started through the NCAR JupyterHub service. The following updated documentation provides details for preparing PBS scripts and submitting jobs on Casper (a minimal example script is sketched below): • Migrating Casper jo...
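For users preparing their first PBS scripts on Casper, here is a minimal sketch of a batch job; the project code (UABC0001) is a placeholder and the "casper" queue name and resource values are assumptions to adjust for your own allocation and workload.

#!/bin/bash
#PBS -N hello_casper                  # descriptive job name
#PBS -A UABC0001                      # project allocation code (placeholder)
#PBS -q casper                        # Casper queue (assumed destination name)
#PBS -l select=1:ncpus=1:mem=10GB     # one chunk: 1 CPU core and 10 GB of memory
#PBS -l walltime=01:00:00             # wall-clock limit of one hour
#PBS -j oe                            # merge stdout and stderr into one output file

echo "Hello from $(hostname)"

Submit the script with qsub and monitor it with qstat -u $USER; see the documentation linked above for the full set of options.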
No scheduled downtime: Cheyenne, Casper, GLADE, Campaign Storage, Quasar, Stratus or HPSS systems
Acknowledging the support of NCAR and CISL computing when you publish research results helps ensure continued support from the National Science Foundation and other sources of funding for future high-performance computing (HPC) systems. It is also a requirement of receiving an allocation, as noted in your award letter. The reporting requirements, how to cite your use of various systems, and rec...
Welcome to the new CISL Daily Bulletin. We hope you like the new look and our new approach to serving our HPC user community. The redesigned newsletter is just the first step. It also connects you to the brand new NCAR Advanced Research Computing (ARC) portal. The ARC portal is where you can read the Daily Bulletin online, check the status of our Cheyenne and Casper clusters, and more. Take a l...
Additional nodes in the Casper cluster are now accessible using the PBS Pro workload manager as the migration from Slurm to PBS continues. In addition to the 64 new high-throughput computing (HTC) nodes announced recently, about half of the cluster’s other nodes have been transitioned as of today. PBS jobs can now run on: • 7 CPU-only nodes • 4 GP100 GPU nodes • 2 4xV100 GPU nodes • 3 8xV100 GP...
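For jobs targeting the GPU nodes, the request differs mainly in the select statement. The sketch below assumes an ngpus chunk resource and a gpu_type job resource for choosing between GP100 and V100 nodes; consult the Casper PBS documentation for the exact resource names supported on each node type.

#!/bin/bash
#PBS -N gpu_test
#PBS -A UABC0001                           # project allocation code (placeholder)
#PBS -q casper                             # Casper queue (assumed destination name)
#PBS -l select=1:ncpus=1:ngpus=1:mem=20GB  # one CPU core plus one GPU (assumed chunk resources)
#PBS -l gpu_type=v100                      # request a V100 node (assumed resource name)
#PBS -l walltime=00:30:00                  # wall-clock limit of 30 minutes

nvidia-smi                                 # report which GPU the job was assigned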