The Globally Accessible Data Environment – a centralized file service known as GLADE – uses high-performance GPFS shared file system technology to give users a common view of their data across the HPC, analysis, and visualization resources that CISL manages.
| Quota | Backed up | Purge policy | Description |
|-------|-----------|--------------|-------------|
| 50 GB | Yes | Not purged | User home directory |
| 10 TB | No | 120 days | Temporary computational space |
| 1 TB | No | Not purged | User work space |
| N/A | No | Not purged | Project space allocations (via allocation request) |
| N/A | No | Not purged | Curated collections (CMIP, RDA, others) |
CISL backs up the GLADE home file space several times a week and also creates snapshots to enable users to recover deleted files quickly and easily. Data can remain in each of these spaces in accordance with the policies detailed below. The policies are subject to change; any necessary changes will be announced in advance.
CISL does not provide backups of other spaces. You are responsible for the safe storage of any data that must be preserved.
Best practice: Check your space usage regularly with gladequota as described below, and remove data that you no longer need.
You can conserve GLADE space by storing a few large files, such as tar archives, rather than numerous small, individual files. This is because the system allocates a minimum amount of space for each file, no matter how small it is. Block size and sub-block size differ for each space; the sub-block size is the smallest amount of space the system can allocate to a file, including directories and symlinks. Because of these differences, small files and sets of files may occupy different amounts of space depending on where they are stored. For example, a 4 KB file will use 32 KB of quota space in home but 16 KB in scratch.
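As a minimal sketch of this practice, the commands below bundle many small files into a single tar archive so they consume one allocation rather than one sub-block each (all file and directory names here are illustrative):

```shell
# Create a set of small demo files; names are illustrative.
mkdir -p demo_small_files
for i in $(seq 1 100); do
    echo "data $i" > "demo_small_files/file_$i.txt"
done

# Bundle them into one archive, then remove the originals to free
# the per-file sub-block allocations.
tar -czf demo_small_files.tar.gz demo_small_files
rm -r demo_small_files

ls -l demo_small_files.tar.gz
```

When you need an individual file back, extract it with `tar -xzf demo_small_files.tar.gz demo_small_files/file_1.txt` rather than unpacking the whole archive.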
Each user has a /glade/u/home/username space with a quota of 50 GB* for managing scripts, source code, and small data sets. It is backed up. CISL also creates snapshots of the space to enable users to recover deleted files quickly and easily.
Each user has a /glade/scratch/username space by default, with an individual quota of 10 TB. The scratch file space is intended to support output from large-scale capability runs as well as computing and analysis workflows across CISL-managed resources. It is a temporary space; data stored there should be analyzed and then removed within a short time.
If you expect to need more than your quota of scratch space at some point, contact the NCAR Research Computing help desk to request a temporary increase. Include a paragraph justifying your need for the additional space when making your request.
Individual files are removed from the scratch space automatically if they have not been accessed (for example: modified, read, or copied) in more than 120 days. A file's access time (atime) is updated at most once per day for purposes of I/O efficiency. To check a file's atime, run ls -ul filename.
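For illustration, the commands below create a sample file and display its access time; the filename is a stand-in, and the commented `find` invocation assumes the documented scratch path layout:

```shell
# Inspect a file's access time (atime), the timestamp the 120-day
# purge policy evaluates. The filename is illustrative.
echo "sample data" > example_output.txt
ls -ul example_output.txt    # -u lists atime rather than mtime

# To survey a directory tree for purge candidates (path is an assumption):
# find /glade/scratch/$USER -type f -atime +120 -print
```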
Users may not run “touch” commands or similar commands for the purpose of altering their files' timestamps to circumvent this purge policy. CISL staff will reduce the scratch quotas of users who violate this policy; running jobs may be killed as a result.
Best practice: To help us avoid the need to shorten the retention period, please use this space conscientiously.
Delete files that you no longer need as soon as you're done with them rather than leave large amounts of data sitting untouched for the full 120 days. If you need to retain data on disk for more than 120 days, consider using your /glade/work space or Campaign Storage.
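As a minimal sketch of moving data you must keep beyond 120 days, the commands below relocate a directory from a scratch-style location to a work-style location; `demo_scratch` and `demo_work` are stand-ins for the documented `/glade/scratch/$USER` and `/glade/work/$USER` paths:

```shell
# Stand-in directories mimicking /glade/scratch/$USER and /glade/work/$USER.
mkdir -p demo_scratch/long_term_run demo_work
echo "results" > demo_scratch/long_term_run/out.dat

# Relocate data needed beyond 120 days out of the purged scratch space.
mv demo_scratch/long_term_run demo_work/
ls demo_work/long_term_run
```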
Your /glade/work/username space is best suited for actively working with data sets over time periods greater than what is permitted in the scratch space.
The default quota for these spaces is 1 TB.
Dedicated project spaces are available through our allocations process to support longer-term disk needs that are not easily accommodated by the scratch or work spaces. Allocations for project spaces are made to collaborative groups of users through the University/CHAP, CSL, or NCAR allocations processes. The allocations are based on project needs and resource availability. Requests are reviewed according to the various allocation schedules.
If you have a user account and project space but lack the directory permissions you need for that space, contact the NCAR Research Computing help desk to request changes. Identify the directories and the permissions you are requesting.
CISL generates weekly usage reports in /glade/p to help users manage their data. The reports provide a summary of when files were last accessed, how much space is used, and details for the top 25 users. The files are named access_report.txt and can be found in:
CISL support staff also review these access reports regularly to identify projects that are using project space to store files for long periods and may be better served by an NCAR Campaign Storage allocation.
Plans to begin enforcement of a one-year purge policy have been postponed indefinitely.
Project spaces, like the other GLADE spaces, are not intended as permanent storage, and the limited disk space is a community resource. Use these spaces as efficiently as possible so that other projects can take advantage of the storage.
All files that you own are counted against your GLADE quota, regardless of the directory in which they are stored. If you write files to another user's home or scratch space, for example, they still count against your own individual user quota for that space.
If you reach your disk quotas for the GLADE file spaces (see gladequota below), you may encounter problems until you remove files to make more space available. For example, you may not be able to log in, the system may appear hung, you may not be able to access some of your directories or files, your batch jobs may fail, and commands may not work as expected.
If you cannot log in or execute commands, contact the NCAR Research Computing help desk. You can check your space usage as shown below.
The gladequota command generates a report showing your quota and usage information:
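As a cross-check, assuming only standard POSIX tools, `du` can estimate a directory's footprint. The example below measures `$HOME`; on GLADE you would point it at your own space, such as `/glade/u/home/$USER`:

```shell
# Estimate a directory's disk footprint with du. On GLADE itself,
# gladequota remains the authoritative tool for quota accounting.
du -sh "$HOME"
```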
* Output from the gladequota command will show the home space quota as 100 GB instead of 50 GB. This is because the system stores dual copies of users' data for increased data integrity and safety. In some circumstances, queries of storage utilization from du and ls will also report a duplicated data footprint in your home directory for the same reason.