Nexus
The Nexus is the combined scheduler of resources in UMIACS. Many of our existing computational clusters that have discrete schedulers will be folded into this scheduler in the future (see below). The resource manager for Nexus (as with our other existing computational clusters) is SLURM. Resources are arranged into partitions where users are able to schedule computational jobs. Users are arranged into a number of SLURM accounts based on faculty, lab, or center investments.
Getting Started
All accounts in UMIACS are sponsored. If you don't already have a UMIACS account, please see Accounts for information on getting one. You need a full UMIACS account (not a collaborator account) in order to access Nexus.
Access
Your access to submission nodes for Nexus computational resources is determined by your account sponsor's department, center, or lab affiliation. You can log into the UMIACS Directory CR application and select the Computational Resource (CR) in the list that has the prefix nexus. The Hosts section lists your available submission nodes, generally a pair of nodes of the format nexus<department, lab, or center abbreviation>[00,01], e.g., nexuscfar00 and nexuscfar01.
Note - UMIACS requires multi-factor authentication through our Duo instance. This is completely discrete from both UMD's and CSD's Duo instances. You will need to enroll one or more devices to access resources in UMIACS, and will be prompted to enroll when you log into the Directory application for the first time.
Once you have identified your submission nodes, you can SSH directly into them. From there, you are able to submit to the cluster via our SLURM workload manager. You need to make sure that your submitted jobs have the correct account, partition, and qos.
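For example, assuming nexuscfar00 from the example above is one of your submission nodes and that it resides in the umiacs.umd.edu domain (check the Hosts section of the Directory application for your actual hostnames), logging in might look like:

$ ssh username@nexuscfar00.umiacs.umd.edu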
Jobs
SLURM jobs are submitted with either srun or sbatch, depending on whether you are running an interactive job or a batch job, respectively. You need to provide the where/how/who to run the job and specify the resources you need to run with.
For the where/how/who, you may be required to specify --partition, --qos, and/or --account (respectively) to be able to adequately submit jobs to the Nexus.
For resources, you may need to specify --time for time, --ntasks for CPUs, --mem for RAM, and --gres=gpu for GPUs in your submission arguments to meet your requirements. There are defaults for all four, so if you don't specify something, you may be scheduled with a very minimal set of time and resources (e.g., by default, NO GPUs are included if you do not specify --gres=gpu). For more information about submission flags for GPU resources, see SLURM/JobSubmission#Requesting_GPUs. You can also run man srun on your submission node for a complete list of available submission arguments.
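Putting these flags together, a sketch of a single batch submission (my_job.sh is a placeholder script, and the partition, QoS, and account values are just the general-access defaults described below) might look like:

$ sbatch --partition=tron --qos=default --account=nexus \
    --time=01:00:00 --ntasks=4 --mem=8gb --gres=gpu:1 my_job.sh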
Interactive
Once logged into a submission node, you can run simple interactive jobs. If your session on the submission node is interrupted, the job will be killed. As such, we encourage use of a terminal multiplexer such as Tmux.
$ srun --pty --ntasks 4 --mem=2gb --gres=gpu:1 nvidia-smi -L
GPU 0: NVIDIA RTX A4000 (UUID: GPU-ae5dc1f5-c266-5b9f-58d5-7976e62b3ca1)
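If you want the interactive session to survive a dropped connection, a minimal sketch of wrapping it in Tmux (the session name work is arbitrary) is:

$ tmux new -s work                                    # start a named Tmux session on the submission node
$ srun --pty --ntasks 4 --mem=2gb --gres=gpu:1 bash   # launch the interactive job inside that session
(detach with Ctrl-b d; reattach later with "tmux attach -t work")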
Batch
Batch jobs are scheduled with a script file, with an optional ability to embed job scheduling parameters via #SBATCH lines at the top of the file. You can find some examples in our SLURM/JobSubmission documentation.
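As a minimal sketch only (the job name and workload are placeholders, and the partition, QoS, and account values are the general-access ones described below), a batch script with embedded parameters might look like:

#!/bin/bash
#SBATCH --job-name=example         # placeholder job name
#SBATCH --partition=tron
#SBATCH --qos=default
#SBATCH --account=nexus
#SBATCH --time=01:00:00
#SBATCH --ntasks=4
#SBATCH --mem=8gb
#SBATCH --gres=gpu:1

nvidia-smi -L                      # replace with your actual workload

You would then submit it with sbatch, e.g., sbatch example.sh.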
Partitions
The SLURM resource manager uses partitions to act as job queues, each of which can enforce size, time, and user limits. The Nexus has a number of different partitions of resources. Different Centers, Labs, and Faculty are able to invest in computational resources that are restricted to approved users through these partitions.
Partitions usable by all non-class account users:
- Nexus/Tron - Pool of resources available to all UMIACS and CSD faculty and graduate students.
- Scavenger - Preemption partition that supports nodes from multiple other partitions. More resources are available to schedule simultaneously than in other partitions, however jobs are subject to preemption rules. You are responsible for ensuring your jobs handle this preemption correctly. The SLURM scheduler will simply restart a preempted job with the same submission arguments when it is available to run again.
Partitions usable by ClassAccounts:
- Class - Pool available for UMIACS class accounts sponsored by either UMIACS or CSD faculty.
Partitions usable by specific lab/center users:
- Nexus/CBCB - CBCB lab pool available for CBCB lab members.
- Nexus/CLIP - CLIP lab pool available for CLIP lab members.
- Nexus/Gamma - GAMMA lab pool available for GAMMA lab members.
- Nexus/MBRC - MBRC lab pool available for MBRC lab members.
- Nexus/MC2 - MC2 lab pool available for MC2 lab members.
Quality of Service (QoS)
SLURM uses Quality of Service (QoS) to provide limits on job sizes to users. Note that you should still try to only allocate the minimum resources for your jobs, as resources that each of your jobs schedules are counted against your FairShare priority in the future.
- default - Default job QoS. Limited to 4 cores, 32GB RAM, and 1 GPU per job. The maximum wall time per job is 3 days.
- medium - Limited to 8 cores, 64GB RAM, and 2 GPUs per job. The maximum wall time per job is 2 days.
- high - Limited to 16 cores, 128GB RAM, and 4 GPUs per job. The maximum wall time per job is 1 day.
- scavenger - Limited to 64 cores, 256GB RAM, and 8 GPUs per job. The maximum wall time per job is 2 days. Only 192 total cores, 768GB total RAM, and 24 total GPUs are permitted simultaneously across all of your jobs running with this job QoS. This job QoS is only available in the scavenger partition, and it is the only job QoS available in that partition. To use it, include --partition=scavenger and --account=scavenger in your submission arguments; do not include any job QoS argument other than --qos=scavenger (optional) or submission will fail. An example submission command is shown after this list.
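For instance, a sketch of an interactive scavenger submission (the resource amounts here are arbitrary examples within the per-job limits above) might be:

$ srun --pty --partition=scavenger --account=scavenger --qos=scavenger \
    --ntasks=8 --mem=32gb --gres=gpu:2 --time=1-00:00:00 bash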
You can display these job QoSes from the command line using the show_qos command. Other partition QoSes (see below) or reserved QoSes may also appear in the listing. The above four job QoSes are the ones that everyone can submit with.
[root@nexusctl00 ~]# show_qos
      Name     MaxWall                        MaxTRES MaxJobsPU MaxSubmitPU                      MaxTRESPU               GrpTRES
---------- ----------- ------------------------------ --------- ----------- ------------------------------ ---------------------
    normal
 scavenger  2-00:00:00     cpu=64,gres/gpu=8,mem=256G                          cpu=192,gres/gpu=24,mem=768G
    medium  2-00:00:00       cpu=8,gres/gpu=2,mem=64G
      high  1-00:00:00     cpu=16,gres/gpu=4,mem=128G
   default  3-00:00:00       cpu=4,gres/gpu=1,mem=32G
      tron                                                               80     cpu=32,gres/gpu=4,mem=256G
 huge-long 10-00:00:00     cpu=32,gres/gpu=8,mem=256G
      clip                                                                                                      cpu=526,mem=5522G
     class                                                                      cpu=32,gres/gpu=4,mem=256G
     gamma                                                                                                      cpu=402,mem=3763G
       mc2                                                                                                      cpu=322,mem=3385G
      cbcb                                                                             cpu=299,mem=15424G     cpu=1014,mem=48092G
   highmem 21-00:00:00               cpu=32,mem=2000G
      mbrc                                                                                                      cpu=250,mem=2630G
To find out what accounts and partitions you have access to, first use the show_assoc command to show your account/job QoS combinations. Then, use the scontrol show partition command and note the AllowAccounts entry for each listed partition. You are able to submit to any partition that allows an account that you have. If you need to use an account other than the default account nexus, you will need to specify an account via the --account submission argument.
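For example, you might check your associations and then a partition's allowed accounts like so (tron is used only as an illustration; substitute any partition from the listing):

$ show_assoc                      # lists your account / job QoS combinations
$ scontrol show partition tron    # note the AllowAccounts entry in the output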
Partition QoS
In addition to using QoSes to provide limits on job sizes (termed "job QoS"), SLURM can also have QoSes assigned to partitions themselves (termed "partition QoS"). In Nexus, QoSes with any of the last four columns in the above listing (max jobs per user, max submit jobs per user, max TRES per user, max TRES for the group) defined are partition QoSes.
For example, in the default non-preemption partition (tron), you are restricted to 32 total cores, 4 total GPUs, and 256GB total RAM at once across all jobs you have running in the job QoSes allowed by the partition. You can also only have a maximum of 80 jobs in the partition in the running (R) or pending (PD) states simultaneously. The latter is to prevent excess pending jobs in the tron partition from blocking scavenger partition jobs for extended periods of time.
- If you need to submit more than 80 jobs in a batch at once, you can develop and run an "outer submission script" that periodically attempts to run the "inner submission script" to submit the jobs in the batch until all job submissions are successful. The outer submission script should use looping logic to check if you are at the max job limit and should then retry submission after waiting for some time interval. An example outer submission script is as follows. In this example, example_inner.sh is your inner submission script and you want to run 200 jobs.
#!/bin/bash
numjobs=200
i=0
while [ $i -lt $numjobs ]
do
    # Retry while sbatch reports that we are at the per-user submission limit.
    while [[ "$(sbatch example_inner.sh 2>&1)" =~ "QOSMaxSubmitJobPerUserLimit" ]]
    do
        echo "Currently at maximum job submissions allowed."
        echo "Waiting for 60 seconds before trying to submit more jobs."
        sleep 60
    done
    i=$(( $i + 1 ))
    echo "Submitted job $i of $numjobs"
done
It is suggested that you run outer submission scripts in Tmux sessions so that they keep running even if the terminal window executing them is interrupted.
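The name example_inner.sh above is purely illustrative; a minimal sketch of an inner submission script under the same assumptions (general-access partition, QoS, and account, with a placeholder workload) might look like:

#!/bin/bash
#SBATCH --partition=tron
#SBATCH --qos=default
#SBATCH --account=nexus
#SBATCH --time=00:30:00
#SBATCH --ntasks=2
#SBATCH --mem=4gb

# Replace this with the actual work each of the 200 jobs should perform.
echo "Running on $(hostname) as job $SLURM_JOB_ID"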
Lab/group-specific partitions may also have partition QoSes intended to limit the total number of resources consumed by all users in that lab/group that are using the partition (codified by GrpTRES in the output above for the partition QoS name that matches the lab/group partition name). Note that the exact values above for TRES are not fixed and may fluctuate as more resources are added to various partitions.
Storage
All storage available in Nexus is currently NFS based. We will be introducing some changes for Phase 2 to support high performance GPUDirect Storage (GDS). These storage allocation procedures will be revised and approved by a joint UMIACS and CSD faculty committee by the launch of Phase 2.
Home Directories
Home directories in the Nexus computational infrastructure are available from the Institute's NFShomes as /nfshomes/USERNAME, where USERNAME is your username. These home directories have very limited storage (20GB, cannot be increased) and are intended for your personal files, configuration, and source code. Your home directory is not intended for data sets or other large scale data holdings. You are encouraged to utilize our GitLab infrastructure to host your code repositories.
NOTE: To check your quota on this directory you will need to use the quota -s command.
Your home directory data is fully protected: it has snapshots and is backed up nightly.
Other standalone compute clusters have begun to fold into partitions in Nexus. The corresponding home directories used by these clusters (if not /nfshomes) will be gradually phased out in favor of the /nfshomes home directories.
Scratch Directories
Scratch data has no data protection including no snapshots and the data is not backed up. There are two types of scratch directories in the Nexus compute infrastructure:
- Network scratch directories
- Local scratch directories
Please note that class accounts do not have network scratch directories.
Network Scratch Directories
You are allocated 200GB of scratch space via NFS from /fs/nexus-scratch/$username. It is not backed up or protected in any way. This directory is automounted, so you will need to cd into the directory or specify a fully qualified file path to access it.
You can view your quota usage by running df -h /fs/nexus-scratch/$username
You may request a permanent increase of up to 400GB total space without any faculty approval by contacting staff. If you need space beyond 400GB, you will need faculty approval and/or a project allocation. If you choose to increase your scratch space beyond 400GB, the increased space is also subject to the 270 TB days limit mentioned in the project allocation section before we check back in for renewal. For example, if you request 1.4TB total space, you may have this for 270 days (1TB beyond the 400GB permanent increase).
This file system is available on all submission, data management, and computational nodes within the cluster.
Local Scratch Directories
Each computational node that you can schedule compute jobs on also has one or more local scratch directories. These are always named /scratch0, /scratch1, etc. These are almost always more performant than any other storage available to the job. However, you must stage your data into them within the confines of your job and stage the data back out before the end of your job.
These local scratch directories have a tmpwatch job which will delete unaccessed data after 90 days; it is scheduled via maintenance jobs to run once a month during our monthly maintenance windows. Please make sure you secure any data you write to these directories at the end of your job.
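A sketch of staging data through local scratch within a batch job is below; the dataset name, result directory, and resource values are placeholders, and your node may have /scratch1 or another directory instead of /scratch0.

#!/bin/bash
#SBATCH --partition=tron
#SBATCH --qos=default
#SBATCH --account=nexus
#SBATCH --time=04:00:00
#SBATCH --ntasks=4
#SBATCH --mem=16gb

# Stage input data into fast local scratch at the start of the job.
WORKDIR=/scratch0/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cp -r /fs/nexus-scratch/$USER/my_dataset "$WORKDIR/"

# ... run your computation here, reading from "$WORKDIR/my_dataset"
#     and writing results to "$WORKDIR/results" ...

# Stage results back out to network storage before the job ends.
cp -r "$WORKDIR/results" /fs/nexus-scratch/$USER/
rm -rf "$WORKDIR"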
Faculty Allocations
Each faculty member can be allocated 1TB of lab space upon request. We can also support grouping these individual allocations together into larger center, lab, or research group allocations if desired by the faculty. Please contact staff to inquire.
This lab space does not have snapshots by default (they are available if requested), but it is backed up.
Project Allocations
Project allocations are available per user for 270 TB days; you can have a 1TB allocation for up to 270 days, a 3TB allocation for 90 days, etc. A single faculty member cannot have more than 20 TB of sponsored account project allocations active at any point.
The minimum storage space you can request (maximum length) is 500GB (540 days) and the minimum allocation length you can request (maximum storage) is 30 days (9TB).
To request an allocation, please contact staff with your account sponsor involved in the conversation. Please include the following details:
- Project Name (short)
- Description
- Size (1TB, 2TB, etc.)
- Length in days (270 days, 135 days, etc.)
- Other user(s) that need to access the allocation, if any
These allocations are available via /fs/nexus-projects/$project_name. Renewal is not guaranteed to be available due to limits on the amount of total storage. Near the end of the allocation period, staff will contact you and ask if you are still in need of the storage allocation. If you are no longer in need of the storage allocation, you will need to relocate all desired data within 14 days of the end of the allocation period. Staff will then remove the allocation. If you do not respond to staff's request within 14 days of the end of the allocation period, staff will remove the allocation.
Datasets
We have read-only dataset storage available at /fs/nexus-datasets. If there are datasets that you would like to see curated and available, please see this page.
We will have a more formal process to approve datasets by Phase 2 of Nexus.
Migrations
If you are a user of an existing cluster that is in the process of being folded into Nexus now or in the near future, your cluster-specific migration information will be listed here.
- (n/a)