The Nexus is the combined scheduler of resources in UMIACS. Many of our existing computational clusters that have discrete schedulers will be folded into this scheduler in the future. The resource manager for Nexus (as with our other existing computational clusters) is SLURM. Resources are arranged into partitions where users are able to schedule computational jobs. Users are arranged into a number of SLURM accounts based on faculty, lab, or center investments.
All accounts in UMIACS are sponsored. If you don't already have a UMIACS account, please see Nexus/Accounts for information on getting one.
The submission nodes for the Nexus computational resources are determined by department, center, or lab affiliation. You can log into the UMIACS Directory CR application and select the Computational Resource (CR) in the list that has the prefix nexus. The Hosts section lists your available login nodes.
Note - UMIACS requires multi-factor authentication through our Duo instance. This is completely separate from both UMD's and CSD's Duo instances. You will need to enroll one or more devices to access resources in UMIACS, and will be prompted to enroll when you log into the Directory application for the first time.
Once you have identified your submission nodes, you are able to SSH directly into them. From there, you are able to submit to the cluster via our SLURM workload manager. You need to make sure that your submitted jobs have the correct account, partition, and qos.
SLURM jobs are submitted with either srun or sbatch, depending on whether you are running an interactive or a batch job, respectively. You need to specify both where/who the job will run as and the resources it needs. There are defaults for both, so if you don't specify something you may be scheduled with a very minimal set of time and resources (including NO GPUs unless specifically requested).
For the where/who, you may be required to specify --partition (along with the matching --account and --qos) to be able to adequately submit jobs to the Nexus.
For resources, you may need to specify cpus (--ntasks), memory (--mem), and GPUs (--gres=gpu) in your submission arguments to meet your requirements. For more information about submitting for GPU resources, see SLURM/JobSubmission#Requesting_GPUs. You can also run man srun on your submission node for a complete list of available submission arguments.
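For example, a request that spells out the where/who options alongside the resources might look like the following sketch; the account name (nexus) is illustrative, so substitute the account, partition, and QoS you have actually been granted:

$ srun --account=nexus --partition=tron --qos=default --ntasks=4 --mem=8gb --gres=gpu:1 hostname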
Once logged into a submission node, you can run simple interactive jobs. If your connection to the submission node is interrupted, the job will be killed. As such, we encourage the use of a terminal multiplexer such as tmux.
$ srun --pty --ntasks 4 --mem=2gb --gres=gpu:1 nvidia-smi -L
GPU 0: NVIDIA RTX A4000 (UUID: GPU-ae5dc1f5-c266-5b9f-58d5-7976e62b3ca1)
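Because a dropped connection to the submission node kills any interactive job started from it, one common pattern is to start the job inside a tmux session so it survives a disconnect; the session name below is arbitrary:

$ tmux new -s myjob
$ srun --pty --ntasks 4 --mem=2gb --gres=gpu:1 bash

Detach from the session with Ctrl-b d and reattach later with tmux attach -t myjob.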
Batch jobs are scheduled with a script file, with the option to embed job scheduling parameters via #SBATCH directive lines at the top of the file. You can find some examples in our SLURM/JobSubmission documentation.
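As a sketch, a minimal batch script could look like the following; the job name, account, partition, and resource values are placeholders and must stay within the QoS limits described below:

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=nexus
#SBATCH --partition=tron
#SBATCH --qos=default
#SBATCH --ntasks=4
#SBATCH --mem=16gb
#SBATCH --gres=gpu:1
#SBATCH --time=1-00:00:00

# Commands below run inside the allocation; this one just lists the GPU(s) granted.
srun nvidia-smi -L

Save this as, e.g., example.sh and submit it with sbatch example.sh.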
The SLURM resource manager uses partitions to act as job queues, each of which can enforce size, time, and user limits. The Nexus (when fully operational) will have a number of different partitions of resources. Different Centers, Labs, and Faculty will be able to invest in computational resources that will be restricted to approved users through these partitions.
- Nexus/Tron - This is the pool of resources available to all UMIACS and CSD faculty and graduate students. It also provides resources for undergraduate and graduate teaching.
- Scavenger - This is a preemption partition that supports nodes from multiple other partitions. Jobs are subject to preemption rules; however, more resources are available to schedule simultaneously than in other partitions. You are responsible for ensuring your jobs handle this preemption correctly, as the SLURM scheduler will simply restart each preempted job with the same submission arguments when resources become available again (see the sketch below).
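For example, a scavenger submission might look like the following sketch. The train.sh script is hypothetical; since a preempted job is restarted from the beginning with the same arguments, whatever it runs should periodically write checkpoints it can resume from:

$ sbatch --partition=scavenger --qos=scavenger --ntasks=8 --mem=32gb --gres=gpu:2 train.sh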
Quality of Service (QoS)
SLURM uses QoSes to place limits on job sizes for users. Note that you should still allocate only the minimum resources your jobs need, as the resources each of your jobs schedules count against your FairShare priority in the future.
- default - Default QoS. Limited to 4 cores, 32GB RAM, and 1 GPU per job. The maximum wall time per job is 3 days and 4 jobs are permitted simultaneously.
- medium - Limited to 8 cores, 64GB RAM, and 2 GPUs per job (see the sketch after this list). The maximum wall time per job is 2 days and 2 jobs are permitted simultaneously.
- high - Limited to 16 cores, 128GB RAM, and 4 GPUs per job. The maximum wall time per job is 1 day and only 1 job is permitted simultaneously.
- scavenger - Limited to 64 cores, 256GB RAM, and 8 GPUs per job. The maximum wall time per job is 2 days and only 16 GPUs are permitted simultaneously. This QoS is only available in the scavenger partition.
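As a concrete illustration of the limits above, a job that needs 2 GPUs exceeds the default QoS and would instead be submitted under medium; the partition and script names here are placeholders:

$ sbatch --partition=tron --qos=medium --ntasks=8 --mem=64gb --gres=gpu:2 train.sh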
You can display these QoSes from the command line using:

$ show_qos
      Name     MaxWall MaxJobs                        MaxTRES     MaxTRESPU   Priority
---------- ----------- ------- ------------------------------ ------------- ----------
 scavenger  2-00:00:00             cpu=64,gres/gpu=8,mem=256G   gres/gpu=16          0
    medium  2-00:00:00       2       cpu=8,gres/gpu=2,mem=64G                        0
      high  1-00:00:00       1     cpu=16,gres/gpu=4,mem=128G                        0
   default  3-00:00:00       4       cpu=4,gres/gpu=1,mem=32G                        0
      tron                                                       gres/gpu=4          0
Currently, in our non-preemption partition, you will be restricted to 4 GPUs in use at once (the gres/gpu=4 limit on the tron row above).
To find out what accounts and partitions you have access to, use the show_assoc command on a submission node.
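If that wrapper script is not available in your path, the same associations can be listed directly from SLURM's accounting database; this is a sketch using the standard sacctmgr tool:

$ sacctmgr show associations where user=$USER format=Account,Partition,QOS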
All storage available in Nexus is currently NFS based. We will be introducing some changes for Phase 2 to support high performance GPUDirect Storage (GDS). These storage allocation procedures will be revised by a joint UMIACS and CSD faculty committee and approved by the launch of Phase 2.
Each user account in UMIACS is allocated 20GB of storage in their home directory, /fs/nfshomes/$username. This file system has snapshots and backups available. The quota is fixed and cannot be increased.
In Phase 2, other standalone compute clusters will fold into partitions in Nexus and you will start to have the same home directory across all systems.
Each user will be allocated a 200GB scratch directory under /fs/nexus-scratch/$username. If your directory is completely filled, you may request a permanent increase of up to 400GB total. This space does not have snapshots and is not backed up. Please ensure that any data you have under your scratch directory is reproducible.
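A quick way to see how much of your home or scratch allocation you are currently using is to total the space under each directory; note that du reports used space, not the quota itself:

$ du -sh ~ /fs/nexus-scratch/$USER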
Each faculty member will have 1TB of lab space allocated to them when their account is installed. We can also support grouping these individual allocations together into larger center, lab, or research group allocations if desired by the faculty involved. Please contact firstname.lastname@example.org to inquire.
This lab space does not have snapshots by default (though they are available upon request), but it is backed up.
Project allocations are available per user for 270 TB-days; for example, you can have a 1TB allocation for up to 270 days, a 3TB allocation for 90 days, etc. A single faculty member cannot have more than 20TB of sponsored account project allocations active at any point.
The minimum allocation length you can request (at the maximum storage size) is 30 days (9TB), and the minimum storage size you can request (at the maximum length) is 500GB (540 days).
To request an allocation, please send mail to email@example.com with your account sponsor CC'd. Please include the following details:
- Project Name (short)
- Size (1TB, 2TB, etc.)
- Length in days (270 days, 135 days, etc.)
These allocations will be available via /fs/nexus-projects/$project_name.
Datasets are hosted in /fs/nexus-datasets. If you want to request a dataset for consideration, please email firstname.lastname@example.org. We will have a more formal process to approve datasets by Phase 2 of Nexus. Please note that datasets that require accepting a license will need to be reviewed by UMD's Office of Research Administration (ORA), which may require some time to process.