Nexus

The Nexus is the combined scheduler of resources in UMIACS. Many of our existing computational clusters that have discrete schedulers will be folded into this scheduler in the future. The resource manager for Nexus (as with our other existing computational clusters) is SLURM. Resources are arranged into partitions where users are able to schedule computational jobs. Users are arranged into a number of SLURM accounts based on faculty, lab, or center investments.

Getting Started

All accounts in UMIACS are sponsored. If you don't already have a UMIACS account, please see Nexus/Accounts for information on getting one.

Access

The submission nodes for the Nexus computational resources are determined by department, center, or lab affiliation. You can log into the UMIACS Directory application and select the Computational Resource (CR) in the list whose name has the prefix nexus. The Hosts section lists your available login nodes.

Note - UMIACS requires multi-factor authentication through our Duo instance. This is completely separate from both UMD's and CSD's Duo instances. You will need to enroll one or more devices to access resources in UMIACS, and you will be prompted to enroll when you log into the Directory application for the first time.

Once you have identified your submission nodes, you can SSH directly into them. From there, you are able to submit to the cluster via our SLURM workload manager. You need to make sure that your submitted jobs specify the correct account, partition, and QoS.
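
For example, with a login node name taken from the Hosts listing for your CR (the host name below is only a placeholder), you would connect with your UMIACS username:

$ ssh username@<login-node-from-Hosts-listing>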

Jobs

SLURM jobs are submitted with either srun or sbatch, depending on whether you are running an interactive or a batch job, respectively. You need to provide the where/how/who to run the job and specify the resources you need to run with.

For the where/how/who, you may be required to specify --partition, --qos, and/or --account (respectively) to submit jobs to Nexus.

For resources, you may need to specify --time for wall time, --ntasks for CPUs, --mem for RAM, and --gres=gpu for GPUs in your submission arguments to meet your requirements. There are defaults for all four, so if you don't specify something, you may be scheduled with a very minimal set of time and resources (e.g., by default, NO GPUs are included if you do not specify --gres=gpu). For more information about submission flags for GPU resources, see SLURM/JobSubmission#Requesting_GPUs. You can also run man srun on your submission node for a complete list of available submission arguments.
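
For example, an interactive submission that sets all four explicitly might look like the following sketch, where <account> and <partition> are placeholders for values that the show_assoc command (described below) reports for your account:

$ srun --account=<account> --partition=<partition> --qos=default --time=01:00:00 --ntasks=4 --mem=8gb --gres=gpu:1 nvidia-smi -L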

Interactive

Once logged into a submission node, you can run simple interactive jobs. If your session on the submission node is interrupted, the job will be killed. As such, we encourage use of a terminal multiplexer such as Tmux.

$ srun --pty --ntasks 4 --mem=2gb --gres=gpu:1 nvidia-smi -L
GPU 0: NVIDIA RTX A4000 (UUID: GPU-ae5dc1f5-c266-5b9f-58d5-7976e62b3ca1)
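
A minimal Tmux workflow, as a sketch (the session name here is arbitrary): start a named session, run your interactive job inside it, detach with Ctrl-b d, and reattach after reconnecting to the submission node.

$ tmux new -s interactive                               # start a named session on the submission node
$ srun --pty --ntasks 4 --mem=2gb --gres=gpu:1 bash     # run an interactive shell job inside the session
$ tmux attach -t interactive                            # reattach to the session after reconnecting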

Batch

Batch jobs are scheduled with a script file, with an optional ability to embed job scheduling parameters via #SBATCH directives at the top of the file. You can find some examples in our SLURM/JobSubmission documentation.
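
As a minimal sketch (the account, partition, and payload command here are placeholders; adjust them to your own values), a batch script saved as example.sbatch might look like:

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=<account>
#SBATCH --partition=<partition>
#SBATCH --qos=default
#SBATCH --time=04:00:00
#SBATCH --ntasks=4
#SBATCH --mem=16gb
#SBATCH --gres=gpu:1

srun nvidia-smi -L

Submit it from a submission node with:

$ sbatch example.sbatch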

Partitions

The SLURM resource manager uses partitions to act as job queues, each of which can enforce size, time, and user limits. The Nexus (when fully operational) will have a number of different partitions of resources. Different Centers, Labs, and Faculty will be able to invest in computational resources that will be restricted to approved users through these partitions.

  • Nexus/Tron - This is the pool of resources available to all UMIACS and CSD faculty and graduate students. It also provides access to resources for undergraduate and graduate teaching.
  • Scavenger - This is a preemption partition that supports nodes from multiple other partitions. More resources are available to schedule simultaneously than in other partitions; however, jobs are subject to preemption. You are responsible for ensuring your jobs handle this preemption correctly, as the SLURM scheduler will simply restart each preempted job with the same submission arguments once resources are available for it to run again; a sketch of one way to handle this follows this list.
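
One common pattern for handling preemption, sketched below with a placeholder program and checkpoint file, is to write checkpoints periodically and to resume from the latest checkpoint whenever the scheduler restarts the job:

#!/bin/bash
# (submission flags for the scavenger partition go here; see the QoS section below)

# my_program and checkpoint.dat are placeholders; the program is assumed to
# save checkpoint.dat periodically and to accept a --resume flag
if [ -f checkpoint.dat ]; then
    ./my_program --resume checkpoint.dat
else
    ./my_program
fi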

Quality of Service (QoS)

SLURM uses QoSes to place limits on the size of users' jobs. Note that you should still try to allocate only the minimum resources your jobs need, as the resources each of your jobs reserves count against your FairShare priority in the future.

  • default - Default QoS. Limited to 4 cores, 32GB RAM, and 1 GPU per job. The maximum wall time per job is 3 days. 4 jobs are permitted simultaneously.
  • medium - Limited to 8 cores, 64GB RAM, and 2 GPUs per job. The maximum wall time per job is 2 days. 2 jobs are permitted simultaneously.
  • high - Limited to 16 cores, 128GB RAM, and 4 GPUs per job. The maximum wall time per job is 1 day. Only 1 job is permitted simultaneously.
  • scavenger - Limited to 64 cores, 256GB RAM, and 8 GPUs per job. The maximum wall time per job is 2 days. Only 16 GPUs are permitted simultaneously. This QoS is available only in the scavenger partition, and it is the only QoS that the scavenger partition accepts. To use it, include --partition=scavenger and --account=scavenger in your submission arguments (an example follows this list). Do not include any QoS argument other than --qos=scavenger or the submission will fail.
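
For example, a scavenger submission might look like the following (the script name is a placeholder, and the requested resources are within the per-job limits above):

$ sbatch --partition=scavenger --account=scavenger --qos=scavenger --time=1-00:00:00 --ntasks=8 --mem=32gb --gres=gpu:2 my_job.sbatch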

You can display these QoSes from the command line using the show_qos command. Other lab- or group-specific QoSes, as well as reserved QoSes, may also appear in the listing. The above four QoSes are the ones that everyone can submit to.

# show_qos
            Name     MaxWall MaxJobs                        MaxTRES     MaxTRESPU   Priority
---------------- ----------- ------- ------------------------------ ------------- ----------
          normal                                                                           0
       scavenger  2-00:00:00             cpu=64,gres/gpu=8,mem=256G   gres/gpu=16          0
          medium  2-00:00:00       2       cpu=8,gres/gpu=2,mem=64G                        0
            high  1-00:00:00       1     cpu=16,gres/gpu=4,mem=128G                        0
         default  3-00:00:00       4       cpu=4,gres/gpu=1,mem=32G                        0
            tron                                                       gres/gpu=4          0
      gamma-long 10-00:00:00             cpu=32,gres/gpu=8,mem=256G                        0

Currently, in our non-preemption partition, you are restricted to using 4 GPUs at once.

To find out what accounts and partitions you have access to, use the show_assoc command.

Storage

All storage available in Nexus is currently NFS based. We will be introducing some changes for Phase 2 to support high performance GPUDirect Storage (GDS). The storage allocation procedures below will be revised by a joint UMIACS and CSD faculty committee and approved by the launch of Phase 2.

Home Directories

Each user account in UMIACS is allocated 20GB of home directory storage in /fs/nfshomes/$username. This file system has snapshots and backups available. The quota is fixed and cannot be increased.

In Phase 2, other standalone compute clusters will be folded into partitions in Nexus, and you will have the same home directory across all of these systems.

Scratch Directories

Each user is allocated a 200GB network scratch directory under /fs/nexus-scratch/$username. If your network scratch directory is completely filled, you may request a permanent increase of up to 400GB total. This space does not have snapshots and is not backed up. Please ensure that any data you have under your network scratch directory is reproducible.

Each computational node that a user can schedule compute jobs on also has one or more local scratch directories. These are always named /scratch0, /scratch1, etc. These are almost always more performant than any other storage available to the job. However, users must stage their data within the confines of their job: copy input data in at the start of the job and copy results back out before the job ends (a sketch of this pattern follows).
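
A minimal staging sketch in a batch script, assuming you can create a job-specific directory under /scratch0 (the program and data names here are placeholder examples):

#!/bin/bash
#SBATCH --time=08:00:00
#SBATCH --ntasks=4
#SBATCH --mem=16gb

# stage input data onto fast local scratch under a job-specific directory
WORKDIR=/scratch0/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cp -r /fs/nexus-scratch/$USER/input_data "$WORKDIR"/

# run the (placeholder) program from your home directory against the local copy
cd "$WORKDIR"
"$HOME"/my_program input_data output_data

# stage results back out and clean up before the job ends
cp -r output_data /fs/nexus-scratch/$USER/
rm -rf "$WORKDIR"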

These local scratch directories are subject to a tmpwatch job, scheduled via monthly maintenance jobs that run at 1am, which deletes any data that has not been accessed in 90 days. Please make sure you copy out any data you need from these directories before the end of your job.

Faculty Allocations

Each faculty member is allocated 1TB of lab space when their account is installed. We can also support grouping these individual allocations together into larger center, lab, or research group allocations if desired by the faculty involved. Please contact staff to inquire.

This lab space does not have snapshots by default (though they are available upon request), but it is backed up.

Project Allocations

Project allocations are available per user for 270 TB days; for example, you can have a 1TB allocation for up to 270 days, a 3TB allocation for 90 days, etc. A single faculty member cannot have more than 20TB of sponsored account project allocations active at any point.

The minimum storage space you can request (maximum length) is 500GB (540 days) and the minimum allocation length you can request (maximum storage) is 30 days (9TB).

To request an allocation, please contact staff and include your account sponsor in the conversation. Please include the following details:

  • Project Name (short)
  • Description
  • Size (1TB, 2TB, etc.)
  • Length in days (270 days, 135 days, etc.)

These allocations will be available via /fs/nexus-projects/$project_name.

Datasets

We have read-only dataset storage available at /fs/nexus-datasets. If there are datasets that you would like to see curated and available, please see this page.

We will have a more formal process to approve datasets by Phase 2 of Nexus.