SLURM/Priority

SLURM at UMIACS is configured to prioritize jobs based on a number of factors, termed multifactor priority in SLURM.

These factors include:

  • Age of job, i.e. time spent waiting in the queue to run
  • Partition the job was submitted to
  • Fair-share of resources
  • "Nice" value the job was submitted with

Age

The longer a job remains eligible to run but cannot because all available resources are in use, the higher its scheduling priority becomes. The priority modifier for this factor reaches its limit after 7 days.

Partition

The partition named scavenger on each of our clusters always has a lower priority factor for its jobs than all other partitions on that cluster. As mentioned in other UMIACS cluster-specific documentation, jobs submitted to this partition are also preemptable. These two design choices give the partition its name: jobs submitted to the scavenger partition "scavenge" for available resources on the cluster rather than consuming a dedicated chunk of resources, and they can be interrupted by jobs that do consume dedicated chunks.

All other partitions on our clusters have the same priority factor.

Fair-share

The more resources your jobs have already consumed within an account, the lower the priority factor your future jobs will have compared to jobs from other users in the same account who have consumed fewer resources (hence "fair-share"). Additionally, if multiple accounts can submit to a partition and the total resources consumed by all users' jobs in account A is greater than the total consumed by all users' jobs in account B, future jobs from users in account A will have a lower priority factor than future jobs from users in account B.

You can view the various fair-share statistics with the command sshare -l. It will show your specific FairShare values (always between 0.0 and 1.0) within accounts that you have access to. You can also view other accounts' Level Fairshare (LevelFS).

Account                    User  RawShares  NormShares    RawUsage   NormUsage  EffectvUsage  FairShare    LevelFS                    GrpTRESMins                    TRESRunMins
-------------------- ---------- ---------- ----------- ----------- ----------- ------------- ---------- ---------- ------------------------------ ------------------------------
root                                          0.000000 13357781570                  1.000000                                                      cpu=994689,mem=8706484555,ene+
 cbcb                                    1    0.111111    26568079    0.001990      0.001990             55.826073                                cpu=581,mem=76242397,energy=0+
 class                                   1    0.111111    71647791    0.005367      0.005367             20.701148                                cpu=0,mem=0,energy=0,node=0,b+
 clip                                    1    0.111111   985905301    0.073844      0.073844              1.504667                                cpu=13533,mem=63760930,energy+
 gamma                                   1    0.111111   819825375    0.061416      0.061416              1.809155                                cpu=250117,mem=1128084138,ene+
 mc2                                     1    0.111111          11    0.000000      0.000000            1.2606e+08                                cpu=0,mem=0,energy=0,node=0,b+
 nexus                                   1    0.111111  2632111243    0.197035      0.197035              0.563914                                cpu=170772,mem=2035642767,ene+
  nexus                username          1    0.000829         308    0.000000      0.000000   0.470629 7.0587e+03                                cpu=0,mem=0,energy=0,node=0,b+
 scavenger                               1    0.111111  8821718910    0.660346      0.660346              0.168262                                cpu=559683,mem=5402754321,ene+
  scavenger            username          1    0.000829           0    0.000000      0.000000   0.419187        inf                                cpu=0,mem=0,energy=0,node=0,b+
 staff                                   1    0.111111           0    0.000000      0.000000                   inf                                cpu=0,mem=0,energy=0,node=0,b+

The actual resource weightings for the three main resources (memory per GB, CPU cores, and GPUs if applicable) are per-partition and can be viewed in the TRESBillingWeights line in the output of scontrol show partition. The billing value for a job is the sum of all resource weightings for resources the job has requested. This value is then multiplied by the amount of time a job has run in seconds to get the amount it contributes to the RawUsage for the association within the account it is running under.
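
As a rough illustration, here is a minimal sketch (in Python, with hypothetical weights and a hypothetical job request, not taken from any real partition configuration) of how a job's billing value and its RawUsage contribution are derived:

def billing_value(request, weights):
    """Sum of each requested resource multiplied by its per-partition weight."""
    return sum(weights.get(res, 0.0) * amount for res, amount in request.items())

# Hypothetical TRESBillingWeights for one partition (per GB of memory, per CPU core, per GPU card)
weights = {"mem_gb": 1.0, "cpu": 20.0, "gpu": 320.0}

# Hypothetical job: 64 GB of memory, 8 CPU cores, 2 GPUs, running for 2 hours
request = {"mem_gb": 64, "cpu": 8, "gpu": 2}
runtime_seconds = 2 * 60 * 60

billing = billing_value(request, weights)           # 1.0*64 + 20.0*8 + 320.0*2 = 864.0
raw_usage_contribution = billing * runtime_seconds  # 864.0 * 7200 = 6220800.0
print(billing, raw_usage_contribution)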

There are two main algorithms we use for resource weightings, per cluster:

Modern

This weighting algorithm is soon to be in use on the following clusters:

  • CML (after 2/23/2023)
  • Nexus (after 2/23/2023)

Resources have algorithmically computed floating point billing values.

GPU-capable partitions

Each resource (memory/CPU/GPU) is given a weighting value such that their relative billings to each other are equal (33.33% each). The values are then rounded to whole numbers. Memory is typically the most abundant resource by unit, so it is given a weighting value of 1.0 and the CPU/GPU values are adjusted accordingly.
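
Here is a minimal sketch of that computation, assuming entirely hypothetical partition totals:

total_mem_gb = 40960   # hypothetical total memory in GB
total_cpus   = 2048    # hypothetical total CPU cores
total_gpus   = 128     # hypothetical total GPU cards

mem_weight = 1.0
cpu_weight = round(mem_weight * total_mem_gb / total_cpus)  # 20, so CPU bills 20 * 2048 = 40960
gpu_weight = round(mem_weight * total_mem_gb / total_gpus)  # 320, so GPU bills 320 * 128 = 40960

# Each resource now accounts for an equal share (33.33%) of the maximum possible billing.
print(mem_weight, cpu_weight, gpu_weight)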

Different GPU types may also be weighted differently within the GPU portion of the relative billing. A baseline GPU type is first chosen for each cluster. All GPUs of that type, and of other types with lower FP32 performance (in TFLOPS, rounded to one decimal place), are given a weighting factor of 1.0. GPU types with higher FP32 performance than the baseline are given a weighting factor calculated by dividing their FP32 performance by the baseline GPU's performance, rounded to two decimal places. The weighting value for each GPU type is then determined by normalizing the sum of all GPU cards of each type, multiplied by their weighting factors, against the GPU portion of the relative billing. The values are then rounded to whole numbers.
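
The following sketch extends the one above to multiple GPU types; all card counts, FP32 TFLOPS figures, and the baseline value are hypothetical:

total_mem_gb = 40960      # hypothetical; memory weight is 1.0 per GB
baseline_tflops = 16.3    # hypothetical FP32 TFLOPS of the chosen baseline GPU type

# Hypothetical GPU inventory: type -> (card count, FP32 TFLOPS)
gpus = {
    "slower_gpu":   (32, 14.0),   # at or below baseline -> factor 1.0
    "baseline_gpu": (64, 16.3),   # the baseline type    -> factor 1.0
    "faster_gpu":   (32, 35.6),   # above baseline       -> factor 35.6 / 16.3 = 2.18
}

factors = {
    name: 1.0 if tflops <= baseline_tflops else round(tflops / baseline_tflops, 2)
    for name, (_, tflops) in gpus.items()
}

# Effective number of baseline-equivalent GPU cards
effective_gpus = sum(count * factors[name] for name, (count, _) in gpus.items())

# Scale so that the GPUs as a whole bill the same as memory (weight 1.0 per GB),
# then round each type's weight to a whole number.
base_gpu_weight = total_mem_gb / effective_gpus
gpu_weights = {name: round(base_gpu_weight * factors[name]) for name in gpus}
print(gpu_weights)   # roughly {"slower_gpu": 247, "baseline_gpu": 247, "faster_gpu": 539}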

The current baseline GPUs per cluster are:

CPU-only partitions

Each resource (memory/CPU) is first given a weighting value such that their relative billings to each other are equal (50% each). The values are then rounded to whole numbers. Memory is typically the most abundant resource by unit, so it is given a weighting value of 1.0 and the CPU value is adjusted accordingly. The final CPU weight is then divided by 10, which translates to roughly 90.9% of the billing weight being for memory and 9.1% being for CPU. This is done so as to not affect accounts' fair-share priority factors as much when running CPU-only jobs, given the popularity of GPU computing.
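
For example, on a hypothetical partition with 20480 GB of memory and 2048 CPU cores, the CPU weight would first be 20480 / 2048 = 10 and then be divided by 10 to give 1.0. With every resource on the partition in use, memory would then bill 20480 and CPU 2048, roughly a 10:1 split (90.9% memory, 9.1% CPU).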

Legacy

This weighting algorithm is currently in use on all clusters not mentioned in the previous section. These clusters will eventually either fold into Nexus or have the modern algorithm introduced.

Resources have fixed floating point billing values.

GPU-capable partitions

Memory is billed at 0.125 per GB, CPU is billed at 1.0 per core, and GPU is billed at 4.0 per card.
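
For example, a job requesting 32 GB of memory, 8 CPU cores, and 2 GPUs on such a partition would have a billing value of 32 × 0.125 + 8 × 1.0 + 2 × 4.0 = 20.0.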

CPU-only partitions

Memory is billed at 0.125 per GB and CPU is billed at 0.1 per core. The lower CPU weighting is done so as to not affect accounts' fair-share priority factors as much when running CPU-only jobs given the popularity of GPU computing.
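
For example, a job requesting 32 GB of memory and 8 CPU cores on such a partition would have a billing value of 32 × 0.125 + 8 × 0.1 = 4.8.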

Nice value

This is a submission argument that you, as the user, can include when submitting your jobs to deprioritize them. Larger values deprioritize a job more. For example,

srun --pty --qos=default --mem 1gb --time=01:00:00 --nice=2 bash

will have lower priority than

srun --pty --qos=default --mem 1gb --time=01:00:00 --nice=1 bash

which will have lower priority than

srun --pty --qos=default --mem 1gb --time=01:00:00 bash

assuming all three jobs were submitted at the same time. You cannot use negative values for this argument.