SLURM/Priority
[[SLURM]] at UMIACS is configured to prioritize jobs based on a number of factors, termed [https://slurm.schedmd.com/priority_multifactor.html multifactor priority] in SLURM. Each job submitted to the scheduler is assigned a priority value, which can be viewed in the output of <code>scontrol show job <jobid></code>.


Example:
<pre>
$ scontrol show job 1
JobId=1 JobName=bash
   UserId=username(13337) GroupId=username(13337) MCS_label=N/A
   Priority=2000841 Nice=0 Account=nexus QOS=default
...
</pre>
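
To compare priority values across many pending jobs at once, <code>squeue</code> can print them directly. A minimal sketch (the jobs and values shown are illustrative; <code>%Q</code> is squeue's format specifier for the integer priority value):
<pre>
$ squeue --state=PD -o "%.18i %.9P %.8u %.10Q"
             JOBID PARTITION     USER   PRIORITY
                 2     nexus username    2000841
                 3     nexus username    2000840
</pre>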


==Pending Jobs==
If no compute node in the partition you submit your job to has enough free resources to start your job immediately, your job will remain in the Pending state with the listed reason <tt>(Resources)</tt>. If another job is already pending with this reason in a partition and you submit a job to the same partition that is assigned a lower priority value than that pending job, your job will instead remain in the Pending state with reason <tt>(Priority)</tt>. If multiple jobs are pending and yours is not the highest priority among them, the scheduler will only start your job if doing so would not push back the start times of any higher priority jobs in the same partition.


Lowering some combination of the resources you are requesting and/or the time limit may allow submitted jobs to start sooner (or instantly) during times when a partition is under resource pressure. The command <code>squeue -j <jobid> --start</code> can be used to provide a time estimate for when your job will start, where <jobid> is the job ID you receive from either srun or sbatch. This estimate is subject to change if other users' jobs end sooner than expected or if more jobs are submitted.
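
The output will look something like the following (job, partition, and projected start time here are illustrative, and exact columns vary by SLURM version):
<pre>
$ squeue -j 12345 --start
             JOBID PARTITION     NAME     USER ST          START_TIME  NODES SCHEDNODES NODELIST(REASON)
             12345     nexus     bash username PD 2024-05-01T13:45:00      1     (null) (Priority)
</pre>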
 
You can use the command alias <code>[[SLURM/JobSubmission#show_available_nodes | show_available_nodes]]</code> with a variety of different submission arguments to get a better idea of what jobs may be able to start sooner, but the output of this command alias is not definitive, for reasons mentioned in the footnotes on the linked page.


==Priority Factors==
The priority factors in use at UMIACS are, from most-heavily to least-heavily weighted:
* Partition job was submitted to
* Fair-share of resources within SLURM account
* Age of job, i.e., time spent waiting to run in the queue
* Association/SLURM account being used
* "Nice" value that job was submitted with


===Partition===
The partitions whose names are or are prefixed with <code>scavenger</code> on our clusters are always in a lower priority tier and always have lower priority factors for their jobs than all other partitions on that cluster. As mentioned in other UMIACS cluster-specific documentation, jobs submitted to these partitions are also [https://slurm.schedmd.com/preempt.html preemptable]. These two design choices give the partitions their names; jobs submitted to <code>scavenger</code>-named or -prefixed partitions "scavenge" for available resources on the cluster rather than consume dedicated resources, and are interrupted by jobs asking to consume dedicated resources.
 
On [[Nexus]], labs/centers may also have their own scavenger partitions, i.e., <code><labname>-scavenger</code>, if the faculty for the lab/center have decided upon some sort of limit on jobs, such as number of simultaneous jobs, number of actively consumed billing resources, etc., in their non-scavenger partitions. These lab/center scavenger partitions allow for more jobs to be run by members of that lab/center on that lab's/center's nodes only, but jobs on these partitions are preemptable by jobs in that lab's/center's non-scavenger partitions and/or account-specific partitions, if any account-specific partitions containing a given node exist. Jobs submitted to lab/center scavenger partitions will preempt jobs submitted to the institute-wide scavenger partitions (running on nodes that are also in those lab/center scavenger partitions).


In decreasing order of priority (highest first), our priority tiers for partitions are:
# Priority access account-specific partitions
# Account-specific partitions
# Lab/center-specific and institute-wide non-"scavenger" named partitions
# Lab/center-specific "scavenger" named partitions
# Institute-wide "scavenger" named partitions


A job in a specific priority tier will never have a higher priority value than any job in a higher priority tier. Corresponding to the above tiers, the priority values you will see for jobs in each tier are:
# >= 4000000
# 3000000 to 3999999
# 2000000 to 2999999
# 1000000 to 1999999
# < 1000000


As such, '''jobs on specific nodes in some non-"scavenger" named partitions may also be subject to preemption''' based on these priority tiers. Generally speaking, though, most nodes are only in one partition in one of the first three (non-"scavenger") priority tiers, plus the institute-wide "scavenger" partition and, if one exists for the lab/center a given node is part of, that lab's/center's "scavenger" partition.
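
You can check which priority tier a partition is in with <code>scontrol</code>; <code>PriorityTier</code> holds the tier and <code>PriorityJobFactor</code> the partition's priority factor (the values below are illustrative, not the actual settings for any of our partitions):
<pre>
$ scontrol show partition scavenger | grep -o 'Priority[A-Za-z]*=[0-9]*'
PriorityJobFactor=1
PriorityTier=1
</pre>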


===Fair-share===
The more resources your jobs have already consumed within an account, the lower the priority factor your future jobs will have compared to jobs from other users in the same account who have used fewer resources (so as to "fair-share" with other users). Additionally, if there are multiple accounts that can submit to a partition, and the sum of resources used by all users' jobs within account A is greater than the sum of resources used by all users' jobs within account B, all future jobs from users in account A will have a lower priority factor than all future jobs from users in account B. (In other words, fair-share is hierarchical.)


You can view the various fair-share statistics with the command <code>sshare -l</code>. It will show your specific FairShare values (always between 0.0 and 1.0) within accounts that you have access to. You can also view other accounts' Level Fairshare (LevelFS).
<pre>
Account                    User  RawShares  NormShares    RawUsage   NormUsage  EffectvUsage  FairShare    LevelFS                    GrpTRESMins                    TRESRunMins
-------------------- ---------- ---------- ----------- ----------- ----------- ------------- ---------- ---------- ------------------------------ ------------------------------
root                                          0.000000 68444174744                  1.000000                                                      cpu=4797787,mem=70530109515,e+
 cbcb                                    1    0.028571  4454658377    0.065046      0.065046              0.439246                                cpu=452139,mem=22276633804,en+
 class                                   1    0.028571   255617290    0.003733      0.003733              7.652841                                cpu=7021,mem=74554606,energy=+
 clip                                    1    0.028571  3057933838    0.044674      0.044674              0.639549                                cpu=33214,mem=2744443460,ener+
 cml                                     1    0.028571    66866114    0.000975      0.000975             29.299389                                cpu=1796,mem=29426756,energy=+
 gamma                                   1    0.028571  2609474948    0.038129      0.038129              0.749334                                cpu=34089,mem=360373862,energ+
 mbrc                                    1    0.028571    73411964    0.001073      0.001073             26.635560                                cpu=1195,mem=4896358,energy=0+
 mc2                                     1    0.028571     2682557    0.000039      0.000039            728.919551                                cpu=0,mem=0,energy=0,node=0,b+
 nexus                                   1    0.028571  5472794067    0.079964      0.079964              0.357302                                cpu=278464,mem=3250599000,ene+
  nexus                username          1    0.000835       69666    0.000001      0.000021   0.457407  37.435501                                cpu=0,mem=0,energy=0,node=0,b+
 oasis                                   1    0.028571      330030    0.000005      0.000005            5.9248e+03                                cpu=0,mem=0,energy=0,node=0,b+
 quics                                   1    0.028571           4    0.000000      0.000000            4.1683e+08                                cpu=0,mem=0,energy=0,node=0,b+
 scavenger                               1    0.028571 40888195964    0.597419      0.597419              0.047825                                cpu=3142204,mem=29902903931,e+
  scavenger            username          1    0.000835         171    0.000000      0.000000   0.033975 9.8885e+04                                cpu=0,mem=0,energy=0,node=0,b+
 vulcan                                  1    0.028571  1247236491    0.018224      0.018224              1.567761                                cpu=147273,mem=1161243818,ene+
</pre>


The actual resource billing weights for the three main resources (memory per GB, CPU cores, and number of GPUs if applicable) are per-partition and can be viewed in the <code>TRESBillingWeights</code> line in the output of <code>scontrol show partition</code>. The <code>billing</code> value for a job is the sum, over each resource the job has requested, of the amount requested multiplied by that resource's weight. This value is then multiplied by the amount of time the job has run in seconds to get the amount it contributes to the RawUsage for the association within the account it is running under.
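
For example (the weights below are illustrative, not the actual values for any particular partition):
<pre>
$ scontrol show partition nexus | grep -o 'TRESBillingWeights=[^ ]*'
TRESBillingWeights=CPU=2.0,Mem=1.0G,GRES/gpu=64.0
</pre>
Under these hypothetical weights, a job requesting 4 CPU cores, 32GB of memory, and 1 GPU would have a <code>billing</code> value of 4*2.0 + 32*1.0 + 1*64.0 = 104, and each hour it runs would add 104 * 3600 = 374400 to the RawUsage of the association it runs under.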
 
====Algorithm====
The algorithm we use for resource weightings differs depending on whether there are any GPUs in a partition, and is as follows:


=====GPU partitions=====
Each resource (memory/CPU/GPU) is given a weighting value such that their relative billings to each other within the partition are equal (33.33% each). Memory is typically the most abundant resource by unit (weighting value of 1.0 per GB) and the CPU/GPU values are adjusted accordingly.


Different GPU types may also be weighted differently within the GPU relative billing. A baseline GPU type is first chosen. All GPUs of that type and of other types that have lower FP32 performance (in [https://en.wikipedia.org/wiki/FLOPS TFLOPS]) are given a weighting factor of 1.0. GPU types with higher FP32 performance than the baseline GPU are given a weighting factor calculated by dividing their FP32 performance by the baseline GPU's FP32 performance. The weighting value for each GPU type is then determined by normalizing the weighted sum of all GPU cards (each card counted at its type's weighting factor) against the relative billing percentage for GPUs (33.33%).


The current baseline GPU is the [https://www.nvidia.com/en-us/design-visualization/rtx-a4000/ NVIDIA RTX A4000].
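
A worked example of the normalization, using a hypothetical partition (card counts and rounded TFLOPS figures are for illustration only):
<pre>
# Hypothetical partition: 8x RTX A4000 (~19.2 FP32 TFLOPS) and 4x RTX A6000 (~38.7 FP32 TFLOPS)
# A4000 weighting factor: 1.0 (baseline)
# A6000 weighting factor: 38.7 / 19.2 ~= 2.0
# Weighted GPU count: 8*1.0 + 4*2.0 = 16
# If the GPU share of the partition's total billing is B (33.33% of the total),
# each A4000 bills B * 1.0/16 and each A6000 bills B * 2.0/16.
</pre>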


=====CPU-only partitions=====
Each resource (memory/CPU) is first given a weighting value such that their relative billings to each other within the partition are equal (50% each). Memory is typically the most abundant resource by unit (weighting value of 1.0 per GB) and the CPU value is adjusted accordingly. The final CPU weight value is then divided by 10, which translates to roughly 90.9% of the billing weight being for memory and 9.1% being for CPU. The CPU value is divided down so as to not affect accounts' fair-share priority factors as much when running jobs in CPU-only partitions, given the popularity of GPGPU computing.
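
A worked example with a hypothetical CPU-only partition (node sizes are for illustration only):
<pre>
# Hypothetical partition: nodes with 32 cores and 256GB memory each
# Memory weight: 1.0 per GB -> 256 * 1.0 = 256.0 billing for memory
# CPU weight for an even 50/50 split: 256 / 32 = 8.0 per core
# Final CPU weight after dividing by 10: 0.8 -> 32 * 0.8 = 25.6 billing for CPU
# Resulting split: 256.0 / 281.6 ~= 90.9% memory, 25.6 / 281.6 ~= 9.1% CPU
</pre>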


===Age===
The longer a job is eligible to run but cannot, whether due to resources being unavailable or to it having a lower priority value than one or more other jobs, the higher the job's priority becomes as it continues to wait in the queue. This is the only priority modifier that can change a job's priority value once it has been submitted, and it reaches its limit after 7 days.


Jobs' age priority factors on our clusters are recalculated every 5 minutes.


===Association===
Some lab/center-specific SLURM accounts have priority values directly attached to them. Jobs run under these accounts gain that many extra points of priority.
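
You may be able to check whether an account has a priority value attached with <code>sacctmgr</code> (a sketch; the account name below is illustrative, and the Priority column is blank if no value is set):
<pre>
$ sacctmgr show assoc where account=vulcan format=Account,User,Priority
</pre>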


===Nice value===
This is a submission argument that you as the user can include when submitting your jobs to deprioritize them. Larger values will deprioritize jobs more, e.g.,
<pre>srun --pty --nice=2 bash</pre>
will have lower priority than
<pre>srun --pty --nice=1 bash</pre>
which will have lower priority than
<pre>srun --pty bash</pre>
assuming all three jobs were submitted at the same time. You cannot use negative values for this argument.
Because this value is absolute, if you want to use it, we would recommend using only small numbers (one or two digits). Larger numbers may impact your job's ability to run at all as a result of the other factors at play.
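
If you decide you want to deprioritize a job that is already pending, <code>scontrol</code> can also adjust its nice value after submission (the job ID below is illustrative; regular users cannot lower a job's nice value below zero):
<pre>
$ scontrol update JobId=12345 Nice=5
</pre>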
