SLURM

From UMIACS
=Slurm Workload Manager=
Slurm is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.


==Documentation==
:[[SLURM/JobSubmission | Submitting Jobs]]
:[[SLURM/JobStatus | Checking Job Status]]
:[[SLURM/ClusterStatus | Checking Cluster Status]]
:[[SLURM/Priority | Understanding Job Priority]]
:[[SLURM/Preemption | Job Preemption Overview]]
:[http://slurm.schedmd.com/documentation.html Official Documentation]
:[http://slurm.schedmd.com/faq.html FAQ]
Related documentation:
:[[Compute/DataLocality | Optimizing Storage Performance]]
:[[VS Code#Cluster_Usage | Using VS Code]]


==Commands==
Below are some of the common commands used in Slurm. Further information on how to use these commands can be found in the documentation linked above. To see all flags available for a command, check the command's manual by running <code>man <COMMAND></code> on the command line.


====[https://slurm.schedmd.com/srun.html srun]====
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.
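For example, a minimal interactive invocation might look like the following (the resource amounts and command are illustrative, not cluster defaults):

```shell
# Run 'hostname' as a single task with 2 CPUs and 4GB of memory for up to 10 minutes
srun --ntasks=1 --cpus-per-task=2 --mem=4gb --time=00:10:00 hostname
```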


====[https://slurm.schedmd.com/salloc.html salloc]====
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node). When salloc successfully obtains the requested allocation, it then runs the command specified by the user. Finally, when the user-specified command is complete, salloc relinquishes the job allocation. If no command is specified, salloc runs the user's default shell.
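As a sketch, the following requests an allocation and then, since no command is given, drops into your default shell within it (the resource amounts are illustrative):

```shell
# Allocate 1 node for 30 minutes; salloc starts your default shell in the allocation
salloc --nodes=1 --time=00:30:00
```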


====[https://slurm.schedmd.com/sbatch.html sbatch]====
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with <code>#SBATCH</code> before any executable commands in the script.
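A minimal batch script might look like the following (the job name, resource amounts, and output path are illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --ntasks=1
#SBATCH --mem=4gb
#SBATCH --time=01:00:00
#SBATCH --output=myjob-%j.out

# The job's actual work goes here
hostname
```

Saving this as <code>myjob.sh</code> and running <code>sbatch myjob.sh</code> submits it to the queue.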


====[https://slurm.schedmd.com/squeue.html squeue]====
squeue views job and job step information for jobs managed by Slurm.
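For example (the job ID shown is illustrative):

```shell
# Show all of your own jobs
squeue --user=$USER

# Show a specific job
squeue --job 12345
```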


====[https://slurm.schedmd.com/scancel.html scancel]====
scancel signals or cancels jobs, job arrays, or job steps. An arbitrary number of jobs or job steps may be signaled using job specification filters or a space-separated list of specific job and/or job step IDs.
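Typical invocations might look like the following (the job ID shown is illustrative):

```shell
# Cancel a specific job
scancel 12345

# Cancel all of your own jobs
scancel --user=$USER
```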


====[https://slurm.schedmd.com/sacct.html sacct]====
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis.  The sacct command displays information on jobs, job steps, status, and exitcodes by default.  You can tailor the output with the use of the <code>--format=</code> option to specify the fields to be shown.
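For example, the following selects a few commonly useful fields (the field list is illustrative; see <code>man sacct</code> for the full set):

```shell
# Show job ID, name, state, elapsed time, and peak memory for your recent jobs
sacct --format=JobID,JobName,State,Elapsed,MaxRSS
```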


====[https://slurm.schedmd.com/sstat.html sstat]====
sstat displays job status information for your analysis. The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM). You can tailor the output with the use of the <code>--fields=</code> option to specify the fields to be shown.


==Modules==
If you are trying to use GNU Modules in a Slurm job, please read the section of our [[Modules]] documentation on non-interactive shell sessions. This also needs to be done if the OS version of the compute node you are scheduled on is different from the OS version of the submission node you are submitting the job from.


==Running Jupyter Notebook on a Compute Node==
The steps to run a Jupyter Notebook from a compute node are listed below.

<h6>Setting up your Python Virtual Environment</h6>
[[PythonVirtualEnv | Create a Python virtual environment]] on the compute node you are assigned and [[PythonVirtualEnv#Activating_the_VirtualEnv | activate it]]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here]. You may also use other environment management systems such as [https://docs.conda.io/en/latest/ Conda] if desired.


<h6>Running Jupyter Notebook</h6>
After you've set up the Python virtual environment, submit a job, activate the environment within the job, and run the following command on the compute node you are assigned:
<pre>
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0
</pre>


This will start running the notebook on port 8889. <b>Note:</b> You must keep this shell window open to be able to connect. If the submission node for the cluster you are using is not accessible via the public internet, you must also be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook once you start the SSH tunnel, so ensure this is the case before starting the tunnel. Then, on your local machine, run
<pre>
ssh -N -f -L localhost:8888:<NODENAME>:8889 <USERNAME>@<SUBMISSIONNODE>.umiacs.umd.edu
</pre>


This will tunnel port 8889 from the compute node to port 8888 on your local machine, using <SUBMISSIONNODE> as an intermediate node. Make sure to replace <USERNAME> with your username, <SUBMISSIONNODE> with the name of the submission node you want to use, and <NODENAME> with the name of the compute node you are assigned. Note that this command will not display any output if the connection is successful due to the included ssh flags. You must also keep this shell window open to be able to connect.
 
For example, assuming your username is <code>username</code> and that you are using the [[Nexus]] cluster, have been [[Nexus#Access | assigned]] the nexusgroup submission nodes, and are assigned compute node tron00.umiacs.umd.edu:
<pre>
ssh -N -f -L localhost:8888:tron00.umiacs.umd.edu:8889 username@nexusgroup.umiacs.umd.edu
</pre>


You can then open a web browser and type in <code>localhost:8888</code> to access the notebook.
Notes:
* Later versions of Jupyter have token authentication enabled by default - you will need to append the <code>/?token=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</code> part of the URL provided by the terminal output after starting the notebook in order to connect if this is the case, e.g. <code>localhost:8888/?token=fcc6bd0f996e7aa89376c33cb34f7b80890502aacc97d98e</code>
* If the port on the compute node mentioned in the example above (8889) is not working, it may be that someone else has already started a process (Jupyter notebook or otherwise) using that specific port number on that specific compute node. The port number can be replaced with any other [https://en.wikipedia.org/wiki/Ephemeral_port ephemeral port] number you'd like, just make sure to change it in both the command you run on the compute node and the ssh command from your local machine.
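One way to check whether a port is already taken before starting the notebook is to list listening sockets on the compute node, for example with the <code>ss</code> tool available on most modern Linux systems (the port number is illustrative):

```shell
# Report whether anything is already listening on port 8889
if ss -tln | grep -q ':8889 '; then
  echo "port 8889 in use"
else
  echo "port 8889 free"
fi
```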


=Quick Guide to translate PBS/TORQUE to Slurm=
[https://en.wikipedia.org/wiki/TORQUE PBS/TORQUE] was the previous workload manager and job submission framework used at UMIACS prior to Slurm's adoption. Below is a quick guide of how to translate some common PBS/TORQUE commands to Slurm ones.
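As a concrete illustration using the translations in the tables below, a PBS batch script's directives map onto Slurm ones like this (the job name and resource amounts are illustrative):

```shell
#!/bin/bash
# Slurm translation of a PBS script that used:
#   #PBS -N myjob
#   #PBS -l nodes=1:ppn=4
#   #PBS -l walltime=01:00:00
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

hostname
```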


{| class="wikitable"
|+ User commands
|-
!
!PBS/TORQUE
!Slurm
|-
!Job submission
|qsub [filename]
|sbatch [filename]
|-
!Job deletion
|qdel [job_id]
|scancel [job_id]
|-
!Job status (by job)
|qstat [job_id]
|squeue --job [job_id]
|-
!Full job status (by job)
|qstat -f [job_id]
|scontrol show job [job_id]
|-
!Job status (by user)
|qstat -u [username]
|squeue --user=[username]
|}

{| class="wikitable"
|+ Environment variables
|-
!
!PBS/TORQUE
!Slurm
|-
!Job ID
|$PBS_JOBID
|$SLURM_JOBID
|-
!Submit Directory
|$PBS_O_WORKDIR
|$SLURM_SUBMIT_DIR
|-
!Node List
|$PBS_NODEFILE
|$SLURM_JOB_NODELIST
|}

{| class="wikitable"
|+ Job specification
|-
!
!PBS/TORQUE
!Slurm
|-
!Script directive
|#PBS
|#SBATCH
|-
!Job Name
| -N [name]
| --job-name=[name] OR -J [name]
|-
!Node Count
| -l nodes=[count]
| --nodes=[min[-max]] OR -N [min[-max]]
|-
!CPU Count
| -l ppn=[count]
| --ntasks-per-node=[count]
|-
!CPUs Per Task
|
| --cpus-per-task=[count]
|-
!Memory Size
| -l mem=[MB]
| --mem=[MB] OR --mem-per-cpu=[MB]
|-
!Wall Clock Limit
| -l walltime=[hh:mm:ss]
| --time=[min] OR --time=[days-hh:mm:ss]
|-
!Node Properties
| -l nodes=4:ppn=8:[property]
| --constraint=[list]
|-
!Standard Output File
| -o [file_name]
| --output=[file_name] OR -o [file_name]
|-
!Standard Error File
| -e [file_name]
| --error=[file_name] OR -e [file_name]
|-
!Combine stdout/stderr
| -j oe (both to stdout)
| (Default if you don't specify --error)
|-
!Job Arrays
| -t [array_spec]
| --array=[array_spec] OR -a [array_spec]
|-
!Delay Job Start
| -a [time]
| --begin=[time]
|}

Latest revision as of 21:25, 23 October 2024