=Simple Linux Utility for Resource Management (SLURM)=
SLURM is an open-source workload manager designed for Linux clusters of all sizes, and is in broad use across regional and national supercomputing centers. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.


==Documentation==
:[[SLURM/JobSubmission | Submitting Jobs]]
:[[SLURM/JobStatus | Checking Job Status]]
:[[SLURM/ClusterStatus | Checking Cluster Status]]
:[http://slurm.schedmd.com/documentation.html Official Documentation]
:[http://slurm.schedmd.com/faq.html FAQ]


==Commands==
Below are some of the common commands used in Slurm. Further information on how to use these commands is found in the documentation linked above. To see all flags available for a command, please check the command's manual by using <code>man $COMMAND</code> on the command line.


====srun====
srun runs a parallel job on a cluster managed by Slurm.  If necessary, it will first create a resource allocation in which to run the parallel job.
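For example, to run a simple command like <code>hostname</code> as four parallel tasks, with each line of output labeled by its task number:
<pre>
srun -n4 -l hostname
</pre>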


====salloc====
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node).  When salloc successfully obtains the requested allocation, it then runs the command specified by the user.  Finally, when the user specified command is complete, salloc relinquishes the job allocation.  If no command is specified, salloc runs the user's default shell.
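For example, a minimal interactive allocation might look like the following sketch (the node, task, and time values are placeholders; depending on the cluster you may also need to specify a partition, account, or other options):
<pre>
salloc -N 1 -n 4 --time=00:30:00
srun hostname
exit
</pre>
Here salloc requests one node with four tasks for 30 minutes and, since no command is given, starts your default shell under the allocation; srun then launches work on the allocated node, and exiting the shell releases the allocation.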


====sbatch====
sbatch submits a batch script to Slurm.  The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input.  The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.
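For example, a minimal batch script might look like the following sketch (the job name, resource requests, output file, and the command to run are all placeholders):
<pre>
#!/bin/bash
# myjob.sh: minimal example batch script; adjust the placeholders for your job
#SBATCH --job-name=myjob
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --output=myjob.out

hostname
</pre>
If this were saved as <code>myjob.sh</code>, it could be submitted with <code>sbatch myjob.sh</code>.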


====squeue====
squeue views job and job step information for jobs managed by Slurm.
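By default, squeue shows all jobs in all partitions; you can restrict the output to a single user with <code>squeue --user=[username]</code>. Example output:
<pre>
# squeue
JOBID PARTITION  NAME  USER ST  TIME NODES NODELIST(REASON)
65646    batch  chem  mike  R 24:19    2 adev[7-8]
65647    batch  bio  joan  R  0:09    1 adev14
65648    batch  math  phil PD  0:00    6 (Resources)
</pre>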


====scancel====
scancel signals or cancels jobs, job arrays, or job steps.  An arbitrary number of jobs or job steps may be signaled using job specification filters or a space-separated list of specific job and/or job step IDs.

====sacct====
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis. The sacct command displays information on jobs, job steps, status, and exit codes by default. You can tailor the output with the --format= option to specify the fields to be shown.

====sstat====
sstat displays status information for running jobs and job steps. The sstat command displays information pertaining to CPU, task, node, Resident Set Size (RSS), and Virtual Memory (VM) usage. You can tailor the output with the --fields= option to specify the fields to be shown.

====sinfo====
sinfo views partition and node information.  To view the partitions and nodes in the cluster, run the <code>sinfo</code> command.
<pre>
# sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
debug*      up      30:00    2  down* adev[1-2]
debug*      up      30:00    3  idle adev[3-5]
batch        up      30:00    3 down* adev[6,13,15]
batch        up      30:00    3 alloc adev[7-8,14]
batch        up      30:00    4  idle adev[9-12]
</pre>
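====scontrol====
You can get more thorough information on both nodes and partitions through the '''scontrol''' command.  To show more about partitions, run '''scontrol show partition''':
<pre>
# scontrol show partition
PartitionName=debug TotalNodes=5 TotalCPUs=40 RootOnly=NO
   Default=YES Shared=FORCE:4 Priority=1 State=UP
   MaxTime=00:30:00 Hidden=NO
   MinNodes=1 MaxNodes=26 DisableRootJobs=NO AllowGroups=ALL
   Nodes=adev[1-5] NodeIndices=0-4

PartitionName=batch TotalNodes=10 TotalCPUs=80 RootOnly=NO
   Default=NO Shared=FORCE:4 Priority=1 State=UP
   MaxTime=16:00:00 Hidden=NO
   MinNodes=1 MaxNodes=26 DisableRootJobs=NO AllowGroups=ALL
   Nodes=adev[6-15] NodeIndices=5-14
</pre>
To show more about nodes, run '''scontrol show nodes'''.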


==Modules==
If you are trying to use [[Modules | GNU Modules]] in a Slurm job, please read the section of our [[Modules]] documentation on [[Modules#Modules_in_Non-Interactive_Shell_Sessions | non-interactive shell sessions]].


==Running Jupyter Notebook on a Compute Node==
The steps to run a Jupyter Notebook from a compute node are listed below.
<h6>Setting up Python Virtual Environment</h6>
First, follow the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv here] to create a Python virtual environment on the compute node you are assigned. Then, activate it using the steps listed [https://wiki.umiacs.umd.edu/umiacs/index.php/PythonVirtualEnv#Activating_the_VirtualEnv here]. Next, install Jupyter using pip by following the steps [https://jupyter.readthedocs.io/en/latest/install/notebook-classic.html#alternative-for-experienced-python-users-installing-jupyter-with-pip here].
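For reference, the basic sequence looks roughly like the following sketch; the environment path is a placeholder, and the pages linked above are the authoritative UMIACS-specific steps (for example, which Python installation to use):
<pre>
# the environment path below is a placeholder
python3 -m venv ~/envs/jupyter-env
source ~/envs/jupyter-env/bin/activate
pip install jupyter
</pre>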


<h6> Running Jupyter Notebook </h6>
After you've set up the python virtual environment, run the following commands on the compute node you are assigned:
<pre>
jupyter notebook --no-browser --port=8889 --ip=0.0.0.0
</pre>
This will start running the notebook on port 8889. <b>Note:</b> You must keep this shell window open to be able to connect.
Then, on your local machine, run
<pre>
ssh -N -f -L localhost:8888:$(NODENAME):8889 $(USERNAME)@$(SUBMISSIONNODE).umiacs.umd.edu
</pre>
This will tunnel port 8889 from the compute node to port 8888 on your local machine, using the $(SUBMISSIONNODE) as an intermediate node. Make sure to replace $(NODENAME) with the name of the compute node you are assigned, $(USERNAME) with your username, and $(SUBMISSIONNODE) with the name of the submission node you want to use. For example, <code>username@opensub02.umiacs.umd.edu</code>. You can then open a web browser and type in <code>localhost:8888</code> to access the notebook. <b>Note:</b> You must be on a machine connected to the UMIACS network or connected to our [[Network/VPN | VPN]] in order to access the Jupyter notebook.
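For example, if your assigned compute node were (hypothetically) named examplenode00 and your username were username, the tunnel command would be:
<pre>
# examplenode00 is a placeholder for your assigned compute node
ssh -N -f -L localhost:8888:examplenode00:8889 username@opensub02.umiacs.umd.edu
</pre>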
* If the port used on the compute node in the example above (8889) is not working, someone else may have already started a notebook on that port. You can use any other free port instead; just make sure to change it in both the command you run on the compute node and the ssh command you run on your local machine.


=Quick Guide to translate PBS/Torque to SLURM=
Terminology and command-line changes are the biggest differences when coming from Torque/Maui to Slurm; for example, Torque queues are called partitions in Slurm. The tables below map common PBS/Torque commands, environment variables, and job specification options to their Slurm equivalents.

{| class="wikitable"
|+User commands
|-
!
!PBS/Torque
!SLURM
|-
!Job submission
|qsub [filename]
|sbatch [filename]
|-
!Job deletion
|qdel [job_id]
|scancel [job_id]
|-
!Job status (by job)
|qstat [job_id]
|squeue --job [job_id]
|-
!Full job status (by job)
|qstat -f [job_id]
|scontrol show job [job_id]
|-
!Job status (by user)
|qstat -u [username]
|squeue --user=[username]
|}


{| class="wikitable"
|+Environment variables
|-
!
!PBS/Torque
!SLURM
|-
!Job ID
|$PBS_JOBID
|$SLURM_JOBID
|-
!Submit Directory
|$PBS_O_WORKDIR
|$SLURM_SUBMIT_DIR
|-
!Node List
|$PBS_NODEFILE
|$SLURM_JOB_NODELIST
|}


{| class="wikitable"
|+Job specification
|-
!
!PBS/Torque
!SLURM
|-
!Script directive
|#PBS
|#SBATCH
|-
!Job Name
| -N [name]
| --job-name=[name] OR -J [name]
|-
!Node Count
| -l nodes=[count]
| --nodes=[min[-max]] OR -N [min[-max]]
|-
!CPU Count
| -l ppn=[count]
| --ntasks-per-node=[count]
|-
!CPUs Per Task
|
| --cpus-per-task=[count]
|-
!Memory Size
| -l mem=[MB]
| --mem=[MB] OR --mem-per-cpu=[MB]
|-
!Wall Clock Limit
| -l walltime=[hh:mm:ss]
| --time=[min] OR --time=[days-hh:mm:ss]
|-
!Node Properties
| -l nodes=4:ppn=8:[property]
| --constraint=[list]
|-
!Standard Output File
| -o [file_name]
| --output=[file_name] OR -o [file_name]
|-
!Standard Error File
| -e [file_name]
| --error=[file_name] OR -e [file_name]
|-
!Combine stdout/stderr
| -j oe (both to stdout)
|(Default if you don't specify --error)
|-
!Job Arrays
| -t [array_spec]
| --array=[array_spec] OR -a [array_spec]
|-
!Delay Job Start
| -a [time]
| --begin=[time]
|}
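As a worked example of the mappings above, here is a small PBS/Torque batch script and a Slurm equivalent (the job name, resource amounts, file names, and the <code>./run_analysis</code> command are placeholders):
<pre>
#!/bin/bash
# PBS/Torque version (placeholders: job name, resources, output file, command)
#PBS -N myjob
#PBS -l nodes=2:ppn=8
#PBS -l walltime=01:00:00
#PBS -o myjob.out

cd $PBS_O_WORKDIR
./run_analysis
</pre>
<pre>
#!/bin/bash
# Slurm version of the same job (placeholders as above)
#SBATCH --job-name=myjob
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00
#SBATCH --output=myjob.out

cd $SLURM_SUBMIT_DIR
./run_analysis
</pre>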
