Simple Linux Utility for Resource Management (SLURM)
SLURM is an open-source workload manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
Documentation
Commands
Below are some of the common commands used in Slurm. Further information on how to use these commands can be found in the documentation linked above. To see all the flags available for a command, check its manual page by running man $COMMAND on the command line.
srun
srun runs a parallel job on a cluster managed by Slurm. If necessary, it will first create a resource allocation in which to run the parallel job.
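For example, a minimal sketch (the resource counts and time limit are placeholders, not site defaults):

# Launch "hostname" as 4 tasks spread over 2 nodes; srun creates the
# allocation itself because it is run outside an existing job.
srun --nodes=2 --ntasks=4 --time=00:05:00 hostname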
salloc
salloc allocates a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node). When salloc successfully obtains the requested allocation, it then runs the command specified by the user. Finally, when the user specified command is complete, salloc relinquishes the job allocation. If no command is specified, salloc runs the user's default shell.
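For example (the resource values are placeholders):

# Request 1 node with 8 tasks for 30 minutes; with no command given,
# salloc starts your default shell inside the allocation.
salloc --nodes=1 --ntasks=8 --time=00:30:00
srun hostname   # job steps launched from this shell run on the allocated node
exit            # leaving the shell releases the allocation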
sbatch
sbatch submits a batch script to Slurm. The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input. The batch script may contain options preceded with "#SBATCH" before any executable commands in the script.
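A minimal batch script sketch; the job name, resource values, and the ./my_program executable are placeholders:

#!/bin/bash
#SBATCH --job-name=example        # job name shown by squeue and sacct
#SBATCH --nodes=1                 # number of nodes
#SBATCH --ntasks-per-node=4       # tasks per node
#SBATCH --time=01:00:00           # wall clock limit
#SBATCH --output=example_%j.out   # %j expands to the job ID

srun ./my_program                 # placeholder executable

Submit it with sbatch example.sh (the file name is arbitrary).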
squeue
squeue views job and job step information for jobs managed by Slurm.
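Typical invocations (the job ID 12345 is a made-up example):

squeue                    # all jobs currently queued or running
squeue --user=$USER       # only your own jobs
squeue --job 12345        # a single job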
scancel
scancel signals or cancels jobs, job arrays, or job steps. An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs.
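Typical invocations (the job IDs are made-up examples):

scancel 12345                          # cancel one job
scancel 12345_7                        # cancel one task of a job array
scancel --user=$USER --state=PENDING   # cancel all of your pending jobs
scancel --signal=USR1 12345            # send SIGUSR1 instead of terminating the job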
sacct
sacct displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis. The sacct command displays information on jobs, job steps, status, and exit codes by default. You can tailor the output with the --format= option to specify the fields to be shown.
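For example (the job ID and field list are illustrative):

sacct --jobs=12345 --format=JobID,JobName,State,ExitCode,Elapsed,MaxRSS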
sstat
sstat displays job status information for your analysis. The sstat command displays information pertaining to CPU, Task, Node, Resident Set Size (RSS) and Virtual Memory (VM). You can tailor the output with the use of the --fields= option to specify the fields to be shown.
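For example, for a running job step (the step ID and field list are illustrative):

sstat --jobs=12345.0 --fields=JobID,AveCPU,MaxRSS,MaxVMSize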
Modules
If you are trying to use GNU Modules in a Slurm job, please read the section of our Modules documentation on non-interactive shell sessions.
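As an illustrative sketch only (the module name is a placeholder, and the exact initialization required for non-interactive shells is site-specific; see the Modules documentation referenced above):

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# On some clusters the module command must be initialized in non-interactive
# shells (for example by running bash as a login shell); check the Modules
# documentation referenced above for the site-specific details.
module load gcc          # placeholder module name
gcc --version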
Quick Guide to translate PBS/Torque to SLURM
User commands

|                           | PBS/Torque           | SLURM                       |
|---------------------------|----------------------|-----------------------------|
| Job submission            | qsub [filename]      | sbatch [filename]           |
| Job deletion              | qdel [job_id]        | scancel [job_id]            |
| Job status (by job)       | qstat [job_id]       | squeue --job [job_id]       |
| Full job status (by job)  | qstat -f [job_id]    | scontrol show job [job_id]  |
| Job status (by user)      | qstat -u [username]  | squeue --user=[username]    |
Environment variables

|                   | PBS/Torque      | SLURM                |
|-------------------|-----------------|----------------------|
| Job ID            | $PBS_JOBID      | $SLURM_JOBID         |
| Submit Directory  | $PBS_O_WORKDIR  | $SLURM_SUBMIT_DIR    |
| Node List         | $PBS_NODEFILE   | $SLURM_JOB_NODELIST  |
Job specification

|                         | PBS/Torque                   | SLURM                                      |
|-------------------------|------------------------------|--------------------------------------------|
| Script directive        | #PBS                         | #SBATCH                                    |
| Job Name                | -N [name]                    | --job-name=[name] OR -J [name]             |
| Node Count              | -l nodes=[count]             | --nodes=[min[-max]] OR -N [min[-max]]      |
| CPU Count               | -l ppn=[count]               | --ntasks-per-node=[count]                  |
| CPUs Per Task           |                              | --cpus-per-task=[count]                    |
| Memory Size             | -l mem=[MB]                  | --mem=[MB] OR --mem-per-cpu=[MB]           |
| Wall Clock Limit        | -l walltime=[hh:mm:ss]       | --time=[min] OR --time=[days-hh:mm:ss]     |
| Node Properties         | -l nodes=4:ppn=8:[property]  | --constraint=[list]                        |
| Standard Output File    | -o [file_name]               | --output=[file_name] OR -o [file_name]     |
| Standard Error File     | -e [file_name]               | --error=[file_name] OR -e [file_name]      |
| Combine stdout/stderr   | -j oe (both to stdout)       | (default if you don't specify --error)     |
| Job Arrays              | -t [array_spec]              | --array=[array_spec] OR -a [array_spec]    |
| Delay Job Start         | -a [time]                    | --begin=[time]                             |
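As a worked example of the job specification table, the same request written with both directive styles (the job name, resource values, and file names are placeholders):

# PBS/Torque directives
#PBS -N myjob
#PBS -l nodes=2:ppn=8
#PBS -l walltime=02:00:00
#PBS -o myjob.out
#PBS -e myjob.err

# Equivalent SLURM directives
#SBATCH --job-name=myjob
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=02:00:00
#SBATCH --output=myjob.out
#SBATCH --error=myjob.err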