SLURM/ArrayJobs

Here is an example to get you started with array jobs in SLURM. An array job submits many similar tasks from a single submission script; SLURM gives each task its own value of ${SLURM_ARRAY_TASK_ID} so each task can do slightly different work.

Array computation example job

Save this code to a file called test.py.

import time

print('start at ' + time.strftime('%H:%M:%S'))

print('sleep for 10 seconds ...')
time.sleep(10)

print('stop at ' + time.strftime('%H:%M:%S'))
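
The test.py above does the same thing in every task. In a real array job, each task usually selects its own piece of work from ${SLURM_ARRAY_TASK_ID}, which SLURM exports into the environment of every task. Below is a minimal sketch of that pattern; the input_<N>.txt file names are hypothetical placeholders for whatever your tasks actually process.

import os
import time

# SLURM sets SLURM_ARRAY_TASK_ID for each array task; fall back to 1
# so the script still runs outside of SLURM
task_id = int(os.environ.get('SLURM_ARRAY_TASK_ID', '1'))

# hypothetical per-task input, e.g. input_1.txt for task 1
input_file = 'input_{}.format'.replace('format', str(task_id)) + '.txt'
input_file = 'input_{}.txt'.format(task_id)

print('task {} started at {}'.format(task_id, time.strftime('%H:%M:%S')))
print('this task would process ' + input_file)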

Submission script

Save this to a file called array.sh; you should then be able to submit the job with sbatch array.sh.

#!/bin/bash

#####################
# job-array example #
#####################

#SBATCH --job-name=example
#SBATCH --array=1-16        # submit 16 array tasks (task IDs 1-16)
#SBATCH --time=0-00:05:00   # run for 5 minutes (d-hh:mm:ss)
#SBATCH --mem-per-cpu=500MB # use 500MB per core

# all bash commands must be after all SBATCH directives

# define and create a unique scratch directory for this task
# (each array task gets its own ${SLURM_JOBID})
SCRATCH_DIRECTORY=/scratch0/${USER}/job-array-example/${SLURM_JOBID}
mkdir -p ${SCRATCH_DIRECTORY}
cd ${SCRATCH_DIRECTORY}

cp ${SLURM_SUBMIT_DIR}/test.py ${SCRATCH_DIRECTORY}

# each job will see a different ${SLURM_ARRAY_TASK_ID}
echo "now processing task id:: " ${SLURM_ARRAY_TASK_ID}
python test.py > output_${SLURM_ARRAY_TASK_ID}.txt

# after the job is done we copy our output back to $SLURM_SUBMIT_DIR
cp output_${SLURM_ARRAY_TASK_ID}.txt ${SLURM_SUBMIT_DIR}

# we step out of the scratch directory and remove it
cd ${SLURM_SUBMIT_DIR}
rm -rf ${SCRATCH_DIRECTORY}

# happy end
exit 0
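
Once the job is submitted, each array task shows up in squeue as <jobid>_<taskid>, and when a task finishes its output_${SLURM_ARRAY_TASK_ID}.txt file is copied back to the directory you submitted from. If you need to limit how many tasks run at the same time, add a throttle to the array specification, e.g. #SBATCH --array=1-16%4 runs at most four of the sixteen tasks at once.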