Batch Example: Single Job
This first example shows the execution of a simple process (a single, implicit Step containing a single, implicit Task).
Job Description:
Encoding a video file. The encoding is multi-threaded, using "ffmpeg" with its "-threads" option.
Batch content (job.sh):
#!/bin/bash
# SBATCH options:
#SBATCH --job-name=Encode-Simple # Job Name
#SBATCH --cpus-per-task=4 # Allocation of 4 threads per Task
#SBATCH --mail-type=END # Email notification of the
#SBATCH --mail-user=firstname.lastname@aniti.fr # end of job execution.
#SBATCH --partition=CPU-Nodes
# Treatment
module purge # remove all loaded modules from the environment
module load ffmpeg/0.6.5 # load ffmpeg module version 0.6.5
ffmpeg -i video.mp4 -threads $SLURM_CPUS_PER_TASK [...] video.mkv
Remarks:
- Since multithreading is enabled on the servers of the 'CPU-Nodes' partition, the --cpus-per-task parameter corresponds to the number of threads to allocate per Task, and must be a multiple of two.
- It is not mandatory to specify the memory needed per CPU in the batch. By default, each Job automatically receives a RAM allocation that varies with the partition used:
  - 5120 MB of RAM per requested thread (i.e. 5 GB) on the compute nodes of the CPU-Nodes partition (4 threads therefore grant 5120*4 = 20480 MB = 20 GB)
  - 10240 MB of RAM per requested CPU (i.e. 10 GB) on the compute nodes of the GPU-Nodes partition (4 CPUs therefore grant 10240*4 = 40960 MB = 40 GB)
- The SBATCH "ntasks" option is not needed here because 1 is its default value.
- The partition to use is selected with the SBATCH "partition" option:
#SBATCH --partition=CPU-Nodes
- The "SLURM_CPUS_PER_TASK" environment variable contains the value of the SBATCH "cpus-per-task" option and is passed to ffmpeg so that encoding always uses as many threads as there are CPUs available to the Job. Other Slurm environment variables are available (complete listing):
Environment variable Value (Corresponding #SBATCH option)
--------------------- ---------------------------------------------------------------------------------------------
SLURM_JOB_PARTITION Partition used (--partition)
SLURM_JOB_NAME Job name (Warning, unlike the output/error options, the %j%N... variables are not replaced!)
SLURM_NTASKS Number of tasks (--ntasks)
SLURM_CPUS_PER_TASK Number of CPUs per task (--cpus-per-task)
SLURM_JOB_NUM_NODES Number of nodes requested/deduced (--nodes)
SLURM_JOB_NODELIST List of used nodes (--nodelist)
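As a quick illustration, the variables above can be read from the batch like any other shell variable. In this sketch, the ${VAR:-default} fallbacks are illustrative values matching this example's #SBATCH options, so the script also runs outside of a Job; inside a Job, Slurm sets the real values.

```shell
#!/bin/bash
# Sketch: print the Slurm environment variables from the table above.
# Fallback defaults (after ":-") are only for running outside of a Job.
echo "Partition     : ${SLURM_JOB_PARTITION:-CPU-Nodes}"
echo "Job name      : ${SLURM_JOB_NAME:-Encode-Simple}"
echo "Tasks         : ${SLURM_NTASKS:-1}"
echo "CPUs per task : ${SLURM_CPUS_PER_TASK:-4}"
echo "Nodes         : ${SLURM_JOB_NUM_NODES:-1}"
```

This pattern is handy for logging the effective allocation at the top of a batch, which makes the Job's output file self-describing.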
Batch execution:
The batch is submitted to Slurm via the "sbatch" command which, barring an error or refusal, creates a Job and places it in the queue.
[firstname.lastname@cr-login-1 ~]# sbatch job.sh
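On acceptance, sbatch prints the ID of the newly created Job; that ID can then be used to follow the Job in the queue with "squeue". A sketch of a typical session (the Job ID 123456 is illustrative):

```
[firstname.lastname@cr-login-1 ~]# sbatch job.sh
Submitted batch job 123456
[firstname.lastname@cr-login-1 ~]# squeue --job 123456
```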