In this section we provide a set of example scripts that you can adapt to your research needs. They are all available in the /soft/slurm_templates directory, which is only accessible from the compute nodes. To run a script, first copy it to your home folder.
cocana@node001:~$ ls -l /soft/slurm_templates/
total 8
-rw-r--r-- 1 root root 579 Apr 5 12:06 01-single_core_job.sh
-rw-r--r-- 1 root root 691 Apr 5 12:15 02-multicore-intel_job.sh
-rw-r--r-- 1 root root 664 Apr 5 12:19 03-multiprocessor-intel_job.sh
-rw-r--r-- 1 root root 1097 Apr 5 12:25 04-mpi_job.sh
-rw-r--r-- 1 root root 918 Apr 5 12:42 05-hybrid_openmp_mpi_job.sh
-rw-r--r-- 1 root root 519 Apr 5 13:16 06-array_job.sh
-rw-r--r-- 1 root root 774 Apr 5 13:42 07-gpu_job.sh
drwxr-xr-x 3 root root 14 Apr 5 13:22 bin
drwxr-xr-x 2 root root 0 Apr 5 13:45 output_examples
Depending on its resource consumption, a simulation may need more than one processor. In the batch script you can specify how many cores the job requests. If the simulation uses more than one core, add the --cpus-per-task (or -c) parameter with the number of CPU cores used per task; without this option, the controller allocates one core per task. Remember to also specify the number of tasks with the --ntasks (or -n) parameter. A minimal sketch follows the table below.
Single core:
    #SBATCH --ntasks=X                     where X >= 1

Multi-core:
    #SBATCH --ntasks=X                     where X >= 1
    #SBATCH --cpus-per-task=Y              where Y > 1
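As a concrete illustration, here is a minimal multi-core job script in the spirit of 02-multicore-intel_job.sh (the job name, output file name and the program my_threaded_app are placeholders; adapt them to your case):

#!/bin/bash
#SBATCH --job-name=multicore_test     # name shown by squeue
#SBATCH --ntasks=1                    # a single task...
#SBATCH --cpus-per-task=4             # ...using 4 CPU cores
#SBATCH --output=multicore_%j.out     # %j expands to the job ID

# Run a multithreaded program on the allocated cores
./my_threaded_app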
In some cases you may need to run your job on a specific architecture, AMD or Intel. This can be selected with the --constraint (or -C) parameter.
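For example (the feature names such as intel or amd are defined by the cluster administrators; check which features exist on your cluster, e.g. with sinfo, before relying on them):

#SBATCH --constraint=intel    # run only on nodes tagged with the "intel" feature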
For more information, see the 01-single_core_job.sh and 02-multicore-intel_job.sh files.
Different types of parallelism are available: OpenMP, OpenMPI, or a hybrid solution combining both. On the one hand, you will use OpenMP for parallelism within a multi-core node; since it is a multithreading implementation, the OMP_NUM_THREADS variable has to be defined. On the other hand, if the parallelism spans several nodes, you will use OpenMPI. In that case the sbatch script must specify the number of nodes, the number of tasks per node and per socket, and, with the --distribution=cyclic:cyclic parameter, the tasks are distributed in a round-robin fashion. It is essential to load the OpenMPI module. A hybrid sketch follows the table below.
OpenMP:
    #SBATCH --cpus-per-task=X              where X > 1
    export OMP_NUM_THREADS=X               where X > 1

OpenMPI:
    #SBATCH --ntasks=X                     where X >= 1
    #SBATCH --cpus-per-task=Y              where Y > 1
    #SBATCH --nodes=Z                      where Z >= 2
    #SBATCH --ntasks-per-node=W            where W > 1
    #SBATCH --ntasks-per-socket=U          where U > 1
    #SBATCH --distribution=cyclic:cyclic
    module load OpenMPI/4.1.2-GCC-10.2.0

Hybrid:
    combine both options
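As a sketch, a hybrid OpenMP/MPI job could look like the following (the node and task counts are example values, and my_hybrid_app is a placeholder for your MPI+OpenMP executable):

#!/bin/bash
#SBATCH --job-name=hybrid_test
#SBATCH --nodes=2                      # spread the job across 2 nodes
#SBATCH --ntasks-per-node=2            # 2 MPI tasks on each node
#SBATCH --cpus-per-task=4              # 4 OpenMP threads per MPI task
#SBATCH --distribution=cyclic:cyclic   # round-robin task placement
#SBATCH --output=hybrid_%j.out

module load OpenMPI/4.1.2-GCC-10.2.0

# One OpenMP thread per allocated core
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# srun launches one MPI process per task
srun ./my_hybrid_app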
For more information, see the 03-multiprocessor-intel_job.sh, 04-mpi_job.sh and 05-hybrid_openmp_mpi_job.sh files.
Using a job array, you can execute multiple instances of the same script with the same resource parameters; each instance receives a different index through the SLURM_ARRAY_TASK_ID environment variable. In your sbatch script, you have to specify the --array option, as sketched after the table below.
Array:
    #SBATCH --array=1-X                    where X > 1
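A minimal array sketch (my_program and its input file naming scheme are assumed for illustration):

#!/bin/bash
#SBATCH --job-name=array_test
#SBATCH --array=1-10                  # 10 independent instances
#SBATCH --output=array_%A_%a.out      # %A = array job ID, %a = task index

# Each instance processes its own input file, selected by the task index
./my_program input_${SLURM_ARRAY_TASK_ID}.dat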
For more information, see the 06-array_job.sh file.
To reserve a GPU, it is essential to use the --gres parameter and to load the CUDA module; a sketch follows the table below.
GPU:
    #SBATCH --gres=gpu:1
    module load CUDA/11.4.3
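Put together, a GPU job could be sketched as follows (my_cuda_app is a placeholder for your CUDA executable):

#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1             # reserve one GPU on the node
#SBATCH --output=gpu_%j.out

module load CUDA/11.4.3

# SLURM typically exposes the allocated GPU via CUDA_VISIBLE_DEVICES
./my_cuda_app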
For more information, see the 07-gpu_job.sh file.
The official web page listing all available sbatch parameters is: https://slurm.schedmd.com/sbatch.html