A multi-node/multi-GPU job uses one or more GPUs from different nodes. This configuration is only available on nodes with Intel processors, which are connected to the InfiniBand network. Some of the software/libraries compatible with this technology are:
Before writing the script, it is essential to highlight that:
In these two examples, you can see how to run an interactive or batch session:
Request a single node or multiple nodes with GPUs:
# Interactive session on a single node
<username>@ohpc:~$ salloc --gres=gpu:1

# Interactive session on 2 nodes (-N)
<username>@ohpc:~$ salloc -N 2 --gres=gpu:1
Then, download the source code to the user home directory:
<username>@node0XX:~$ git clone https://github.com/NVIDIA/nccl-tests.git
After that, load the libraries needed to compile and install the source code; in this case, the dependencies are:
<username>@node0XX:~$ module load foss/2020b
<username>@node0XX:~$ module load NCCL/2.12.12-GCCcore-10.2.0-CUDA-11.4.3
Now you can run an interactive NCCL test; choose the single-node or multinode configuration, depending on your case:
# On a single machine with 1 GPU (-g)
<username>@node0XX:~$ cd nccl-tests
<username>@node0XX:~$ ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 1

# Multinode example with 1 GPU (-g) on each node; remember that you
# need an interactive session with at least 2 nodes
<username>@node0XX:~$ cd nccl-tests
<username>@node0XX:~$ srun ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 1
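The `-b`, `-e`, and `-f` flags control the message-size sweep: the benchmark starts at 8 B, multiplies the size by 2 at each step, and stops at 128 MB. A small sketch (not part of nccl-tests itself) enumerating the sizes the command above will measure:

```python
# Enumerate the message sizes swept by all_reduce_perf with
# -b 8 (begin), -e 128M (end), -f 2 (multiplication factor).
begin, end, factor = 8, 128 * 2**20, 2

sizes = []
size = begin
while size <= end:
    sizes.append(size)
    size *= factor

print(len(sizes))            # one benchmark row per size -> 25
print(sizes[0], sizes[-1])   # 8 134217728
```

Each of these sizes becomes one row of the benchmark output table, from 8 B up to 134217728 B (128 MB).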
In this example, you can see a job for multiple nodes with multiple GPU devices:
Copy this code and save it as nccl-test.job:
#!/bin/bash
#SBATCH --partition=high
#SBATCH --nodes=2       # number of nodes
#SBATCH --ntasks=2      # one task per node
#SBATCH --mem=5g        # memory per node: 5 GB (10 GB in total)
#SBATCH --gres=gpu:2    # number of GPUs per node

# Download the source repository
[ ! -e "nccl-tests" ] && git clone https://github.com/NVIDIA/nccl-tests.git

# Load module libraries
module load foss/2020b
module load NCCL/2.12.12-GCCcore-10.2.0-CUDA-11.4.3

# Total number of GPUs is N*g (nodes * GPUs per task)
srun /home/$USER/nccl-tests/build/all_reduce_perf -b 8 -e 128M -f 2 -g 2
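The script requests 2 nodes with one task each, and each task drives 2 GPUs (`-g 2`), so the test runs with 4 NCCL ranks in total and 10 GB of memory across the allocation. A quick sanity check of that arithmetic (the values below simply mirror the #SBATCH directives above):

```python
# Totals implied by the batch script above.
nodes = 2            # --nodes=2
tasks_per_node = 1   # --ntasks=2 spread over 2 nodes
gpus_per_task = 2    # -g 2 passed to all_reduce_perf
mem_per_node_gb = 5  # --mem=5g

total_ranks = nodes * tasks_per_node * gpus_per_task
total_mem_gb = nodes * mem_per_node_gb
print(total_ranks, total_mem_gb)  # 4 ranks/GPUs, 10 GB
```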
Next, submit the job and verify that the task is running:
<username>@node0XX:~$ sbatch nccl-test.job
Submitted batch job 373653
<username>@node0XX:~$ squeue -u $USER
373653 high nccl-tests test R 1:01 2 node[020-021]

It is running on two nodes with GPUs (node020 and node021) and 10 GB (5 GB + 5 GB) of memory:

<username>@node0XX:~$ scontrol show job 373653
JobId=373653 JobName=nccl-tests.job
   UserId=<username>(XXXX) GroupId=<group>_users(XXXX) MCS_label=N/A
   Priority=6000 Nice=0 Account=info QOS=normal
   JobState=RUNNING Reason=None Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0
   RunTime=00:03:57 TimeLimit=UNLIMITED TimeMin=N/A
   SubmitTime=2023-10-01T09:24:07 EligibleTime=2023-10-01T09:24:07
   StartTime=2023-10-01T09:24:08 EndTime=Unknown Deadline=N/A
   PreemptTime=None SuspendTime=None SecsPreSuspend=0
   Partition=high AllocNode:Sid=node020:1247
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=node[020-021] BatchHost=node020
   NumNodes=2 NumCPUs=2 NumTasks=2 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=2,mem=10G,node=2,gres/gpu=2
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryNode=5G MinTmpDiskNode=0
   Features=intel DelayBoot=00:00:00 Gres=gpu:1 Reservation=(null)
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
   Command=/home/<username>/nccl-tests.job
   WorkDir=/home/<username>
   StdErr=/home/<username>/slurm-373653.out
   StdIn=/dev/null
   StdOut=/home/<username>/slurm-373653.out
   Power=
Finally, open the output file /home/<username>/slurm-373653.out and check the results:
# nThread 1 nGpus 2 minBytes 8 maxBytes 134217728 step: 2(factor) warmup iters: 5 iters: 20 agg iters: 1 validation: 1 graph: 0
#
# Using devices
#  Rank  0 Group  0 Pid 2100859 on node020 device  0 [0x60] Tesla T4
#  Rank  1 Group  0 Pid 2100859 on node020 device  1 [0xdb] Tesla T4
#  Rank  2 Group  0 Pid 2253906 on node021 device  0 [0x60] Tesla T4
#  Rank  3 Group  0 Pid 2253906 on node021 device  1 [0x61] Tesla T4
#
#                                                       out-of-place                       in-place
#       size         count      type   redop    root     time   algbw   busbw #wrong     time   algbw   busbw #wrong
#        (B)    (elements)                               (us)  (GB/s)  (GB/s)            (us)  (GB/s)  (GB/s)
           8             2     float     sum      -1    23.93    0.00    0.00      0    23.25    0.00    0.00      0
          16             4     float     sum      -1    23.76    0.00    0.00      0    23.23    0.00    0.00      0
          32             8     float     sum      -1    24.02    0.00    0.00      0    23.46    0.00    0.00      0
          64            16     float     sum      -1    24.05    0.00    0.00      0    23.69    0.00    0.00      0
         128            32     float     sum      -1    25.98    0.00    0.01      0    23.95    0.01    0.01      0
         256            64     float     sum      -1    25.79    0.01    0.01      0    24.47    0.01    0.02      0
         512           128     float     sum      -1    28.65    0.02    0.03      0    27.27    0.02    0.03      0
        1024           256     float     sum      -1    33.88    0.03    0.05      0    33.28    0.03    0.05      0
        2048           512     float     sum      -1    34.98    0.06    0.09      0    34.13    0.06    0.09      0
        4096          1024     float     sum      -1    36.78    0.11    0.17      0    36.56    0.11    0.17      0
        8192          2048     float     sum      -1    41.07    0.20    0.30      0    45.24    0.18    0.27      0
       16384          4096     float     sum      -1    47.54    0.34    0.52      0    52.81    0.31    0.47      0
       32768          8192     float     sum      -1    61.12    0.54    0.80      0    59.71    0.55    0.82      0
       65536         16384     float     sum      -1    71.45    0.92    1.38      0    71.93    0.91    1.37      0
      131072         32768     float     sum      -1    111.4    1.18    1.76      0    110.2    1.19    1.78      0
      262144         65536     float     sum      -1    154.4    1.70    2.55      0    146.9    1.78    2.68      0
      524288        131072     float     sum      -1    218.0    2.40    3.61      0    216.2    2.43    3.64      0
     1048576        262144     float     sum      -1    424.5    2.47    3.71      0    451.5    2.32    3.48      0
     2097152        524288     float     sum      -1    862.7    2.43    3.65      0    902.1    2.32    3.49      0
     4194304       1048576     float     sum      -1   1826.6    2.30    3.44      0   1855.2    2.26    3.39      0
     8388608       2097152     float     sum      -1   4185.9    2.00    3.01      0   4163.7    2.01    3.02      0
    16777216       4194304     float     sum      -1   6059.3    2.77    4.15      0   6126.3    2.74    4.11      0
    33554432       8388608     float     sum      -1    11459    2.93    4.39      0    11430    2.94    4.40      0
    67108864      16777216     float     sum      -1    21784    3.08    4.62      0    21948    3.06    4.59      0
   134217728      33554432     float     sum      -1    43352    3.10    4.64      0    43285    3.10    4.65      0
# Out of bounds values : 0 OK
# Avg bus bandwidth    : 1.70812
#
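In this table, algbw is simply the message size divided by the elapsed time, and for all_reduce the nccl-tests documentation derives busbw = algbw × 2(n−1)/n, where n is the number of ranks (4 here). A quick sketch reproducing the out-of-place columns of the last row above:

```python
# Reproduce the algbw/busbw columns from the last table row
# (values copied from the out-of-place side of the output).
size_bytes = 134217728  # message size in bytes (128 MB)
time_us = 43352.0       # measured time in microseconds
nranks = 4              # 2 nodes x 2 GPUs per node

algbw = size_bytes / (time_us * 1e-6) / 1e9  # GB/s
busbw = algbw * 2 * (nranks - 1) / nranks    # all_reduce bus bandwidth

print(round(algbw, 2), round(busbw, 2))  # ~3.10 ~4.64, matching the table
```

This is why busbw, not algbw, is the number to compare against the hardware's link bandwidth: it is corrected for how much data the collective actually moves per rank.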