
hello-mpi - A basic "Hello world" example to output text to console from nodes over a network using MPI


A cluster at IIIT has four SLURM nodes. We want to run one MPI process on each node, with each process running 32 OpenMP threads. In the future, such a setup would allow us to run distributed algorithms that use each node's memory efficiently and minimize communication cost (by keeping most communication within the same node). Output is saved in gist. Technical help from Semparithi Aravindan.

Note: You can just copy main.sh to your system and run it. For the code, refer to main.cu.
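For the actual code, main.cu is authoritative; as a rough sketch of what a hybrid MPI + OpenMP "Hello world" of this shape looks like (file and variable names here are illustrative, not the repository's code):

```cpp
// Illustrative hybrid MPI + OpenMP hello world; see main.cu for the real code.
// Each MPI process reports its host name and thread count, then every
// OpenMP thread prints a greeting.
#include <cstdio>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  char name[MPI_MAX_PROCESSOR_NAME];
  int len = 0;
  MPI_Get_processor_name(name, &len);
  printf("P%02d: NAME=%s\n", rank, name);
  printf("P%02d: OMP_NUM_THREADS=%d\n", rank, omp_get_max_threads());

  // One parallel region per process; the thread count is taken from the
  // OMP_NUM_THREADS environment variable (32 in this setup).
  #pragma omp parallel
  printf("P%02d.T%02d: Hello MPI\n", rank, omp_get_thread_num());

  MPI_Finalize();
  return 0;
}
```

Compile with something like `mpic++ -fopenmp hello.cpp -o hello` and launch via `mpirun` or `srun`; this sketch assumes an MPI installation and an OpenMP-capable compiler are available.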


```bash
$ scl enable gcc-toolset-11 bash
$ sbatch main.sh

==========================================
SLURM_JOB_ID = 3373
SLURM_NODELIST = node[01-04]
SLURM_JOB_GPUS =
==========================================
Cloning into 'hello-mpi'...
[node01.local:2180262] MCW rank 0 is not bound (or bound to all available processors)
[node02.local:3790641] MCW rank 1 is not bound (or bound to all available processors)
[node04.local:3758212] MCW rank 3 is not bound (or bound to all available processors)
[node03.local:3287974] MCW rank 2 is not bound (or bound to all available processors)
P00: NAME=node01.local
P00: OMP_NUM_THREADS=32
P02: NAME=node03.local
P02: OMP_NUM_THREADS=32
P03: NAME=node04.local
P03: OMP_NUM_THREADS=32
P01: NAME=node02.local
P01: OMP_NUM_THREADS=32
P00.T00: Hello MPI
P00.T24: Hello MPI
P00.T16: Hello MPI
P00.T26: Hello MPI
P00.T05: Hello MPI
P00.T29: Hello MPI
P00.T22: Hello MPI
P00.T06: Hello MPI
P00.T17: Hello MPI
P00.T23: Hello MPI
P00.T25: Hello MPI
P00.T13: Hello MPI
P00.T01: Hello MPI
P00.T09: Hello MPI
P00.T03: Hello MPI
P00.T02: Hello MPI
P00.T31: Hello MPI
P03.T00: Hello MPI
P03.T24: Hello MPI
P03.T05: Hello MPI
P03.T21: Hello MPI
P03.T04: Hello MPI
...
```
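The job script submitted above is main.sh in the repository; a minimal sketch of what such a SLURM batch script could look like for this setup (directive values, module/compiler names, and file names are assumptions, not the repository's actual script):

```shell
#!/usr/bin/env bash
#SBATCH --job-name=hello-mpi
#SBATCH --nodes=4              # four nodes, as in the output above
#SBATCH --ntasks-per-node=1    # one MPI process per node
#SBATCH --cpus-per-task=32     # 32 cores for the OpenMP threads

# Illustrative sketch; see main.sh in the repository for the actual script.
export OMP_NUM_THREADS=32

# Assumed build step and binary name for this example.
mpic++ -fopenmp hello.cpp -o hello
srun ./hello
```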



Published by wolfram77 over 2 years ago