https://github.com/adamouization/relaxation-technique-parallel-computing

:repeat: Relaxation technique using POSIX threads (shared memory configuration) and MPI (distributed memory configuration).


Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.3%) to scientific vocabulary

Keywords

c distributed-memory distributed-systems mpi parallel-programming posix-threads pthreads shared-memory
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: Adamouization
  • License: MIT
  • Language: C
  • Default Branch: master
  • Homepage:
  • Size: 1.68 MB
Statistics
  • Stars: 1
  • Watchers: 3
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
c distributed-memory distributed-systems mpi parallel-programming posix-threads pthreads shared-memory
Created over 7 years ago · Last pushed about 7 years ago
Metadata Files
Readme License

README.md

Relaxation Technique Parallelised in C using Pthreads and MPI

Problem Description

The objective of this assignment is to use low-level primitive parallelism constructs, first on a shared memory architecture and then on a distributed memory architecture, and to analyse how parallel problems scale on such architectures, using C, pthreads and MPI on Balena, a mid-sized cluster with 2,720 CPU cores.

The background is a method called the relaxation technique, a way of solving differential equations: take a square array of values and repeatedly replace each value with the average of its four neighbours, except the boundary values, which remain fixed. This process is repeated until all values settle to within a given precision.
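In serial form, one sweep of this update can be sketched in C as follows. This is an illustrative sketch, not the repository's actual code; the function and variable names are invented:

```c
#include <math.h>
#include <string.h>

/* One pass of the relaxation technique on an n x n grid stored
 * row-major in `grid`: each interior cell of `next` becomes the
 * average of its four neighbours in `grid`; boundary cells are
 * copied over unchanged. Returns the largest change seen, so the
 * caller can stop once it drops below the desired precision. */
double relax_pass(const double *grid, double *next, int n) {
    double max_diff = 0.0;
    memcpy(next, grid, (size_t)(n * n) * sizeof(double)); /* keep boundaries */
    for (int i = 1; i < n - 1; i++) {
        for (int j = 1; j < n - 1; j++) {
            double avg = (grid[(i - 1) * n + j] + grid[(i + 1) * n + j] +
                          grid[i * n + j - 1] + grid[i * n + j + 1]) / 4.0;
            double diff = fabs(avg - grid[i * n + j]);
            if (diff > max_diff)
                max_diff = diff;
            next[i * n + j] = avg;
        }
    }
    return max_diff;
}
```

A driver would call `relax_pass` in a loop, swapping the `grid` and `next` buffers, until the returned maximum change falls below the precision.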

Usage

Shared Memory Architecture (pthreads)

  • Compile using the makefile: make, or gcc main.c array_helpers.c print_helpers.c -o shared_relaxation -pthread -Wall -Wextra -Wconversion
  • Run: ./shared_relaxation <number_of_threads>
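The shared-memory version parallelises the sweep by giving each thread a band of rows. The following is a minimal sketch of that idea, assuming a fixed grid size and thread count; it is not the repository's implementation, and all names are invented:

```c
#include <math.h>
#include <pthread.h>
#include <string.h>

#define N 8
#define NUM_THREADS 2

static double grid[N][N], next_grid[N][N];

typedef struct { int start_row, end_row; double max_diff; } band_t;

/* Each thread relaxes its own band of interior rows, reading from
 * grid and writing to next_grid, and records its largest change. */
static void *relax_band(void *arg) {
    band_t *b = arg;
    b->max_diff = 0.0;
    for (int i = b->start_row; i < b->end_row; i++)
        for (int j = 1; j < N - 1; j++) {
            double avg = (grid[i - 1][j] + grid[i + 1][j] +
                          grid[i][j - 1] + grid[i][j + 1]) / 4.0;
            double d = fabs(avg - grid[i][j]);
            if (d > b->max_diff)
                b->max_diff = d;
            next_grid[i][j] = avg;
        }
    return NULL;
}

/* Run one parallel pass: split the N-2 interior rows across the
 * threads, join them all, and return the overall maximum change. */
double parallel_pass(void) {
    pthread_t tid[NUM_THREADS];
    band_t bands[NUM_THREADS];
    int per = (N - 2) / NUM_THREADS;
    memcpy(next_grid, grid, sizeof grid); /* boundaries stay fixed */
    for (int t = 0; t < NUM_THREADS; t++) {
        bands[t].start_row = 1 + t * per;
        bands[t].end_row = (t == NUM_THREADS - 1) ? N - 1 : 1 + (t + 1) * per;
        pthread_create(&tid[t], NULL, relax_band, &bands[t]);
    }
    double max_diff = 0.0;
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(tid[t], NULL);
        if (bands[t].max_diff > max_diff)
            max_diff = bands[t].max_diff;
    }
    return max_diff;
}
```

Because each thread writes only its own rows of `next_grid` and reads the shared `grid` without modifying it, no locking is needed within a pass; threads only need to synchronise between passes.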

Distributed Memory Architecture (MPI)

  • Compile using the makefile: make or: mpicc -Wall -Wextra -Wconversion main.c -o distributed_relaxation -lm
  • Run: mpirun -np <num_processes> ./distributed_relaxation -d <dimension> -p <precision> -debug <debug mode>

where:

  • -np corresponds to the number of processes;
  • -d corresponds to the dimensions of the square array;
  • -p corresponds to the precision of the relaxation;
  • -debug corresponds to the debug mode (0: only essential information, 1: row allocation logs, 2: process ID logs, 3: initial and final arrays, 4: iteration debugging data).
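In the distributed version, the interior rows of the d × d array have to be divided among the MPI processes (this is what debug mode 1's row allocation logs report). A hypothetical helper showing one common way to split the rows, with the remainder spread over the first few ranks, might look like this; the exact scheme in the repository may differ:

```c
/* Divide the dimension-2 interior rows of a square array across
 * num_procs ranks. Each rank gets an even base share, and the
 * first (interior % num_procs) ranks get one extra row.
 * Illustrative only: names and scheme are assumptions. */
void rows_for_rank(int dimension, int num_procs, int rank,
                   int *first_row, int *row_count) {
    int interior = dimension - 2; /* boundary rows stay fixed */
    int base = interior / num_procs;
    int extra = interior % num_procs;
    *row_count = base + (rank < extra ? 1 : 0);
    *first_row = 1 + rank * base + (rank < extra ? rank : extra);
}
```

Each rank would then relax only its own rows, exchanging its edge rows with neighbouring ranks between iterations.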

Other

Running the shared memory architecture on the Balena cluster using SLURM

  • SSH into Balena: ssh [user_name]@balena.bath.ac.uk
  • cd into the project directory and submit the SLURM job script jobscript.slurm to the queue: sbatch jobscript.slurm
  • Monitor the job in the queue: squeue -u [user_name]
  • View the results in the relaxation.<job_id>.out file using the cat *.out command.
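The repository's own jobscript.slurm is not reproduced here; an illustrative SLURM script for this kind of run might look as follows, where every directive value (job name, task count, time limit) is a placeholder, not the coursework's actual settings:

```shell
#!/bin/bash
# Illustrative SLURM job script; all values below are placeholders.
#SBATCH --job-name=relaxation
#SBATCH --output=relaxation.%j.out
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:15:00

# Run the shared-memory binary with 16 threads.
./shared_relaxation 16
```

The `%j` in the output directive expands to the job ID, which is what produces the relaxation.<job_id>.out files mentioned above.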

Submitting multiple files using a bash script (shared memory testing)

  • ./submit_multiple_batch 1
  • cat *.out
  • Extract the run times from the output by applying the regex \=.(\d+)\..(\d+) to it, e.g. by pasting the output into regex101

Copying files to/from Balena

  • from BUCS to Balena: cp $BUCSHOME/dos/year\ 4/Relaxation-Technique-Parallel-Computing/src/<file> /home/o/aj645/scratch/cw2-distributed-architecture/

  • from Balena to BUCS: cp /home/o/aj645/scratch/cw2-distributed-architecture/<file> $BUCSHOME/dos/year\ 4/Relaxation-Technique-Parallel-Computing

Reports

  • You can read the report on the shared-memory implementation using pthreads here.
  • You can read the report on the distributed-memory implementation using MPI here.

TODO

See TODO.md

License

This project is licensed under the MIT License.

Contact

  • email: adam@jaamour.com
  • website: www.adam.jaamour.com
  • twitter: @Adamouization

Owner

  • Name: Adam Jaamour
  • Login: Adamouization
  • Kind: user
  • Location: United Kingdom
  • Company: @NewDayTechnology

💻 Data Scientist @NewDayTechnology 🧠 MSc AI @ Uni of St Andrews 📓 BSc Computer Science @ Uni of Bath 💼 Former SWE @ Scuderia Alpha Tauri F1 Team

Issues and Pull Requests

Last synced: 12 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0