mcmlgpu

This repository contains the base code for GPU-accelerated Monte Carlo simulations of light transport in turbid media.

https://github.com/imsy-dkfz/mcmlgpu

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.3%) to scientific vocabulary

Keywords

gpu montecarlo-simulation parallel-computing
Last synced: 4 months ago

Repository

This repository contains the base code for GPU-accelerated Monte Carlo simulations of light transport in turbid media.

Basic Info
  • Host: GitHub
  • Owner: IMSY-DKFZ
  • License: gpl-3.0
  • Language: C++
  • Default Branch: develop
  • Homepage:
  • Size: 3.17 MB
Statistics
  • Stars: 4
  • Watchers: 3
  • Forks: 0
  • Open Issues: 2
  • Releases: 1
Topics
gpu montecarlo-simulation parallel-computing
Created over 1 year ago · Last pushed 6 months ago
Metadata Files
Readme Changelog License Code of conduct Citation

README.md


Monte Carlo Multi Layer accelerated by GPU

This repository contains the base code for GPU-accelerated Monte Carlo simulations of light transport in turbid media. Custom implementations have been added to the original code developed by Erik Alerstam, David Han, and William C. Y. Lo.

This project adds the following features:

  1. Modern build and install rules with CMake.
  2. Custom targeting of compute capabilities for modern GPUs through the -DCUDA_ARCH flag.
  3. Reduced IO operations, which increases speed.
  4. New Docker image.
  5. Easy install and uninstall mechanisms.
  6. Conan packaging enabled.
  7. Computation of penetration depth for each simulation at runtime.
  8. Modern code styling using pre-commit hooks.
  9. Progress bar display for simulations.
  10. Reduced terminal clutter.
  11. No dependency on the deprecated cutil library.

[asciicast demo]

Setup development environment

All you need is a CUDA-capable computer and CMake. You can set up these dependencies by running the following commands from a terminal:

    sudo apt update
    sudo apt install cmake

To develop a new feature, you should first create a new issue on GitHub and start working on the feature by creating a new branch named <issue-number>-task_short_name. Once you have pushed your changes to that branch, you can open a pull request.
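As an example, assuming a hypothetical issue number 42 for a task named add_progress_bar, the branch could be created like this:

    # make sure the local develop branch is up to date
    git checkout develop
    git pull origin develop
    # create the feature branch following the <issue-number>-task_short_name pattern
    git checkout -b 42-add_progress_bar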

Build

You will first need to install some dependencies. Make sure that you have a CUDA-capable computer with the CUDA toolkit installed; you can check this by running nvcc --version from a terminal. If that command reports no error, you can proceed to install the following dependencies:

    sudo apt install cmake git

After installing the dependencies, you can build MCML as follows. Make sure to indicate the correct compute capability for your GPU; you can list the values supported by your toolkit with nvcc --list-gpu-code.

    mkdir build
    cd build
    cmake .. -DCUDA_ARCH=86 -DCMAKE_INSTALL_PREFIX=/usr
    make MCML -j
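If you are unsure which value to pass to -DCUDA_ARCH, one quick way to query the compute capability of the installed GPU is through nvidia-smi (a sketch; the compute_cap query field assumes a reasonably recent NVIDIA driver):

    # prints e.g. 8.6 for an RTX 30-series GPU; drop the dot to get -DCUDA_ARCH=86
    nvidia-smi --query-gpu=compute_cap --format=csv,noheader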

To install or uninstall the application on the system, you can run the following commands. You will need sudo permissions if the path indicated by CMAKE_INSTALL_PREFIX in the previous step is privileged.

    make install
    make uninstall
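If you would rather avoid sudo altogether, a user-writable prefix such as $HOME/.local works as well; this is just standard CMake behaviour, sketched here from within the build directory:

    cmake .. -DCUDA_ARCH=86 -DCMAKE_INSTALL_PREFIX=$HOME/.local
    make MCML -j
    make install   # no sudo needed when the prefix is user-writable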

This application can also be packaged using Conan. To do so, run the following command from the root directory of the repository:

    conan create . issi/stable-cuda11.5-sm86 -o cuda_arch=86
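Consumers could then reference the created package in the usual Conan 1.x way. The package name and version below (mcml/0.0.5) are an assumption for illustration only; the actual reference depends on what the project's conanfile declares:

    # hypothetical package reference; substitute the name/version from the conanfile
    conan install mcml/0.0.5@issi/stable-cuda11.5-sm86 -o mcml:cuda_arch=86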

Running an example

After building MCML, you can run an example to be sure that it is working as intended:

    MCML -i resources/sample.mci -O batch.mco

This should create a file called batch.mco with the following content:

    ID,Specular,Diffuse,Absorbed,Transmittance,Penetration
    MCML_Bat_NA_0_Sim_0_3.00e-07.mco,0.02404,0.0277725,0.948185,0,0.998
    MCML_Bat_NA_0_Sim_0_3.02e-07.mco,0.02404,0.0299027,0.946057,0,0.998
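Since the summary is plain CSV, the per-simulation values can be pulled out with standard shell tools. For instance, this sketch prints the simulation ID together with the diffuse reflectance and penetration depth columns from the header above:

    # columns: 1 = ID, 3 = Diffuse, 6 = Penetration
    awk -F',' 'NR > 1 { print $1, $3, $6 }' batch.mco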

Contributing a feature/bug fix

If you have doubts about how to finish your feature branch, you can always ask for help.

  1. Create an issue on GitHub, if a task does not exist yet.
  2. Assign the task to yourself.
  3. Create a fork of the repository.
  4. Create a new branch. The branch name has to match the following pattern: <issue-number>-<short_description_of_task>
  5. Implement your feature.
  6. Update your feature branch: git checkout <branch_name> && git merge develop.
  7. Create a pull request for your feature and select develop as the destination branch.
  8. The branch will be reviewed and automatically merged if there are no requested changes.

Docker image

We also have a Docker image that you can use for your projects. The image can be built by running the following command in a terminal. Make sure to have Docker (or Docker Compose) installed on your computer. You can append the flag --progress plain to view more details about the build progress. Keep in mind that to run the image you will need the NVIDIA Container Toolkit installed.

    docker build -t mcml:latest .
    docker run --gpus all -v $(pwd)/resources:/data mcml:latest -i /data/sample.mci -O /data/batch.mco -A

Notice that in the example above, the local folder resources is mounted as a volume in the container. Because of this, once the run finishes, the .mco output file you indicated can be found directly in your local resources folder.
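If the container cannot access the GPU, a quick sanity check of the NVIDIA Container Toolkit setup is to run nvidia-smi inside a plain CUDA base image (a sketch; the exact image tag is only an example):

    docker run --rm --gpus all nvidia/cuda:11.5.2-base-ubuntu20.04 nvidia-smi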

Funding

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101002198).


Owner

  • Name: IMSY
  • Login: IMSY-DKFZ
  • Kind: organization
  • Location: Heidelberg, Germany

Division of Intelligent Medical Systems

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Ayala"
  given-names: "Leonardo"
  orcid: "https://orcid.org/0000-0002-3574-2085"
- family-names: "Maier-Hein"
  given-names: "Lena"
  orcid: "https://orcid.org/0000-0003-4910-9368"
title: "mcmlgpu"
version: 0.0.5
url: "https://github.com/IMSY-DKFZ/mcmlgpu"
doi: 10.5281/zenodo.13304909

GitHub Events

Total
  • Issues event: 6
  • Delete event: 2
  • Issue comment event: 5
  • Push event: 7
  • Pull request review comment event: 1
  • Pull request review event: 1
  • Pull request event: 7
  • Create event: 1
Last Year
  • Issues event: 6
  • Delete event: 2
  • Issue comment event: 5
  • Push event: 7
  • Pull request review comment event: 1
  • Pull request review event: 1
  • Pull request event: 7
  • Create event: 1

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 9
  • Total pull requests: 7
  • Average time to close issues: about 2 months
  • Average time to close pull requests: about 1 hour
  • Total issue authors: 2
  • Total pull request authors: 1
  • Average comments per issue: 0.67
  • Average comments per pull request: 0.14
  • Merged pull requests: 7
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 4
  • Pull requests: 4
  • Average time to close issues: 5 months
  • Average time to close pull requests: 8 minutes
  • Issue authors: 2
  • Pull request authors: 1
  • Average comments per issue: 1.25
  • Average comments per pull request: 0.25
  • Merged pull requests: 4
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • leoyala (8)
  • ChenJY-L (1)
Pull Request Authors
  • leoyala (8)
Top Labels
Issue Labels
enhancement (6) bug (3)
Pull Request Labels
bugfix (3) enhancement (1) documentation (1) release (1)