https://github.com/choropent/3dseg
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 5 DOI reference(s) in README
- ✓ Academic publication links: links to zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (17.7%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: choROPeNt
- Language: Python
- Default Branch: main
- Size: 16.6 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
🩻 3Dseg for CT-Data
This repository builds upon the original pytorch-3dunet implementation by Wolny et al. We extended the codebase by adding functionality and integrating additional loss functions tailored for multi-class segmentation of textile reinforcements in low-resolution CT data.
The corresponding paper and preprint can be found here, and the dataset is available on Zenodo as *.h5 and *.nrrd files. Only the *.h5 files can be used for training. All other data mentioned in the paper or preprint can be requested via mail.

💿 Installation
🔥 PyTorch Compatibility
This repository is built with PyTorch, a Python-based, GPU-accelerated deep learning library. It leverages the CUDA toolkit for efficient computation on NVIDIA GPUs.
⚠️ Note: PyTorch’s Metal backend (for Apple M1/M2 chips) currently supports only up to 4D tensors. This means the 5D inputs required for 3D convolutions (shape [batch, channel, depth, height, width]) are not supported on Metal GPU devices. Running on CPU is still possible but not recommended.
We strongly recommend using an NVIDIA GPU and installing the appropriate CUDA drivers for full functionality and performance.
📦 Installation Steps
- Clone the repository and navigate to it in your terminal.
```bash
git clone https://github.com/choROPeNt/3dseg.git
cd 3dseg
```

Then run:

```bash
python -m pip install -e .
```
This installs the 3dseg Python package via pip into the currently active virtual environment. For how to set up a virtual environment, please refer to the virtual environment instructions below.
🧠 HPC
If you are using the High Performance Computing (HPC) cluster of TU Dresden, we recommend using one of the GPU clusters such as Alpha (NVIDIA A100 SXM 40 GB) or Capella (NVIDIA H100). First, allocate some resources, e.g., on Alpha:
```bash
srun -p alpha -N 1 -t 01:00:00 -c 8 --mem=16G --gres=gpu:1 --pty /bin/bash -l
```
You can use the following module setup (adjust as needed for your cluster’s module system):
```bash
ml release/24.04 GCC/12.3.0 OpenMPI/4.1.5 PyTorch-bundle/2.1.2-CUDA-12.1.1
```
Afterwards, create a new virtual environment in the repository directory:
```bash
python -m venv --system-site-packages .venv
```
It is important to set the `--system-site-packages` flag; otherwise you will not have access to the prebuilt PyTorch package provided by the module system (the recommended setup).
Activate the environment via:

```bash
source .venv/bin/activate
```
🏋️♂️ Training
Model training is initiated using the train.py script and a corresponding YAML configuration file:
```bash
python scripts/train.py --config=<path-to-config-yml>
```
The configuration file specifies model architecture, dataset paths, training hyperparameters, logging, and checkpointing options.
Example configurations can be found in the configs folder. Each config file contains inline comments or is self-explanatory with regard to most parameters such as batch size, learning rate, data augmentation, loss functions, and optimizer settings.
During training, checkpoints are saved periodically, and training metrics are logged for visualization (e.g., via TensorBoard or custom loggers).
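As an illustration, a minimal training configuration might look like the sketch below. All key names are hypothetical and loosely follow the upstream pytorch-3dunet config layout; they may differ in this fork, so the files in the configs folder remain the authoritative reference.

```yaml
# Hypothetical config sketch - consult configs/ for real examples.
model:
  name: UNet3D
  in_channels: 1
  out_channels: 4        # number of segmentation classes
  f_maps: 32
loss:
  name: CrossEntropyLoss
optimizer:
  learning_rate: 0.0002
trainer:
  checkpoint_dir: checkpoints/
  max_num_epochs: 100
loaders:
  batch_size: 1
  train:
    file_paths:
      - data/train.h5
```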
🤖 Prediction
To run inference using a trained model, use:
```bash
python scripts/predict.py --config=<path-to-config-yml>
```
This will load the model from the checkpoint defined in the config file and perform prediction on the specified input data.
Please note that using a padding scheme (e.g. mirror padding) is recommended for better predictions at the volume edges.
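To illustrate the idea, mirror padding can be sketched with NumPy's reflect mode. The function name and the `halo` parameter below are hypothetical; in practice the padding option is set in the prediction config.

```python
import numpy as np

def mirror_pad(volume: np.ndarray, halo: int) -> np.ndarray:
    """Reflect-pad a 3D volume by `halo` voxels on every side,
    so border voxels see mirrored context instead of zeros."""
    return np.pad(volume, halo, mode="reflect")

vol = np.arange(27, dtype=np.float32).reshape(3, 3, 3)
padded = mirror_pad(vol, 2)  # shape (7, 7, 7); the original data sits in the center
```

After prediction, the padded margin is simply cropped away again, which avoids the dark rim artifacts that zero padding produces at the edges.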
The model outputs the prediction probabilities after the chosen activation function (e.g. sigmoid or softmax) for every channel. Please consider memory allocation and space on your hard drive: the prediction is saved as a [c, z, y, x] array in float32.
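A quick back-of-the-envelope check of the output size helps here; the helper below is illustrative, not part of the package.

```python
def prediction_size_gib(c: int, z: int, y: int, x: int, bytes_per_value: int = 4) -> float:
    """Bytes needed for a dense [c, z, y, x] float32 prediction array, in GiB."""
    return c * z * y * x * bytes_per_value / 1024**3

# Example: 4 classes on a hypothetical 1000^3 CT volume needs roughly 15 GiB.
size = prediction_size_gib(4, 1000, 1000, 1000)
```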
Hyperparameter Optimization with OmniOpt
We employ OmniOpt, a hyperparameter optimization framework developed at TU Dresden, to tune model parameters for improved performance. Integration into this project is currently under development, and future releases will include automated optimization workflows using OmniOpt.
Further information can be found in the OmniOpt documentation or from ScaDS.AI.
📊 Descriptor-based Evaluation
Currently, the FFT-based 2-point correlation in PyTorch is available. For higher-dimensional descriptors, we kindly refer to MCRpy from the NEFM at TU Dresden.
The FFT-based 2-Point correlation function is defined as follows:
$$ S_2(\mathbf{r}) = \frac{1}{N} \; \mathcal{F}^{-1} \left( \mathcal{F}(\mathbf{M}) \odot \mathcal{F}^*(\mathbf{M}) \right)$$
where
- $\mathbf{M}$ is the binary input (microstructure or phase indicator)
- $\odot$ is element-wise multiplication and $\mathcal{F}^*$ the complex conjugate of the FFT; together they realize the autocorrelation
- $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the FFT and inverse FFT
- $N$ is the total number of elements (for normalization)
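The formula above can be sketched in a few lines of NumPy (illustrative only; the package's actual implementation uses PyTorch):

```python
import numpy as np

def two_point_correlation(M: np.ndarray) -> np.ndarray:
    """FFT-based two-point correlation S2(r) = (1/N) IFFT(FFT(M) * conj(FFT(M))).

    Assumes a binary phase indicator M; the FFT implies periodic boundaries.
    """
    N = M.size
    F = np.fft.fftn(M)
    # The imaginary part is numerical noise for real-valued input.
    return np.fft.ifftn(F * np.conj(F)).real / N

# Sanity check: S2 at zero shift equals the volume fraction of the phase.
M = (np.random.default_rng(0).random((8, 8, 8)) > 0.5).astype(np.float64)
vol_frac = M.mean()
S2 = two_point_correlation(M)
```

By Parseval's theorem, `S2[0, 0, 0]` reduces to the mean of `M**2`, i.e. the volume fraction for a binary indicator.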
Owner
- Name: Christian Düreth
- Login: choROPeNt
- Kind: user
- Location: Dresden
- Company: TU Dresden
- Repositories: 2
- Profile: https://github.com/choROPeNt
research associate | composites | mechanical engineering | machine learning | computer vision
GitHub Events
Total
- Push event: 76
- Gollum event: 1
Last Year
- Push event: 76
- Gollum event: 1
Dependencies
- torch *