debcr
DeBCR for microscopy Denoising/Deblurring/Deconvolution
Science Score: 67.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 2 DOI reference(s) in README
- ✓ Academic publication links: links to nature.com, zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.8%) to scientific vocabulary
Repository
DeBCR for microscopy Denoising/Deblurring/Deconvolution
Basic Info
- Host: GitHub
- Owner: leeroyhannover
- License: mit
- Language: Jupyter Notebook
- Default Branch: main
- Size: 69.4 MB
Statistics
- Stars: 3
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Releases: 3
Metadata Files
README.md
DeBCR
Denoising, deblurring, and optical deconvolution using a physics-informed neural network for light microscopy
DeBCR is a physics-informed deep learning model for light microscopy image restoration (deblurring, denoising, and deconvolution).
DeBCR is an open-source project licensed under the MIT license.
For installation or usage questions, please write to the Issue Tracker.
Contents
- Quick start - resources to get started with DeBCR
- About DeBCR - key points of the network structure and results examples
- Local usage - instructions on local install, training and prediction
- Example datasets - publicly deposited example datasets used for DeBCR benchmarks/tutorials
Quick start
We prepared multiple resources to help you get started with DeBCR (in order of complexity and flexibility):
1. CodeOcean capsule (link will become available soon) - a ready-to-run environment with the provided data and trained model to get a first impression of the DeBCR results for various image restoration tasks.
2. Google Colab Notebook(s) - the interactive notebook(s) with accessible GPU resources available online, see the table below (to be extended).
| Notebooks | Description |
| :------------------------------------------------------------------------------------------------------ | ----------- |
| DeBCR_train | Demonstrates DeBCR training data and parameter setup and the training process. The example data is available. |
3. Open-source code (this GitHub repository) with guidelines on its Local usage for training and prediction.
About DeBCR
DeBCR is based on the original Beylkin-Coifman-Rokhlin (BCR) model, implemented within a deep neural network (DNN) structure:

In contrast to the traditional single-stage residual BCR learning process, DeBCR integrates feature maps from multiple resolution levels:

An example of DeBCR performance on low/high exposure confocal data (Tribolium castaneum from CARE) is shown below:

For more details on implementation and benchmarks, please see our preprint:
Li R., Yushkevich A., Chu X., Kudryashev M., Yakimovich A. Denoising, Deblurring, and optical Deconvolution for cryo-ET and light microscopy with a physics-informed deep neural network DeBCR. in submission, 2024.
Local usage
To use DeBCR locally, you need:
- a GPU-equipped machine with at least 16 GB of VRAM;
- CUDA, currently CUDA-11.5 or CUDA-11.7 (we are working on DeBCR environments for other CUDA versions);
- git, to clone this repository;
- a Python package manager for environments, e.g. (micro)mamba (mamba.readthedocs.io) or conda-forge (conda-forge.org).

Alternatively, you can train DeBCR via the provided Google Colab Notebook (for a link, see Quick start).
Local installation
Installation steps are:
1. Download the source code.
* clone the repository to the desired location:
```bash
git clone https://github.com/leeroyhannover/DeBCR.git
```
* go to the DeBCR folder (needed for further steps):
```bash
cd /path/to/DeBCR
```
2. Prepare environment with CUDA.
* create and activate the package environment:
```bash
micromamba env create -n debcr-env -c conda-forge python=3.9 pip
micromamba activate debcr-env
```
* install CUDA dependencies - please use the following setup for CUDA-11.5/7:
```bash
micromamba install cudatoolkit=11.7 cudnn=8.4
pip install -r requirements_cuda11.txt
```
3. Install DeBCR dependencies.
```bash
pip install -r requirements.txt
```
For GPU device recognition during local DeBCR usage (with the CUDA-11.5/7 setup above), please make the following export, using the actual location and name of your DeBCR environment:
```bash
export LD_LIBRARY_PATH=/path/to/micromamba/envs/debcr-env/lib/python3.9/site-packages/nvidia/cudnn/lib/:${LD_LIBRARY_PATH}
```
We also recommend checking that TensorFlow, the library needed for our model, recognizes your GPUs:
```python
import tensorflow as tf
tf.config.list_physical_devices('GPU')
```
For a single GPU you should see something like:
```
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
However, if the output list is empty, please check that you have:
* an available and visible GPU
* installed and sourced the CUDA Driver and CUDA Toolkit for CUDA-11.5/7
* installed the CUDA dependencies for Python (see instructions above)
* activated the correct Python package environment for DeBCR
* exported to LD_LIBRARY_PATH the correct path to the cudnn libraries from your DeBCR environment (see instructions above)
Local training
For local training we provide an example Jupyter Notebook, train_local.ipynb, located in the root directory of the repository. This notebook guides you through the training process using provided examples of already pre-processed data, which are publicly available on Zenodo (for a link see Example datasets). Currently the notebook covers the following example task/dataset:
- LM: 2D denoising (files: LM_2D_CARE_*.npz) - a low/high exposure confocal dataset of Schmidtea mediterranea (`Denoising_Planaria` dataset) from the publication of the CARE network applied to fluorescence microscopy data (Weigert, Schmidt, Boothe et al., Nature Methods, 2018).
The same data is used to train DeBCR in additionally provided Colab Notebook (for a link see Quick start). The preprocessing procedures from raw LM microscopy data will become available in the future.
To get started with the notebook, you need to additionally install Jupyter Notebook or Jupyter Lab in your DeBCR environment, open the notebook and, if needed, switch the kernel to your DeBCR environment debcr-env.
Then follow the instructions in the train_local.ipynb notebook for training.
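The pre-processed training files are plain NumPy `.npz` archives, so you can inspect one before training. A minimal sketch, assuming hypothetical array keys (`low`/`high` are illustrative only; check the actual keys of the downloaded file via `data.files`):

```python
# Sketch: inspecting a pre-processed .npz training file with NumPy.
# The file name and the array keys ("low", "high") are illustrative
# assumptions, not the guaranteed layout of the DeBCR datasets.
import numpy as np

# Create a small stand-in file with the same .npz layout for illustration.
rng = np.random.default_rng(0)
np.savez("example_train.npz",
         low=rng.random((8, 128, 128), dtype=np.float32),   # noisy/low-exposure patches
         high=rng.random((8, 128, 128), dtype=np.float32))  # clean/high-exposure targets

data = np.load("example_train.npz")
print(data.files)         # lists the array keys stored in the file
print(data["low"].shape)  # patches x height x width
```

Checking shapes and keys this way helps catch a mismatched dataset before a long training run.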
Local prediction
For local prediction (testing), you need to activate the previously installed debcr-env environment on the command line:
```bash
conda activate debcr-env
```
Prediction can be run on pre-processed (patched and normalized) input data in NumPy array (.npz) format. The provided data (see Example datasets) is available to test DeBCR on all 4 tasks: 2D and 3D denoising, and super-resolution deconvolution from widefield and confocal data.
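To illustrate what "patched and normalized" means, here is a minimal NumPy sketch; the non-overlapping 128-pixel patches and per-image min-max scaling are assumptions for illustration, not the exact DeBCR pre-processing pipeline:

```python
# Minimal sketch of patching + normalization, assuming non-overlapping
# square patches and per-image min-max scaling. Illustration only; the
# actual DeBCR pre-processing is not described in this README.
import numpy as np

def to_patches(image: np.ndarray, patch: int = 128) -> np.ndarray:
    """Cut a 2D image into non-overlapping patch x patch tiles."""
    h, w = image.shape
    h_crop, w_crop = h - h % patch, w - w % patch   # drop remainder pixels
    return (image[:h_crop, :w_crop]
            .reshape(h_crop // patch, patch, w_crop // patch, patch)
            .swapaxes(1, 2)
            .reshape(-1, patch, patch))

def normalize(x: np.ndarray) -> np.ndarray:
    """Scale intensities to [0, 1] (min-max)."""
    x = x.astype(np.float32)
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

image = np.arange(256 * 256, dtype=np.float32).reshape(256, 256)
patches = normalize(to_patches(image))
print(patches.shape)   # (4, 128, 128)
```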
The data should currently be organized as in the following example for the LM_2D_CARE dataset used in our tutorials and benchmarks:
```
data
└── 2D_denoising
    ├── test
    │   └── LM_2D_CARE_test.npz
    ├── train
    │   └── LM_2D_CARE_train.npz
    └── val
        └── LM_2D_CARE_val.npz
```
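As a sanity check, such a layout can be created and verified programmatically. A sketch with pathlib; the stand-in arrays here are placeholders only, to be replaced with the real downloaded files:

```python
# Sketch: building and verifying the expected data layout.
# The directory and file names mirror the example tree above; the
# zero-filled arrays are placeholders, not real data.
from pathlib import Path
import numpy as np

root = Path("data/2D_denoising")
for split in ("test", "train", "val"):
    d = root / split
    d.mkdir(parents=True, exist_ok=True)
    # stand-in file so the layout check below succeeds
    np.savez(d / f"LM_2D_CARE_{split}.npz", x=np.zeros((1, 128, 128), np.float32))

expected = [root / s / f"LM_2D_CARE_{s}.npz" for s in ("test", "train", "val")]
print(all(p.exists() for p in expected))  # True when the layout matches
```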
The up-to-date usage instructions can be obtained via
```bash
python /path/to/DeBCR/tester_DeBCR.py --help
```
and are provided below as well:
```
usage: tester_DeBCR.py [-h] [--task_type {2D_denoising,3D_denoising,bright_SR,confocal_SR}]
                       [--weights_path WEIGHTS_PATH] [--ckpt_name CKPT_NAME]
                       [--test_set_path TEST_SET_PATH] [--save_fig] [--fig_path FIG_PATH]
                       [--results_path RESULTS_PATH] [--whole_predict] [--gpu_id GPU_ID]

DeBCR: DL-based denoising, deconvolution and deblurring for light microscopy data.

optional arguments:
  -h, --help            show this help message and exit
  --task_type {2D_denoising,3D_denoising,bright_SR,confocal_SR}
                        Task type to perform according to data nature. (default: 2D_denoising)
  --weights_path WEIGHTS_PATH
                        Path to the folder containing weights (checkpoints) of the trained
                        DeBCR model. (default: ./weights/TASK_TYPE/)
  --ckpt_name CKPT_NAME
                        Filename (w/o file extension) of the checkpoint of choice (can be a
                        wildcard as well). If not provided, the latest (by sorted filename)
                        checkpoint file will be used. (default: ckpt-*)
  --test_set_path TEST_SET_PATH
                        Path to the test dataset as a single NPZ file. (default: None)
  --save_fig            Flag to enable saving figures of the example test results.
                        (default: False)
  --fig_path FIG_PATH   Path to save figures of the example test results. (default: ./figures/)
  --results_path RESULTS_PATH
                        Path to save the test results. (default: ./results/)
  --whole_predict       Flag to enable predicting the whole image for certain tasks.
                        (default: False)
  --gpu_id GPU_ID       GPU ID to be used. (default: 0)
```
Example datasets
To evaluate DeBCR on various image restoration tasks, several previously published datasets were assembled, pre-processed and publicly deposited as NumPy (.npz) arrays, each split into three essential sets (train, validation and test). The datasets target multiple image restoration tasks such as denoising and super-resolution deconvolution.
Access data and its details on Zenodo: 10.5281/zenodo.12626121.
Owner
- Name: RoyLeeLee
- Login: leeroyhannover
- Kind: user
- Location: Dresden
- Company: hzdr
- Website: https://www.casus.science/
- Repositories: 2
- Profile: https://github.com/leeroyhannover
Researcher in HZDR/ 3D microscopist/ machine learning for infection and disease
Citation (CITATION.CFF)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: Li
given-names: Rui
orcid: https://orcid.org/0000-0002-3085-5267
- family-names: Yushkevich
given-names: Artsemi
orcid: https://orcid.org/0000-0002-8729-9281
- family-names: Chu
given-names: Xiaofeng
orcid: https://orcid.org/0000-0001-6801-3949
- family-names: Kudryashev
given-names: Mikhail
orcid: https://orcid.org/0000-0003-3550-6274
- family-names: Yakimovich
given-names: Artur
orcid: https://orcid.org/0000-0003-2458-4904
title: "Denoising, Deblurring, and optical Deconvolution for cryo-ET and light microscopy with a physics-informed deep neural network DeBCR"
version: 0.1
identifiers:
- type: doi
value: 10.5281/zenodo.12636434
date-released: 2024-07-03
GitHub Events
Total
- Push event: 13
- Fork event: 4
- Create event: 1
Last Year
- Push event: 13
- Fork event: 4
- Create event: 1