https://github.com/deepskies/noise2self
A framework for blind denoising with self-supervision.
Science Score: 10.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 10.7%, to scientific vocabulary)
Last synced: 5 months ago
Repository
Basic Info
- Host: GitHub
- Owner: deepskies
- License: MIT
- Language: Jupyter Notebook
- Default Branch: main
- Size: 18.6 MB
Statistics
- Stars: 2
- Watchers: 5
- Forks: 2
- Open Issues: 2
- Releases: 0
- Fork of: czbiohub/noise2self
- Created: almost 7 years ago
- Last pushed: about 3 years ago
https://github.com/deepskies/noise2self/blob/main/
# Noise2Self: Blind Denoising by Self-Supervision

This repo demonstrates a framework for blind denoising of high-dimensional measurements, as described in the [paper](https://arxiv.org/abs/1901.11365). It can be used to calibrate classical image denoisers and to train deep neural nets; the same principle works on matrices of single-cell gene expression.
Owner
- Name: Deep Skies Lab
- Login: deepskies
- Kind: organization
- Email: deepskieslab@gmail.com
- Website: www.deepskieslab.com
- Twitter: deepskieslab
- Repositories: 5
- Profile: https://github.com/deepskies
Building community and making discoveries since 2017
*The result of training a U-Net to denoise a stack of noisy Chinese characters. Note that the only input is the noisy data; no ground truth is necessary.*
## Images
The notebook [Intro to Calibration](notebooks/Intro%20to%20Calibration.ipynb) shows how to calibrate any traditional image denoising model, such as median filtering, wavelet thresholding, or non-local means. We use the excellent [scikit-image](https://scikit-image.org) implementations of these methods, and have submitted a PR to incorporate self-supervised calibration directly into the package. (Comments welcome on the [PR](https://github.com/scikit-image/scikit-image/pull/3824)!)
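The calibration idea can be sketched in a few lines. This is an illustrative toy, not the notebook's code: sweep a classical denoiser's hyperparameter (here, the size of `scipy.ndimage.median_filter`) and score each setting by its loss on a grid of masked pixels only, where each masked pixel is first replaced by a center-excluding neighbor median so the denoiser never sees the value it is scored against. The image, grid spacing, and helper names are all assumptions for the sketch.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                       # simple square test image
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

# Center-excluding footprint: the interpolated value at a pixel
# is the median of its 8 neighbors, never the pixel itself.
footprint = np.array([[1, 1, 1],
                      [1, 0, 1],
                      [1, 1, 1]])

def masked_loss(img, size, phase=0, grid=4):
    """Self-supervised score for a median filter of the given size."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[phase::grid, phase::grid] = True
    interp = median_filter(img, footprint=footprint)
    masked_input = np.where(mask, interp, img)   # hide the masked pixels
    denoised = median_filter(masked_input, size=size)
    # Compare the denoiser's output to the *noisy* values, but only
    # at pixels the denoiser could not see.
    return float(np.mean((denoised[mask] - img[mask]) ** 2))

scores = {s: masked_loss(noisy, s) for s in (1, 3, 5, 7)}
best = min(scores, key=scores.get)
```

The chosen `best` size minimizes the masked loss without any access to `clean`, which is the essence of self-supervised calibration.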
The notebook [Intro to Neural Nets](notebooks/Intro%20to%20Neural%20Nets.ipynb) shows how to train a denoising neural net using a self-supervised loss, on the simple example of MNIST digits. The notebook runs in less than a minute, on CPU, on a MacBook Pro. We implement this in [pytorch](https://pytorch.org).
Because the self-supervised loss is much easier to implement than the data loading, GPU management, logging, and architecture design required for handling any particular dataset, we recommend that you take any existing pipeline for your data and simply modify the training loop.
### Traditional Supervised Learning
```python
for i, batch in enumerate(data_loader):
    x, y = batch
    output = model(x)
    loss = loss_function(output, y)
```
### Self-Supervised Learning
```python
from mask import Masker

masker = Masker()

for i, batch in enumerate(data_loader):
    x, _ = batch
    net_input, mask = masker.mask(x, i)
    output = model(net_input)
    loss = loss_function(output * mask, x * mask)
```
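To make the masking step concrete, here is a minimal, framework-agnostic sketch of what a grid masker could do. The `grid_mask` helper and its details (grid spacing, circular-shift neighbor average, phase cycling with the batch index) are illustrative assumptions, not the repo's `mask.py` implementation:

```python
import numpy as np

def grid_mask(x, i, grid=4):
    """Mask every grid-th pixel of images x (..., H, W); the offset
    cycles with batch index i so all pixels are eventually covered.
    Masked pixels are replaced by the average of their 4 neighbors,
    so the model never sees the pixel it is asked to predict."""
    phase_y, phase_x = divmod(i % (grid * grid), grid)
    mask = np.zeros_like(x)
    mask[..., phase_y::grid, phase_x::grid] = 1.0
    # 4-neighbor average via shifts (circular padding for simplicity)
    neighbors = (np.roll(x, 1, -1) + np.roll(x, -1, -1)
                 + np.roll(x, 1, -2) + np.roll(x, -1, -2)) / 4.0
    return x * (1 - mask) + neighbors * mask, mask

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 8))      # batch of 2 tiny images
inp, mask = grid_mask(x, 0)
```

Because the loss is computed only at the masked positions, the model cannot trivially copy its input there, which is what makes the objective a valid blind-denoising signal.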
Dependencies are in the `environment.yml` file.
The remaining notebooks generate figures from the [paper](https://arxiv.org/abs/1901.11365).