viscy

computer vision models for single-cell phenotyping

https://github.com/mehta-lab/viscy

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 9 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, biorxiv.org, nature.com, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.8%) to scientific vocabulary

Keywords

bioimage-analysis computer-vision image-translation machine-learning representation-learning
Last synced: 6 months ago

Repository

computer vision models for single-cell phenotyping

Basic Info
Statistics
  • Stars: 67
  • Watchers: 6
  • Forks: 10
  • Open Issues: 44
  • Releases: 15
Topics
bioimage-analysis computer-vision image-translation machine-learning representation-learning
Created over 2 years ago · Last pushed 6 months ago
Metadata Files
Readme Contributing License Citation

README.md

VisCy


VisCy (blend of vision and cyto) is a deep learning pipeline for training and deploying computer vision models for image-based phenotyping at single-cell resolution.

This repository provides pipelines for the following tasks.

  • Image translation
    • Robust virtual staining of landmark organelles with Cytoland
  • Image representation learning
    • Self-supervised learning of cell state and organelle phenotypes with DynaCLR
  • Semantic segmentation
    • Supervised learning of cell state (e.g., state of infection)

Note: VisCy is under active development. While we strive to maintain stability, the main branch may occasionally include backward-incompatible changes, which are subsequently shipped in releases following semantic versioning. Please choose a stable release from PyPI for production use.

Cytoland (Robust Virtual Staining)

Demo

Try the 2D virtual staining demo of cell nuclei and membrane from label-free images on Hugging Face.

Virtual Staining App Demo

Cytoland @ Virtual Cells Platform

Cytoland models are accessible via the Chan Zuckerberg Initiative's Virtual Cells Platform. Notebooks are available as pre-rendered pages or on Colab:

Tutorials

  • Virtual staining exercise: Notebook illustrating how to use VisCy to train, predict, and evaluate the VSCyto2D model. This notebook was developed for the DL@MBL2024 course and uses the UNeXt2 architecture.

  • Image translation demo: Fluorescence images can be predicted from label-free images. Can we predict label-free images from fluorescence? Find out using this notebook.

  • Training Virtual Staining Models via CLI: Instructions for how to train and run inference on VisCy's virtual staining models (VSCyto3D, VSCyto2D and VSNeuromast).

Gallery

Below are some examples of virtually stained images (click to play videos). See the full gallery here.

| VSCyto3D | VSNeuromast | VSCyto2D |
|:---:|:---:|:---:|
| HEK293T | Neuromast | A549 |

References

The Cytoland models and training protocols are reported in our recent paper on robust virtual staining in Nature Machine Intelligence.

This package evolved from the TensorFlow version of the virtual staining pipeline, which we reported in eLife in 2020.

Liu, Hirata-Miyasaki et al., 2025

  @article{liu_robust_2025,
      title = {Robust virtual staining of landmark organelles with {Cytoland}},
      copyright = {2025 The Author(s)},
      issn = {2522-5839},
      url = {https://www.nature.com/articles/s42256-025-01046-2},
      doi = {10.1038/s42256-025-01046-2},
      abstract = {Correlative live-cell imaging of landmark organelles—such as nuclei, nucleoli, cell membranes, nuclear envelope and lipid droplets—is critical for systems cell biology and drug discovery. However, achieving this with molecular labels alone remains challenging. Virtual staining of multiple organelles and cell states from label-free images with deep neural networks is an emerging solution. Virtual staining frees the light spectrum for imaging molecular sensors, photomanipulation or other tasks. Current methods for virtual staining of landmark organelles often fail in the presence of nuisance variations in imaging, culture conditions and cell types. Here we address this with Cytoland, a collection of models for robust virtual staining of landmark organelles across diverse imaging parameters, cell states and types. These models were trained with self-supervised and supervised pre-training using a flexible convolutional architecture (UNeXt2) and augmentations inspired by image formation of light microscopes. Cytoland models enable virtual staining of nuclei and membranes across multiple cell types—including human cell lines, zebrafish neuromasts, induced pluripotent stem cells (iPSCs) and iPSC-derived neurons—under a range of imaging conditions. We assess models using intensity, segmentation and application-specific measurements obtained from virtually and experimentally stained nuclei and membranes. These models rescue missing labels, correct non-uniform labelling and mitigate photobleaching. We share multiple pre-trained models, open-source software (VisCy) for training, inference and deployment, and the datasets.},
      language = {en},
      urldate = {2025-06-23},
      journal = {Nature Machine Intelligence},
      author = {Liu, Ziwen and Hirata-Miyasaki, Eduardo and Pradeep, Soorya and Rahm, Johanna V. and Foley, Christian and Chandler, Talon and Ivanov, Ivan E. and Woosley, Hunter O. and Lee, See-Chi and Khadka, Sudip and Lao, Tiger and Balasubramanian, Akilandeswari and Marreiros, Rita and Liu, Chad and Januel, Camille and Leonetti, Manuel D. and Aviner, Ranen and Arias, Carolina and Jacobo, Adrian and Mehta, Shalin B.},
      month = jun,
      year = {2025},
      note = {Publisher: Nature Publishing Group},
      pages = {1--15},
      }
  
Guo, Yeh, Folkesson et al., 2020

  @article {10.7554/eLife.55502,
      article_type = {journal},
      title = {Revealing architectural order with quantitative label-free imaging and deep learning},
      author = {Guo, Syuan-Ming and Yeh, Li-Hao and Folkesson, Jenny and Ivanov, Ivan E and Krishnan, Anitha P and Keefe, Matthew G and Hashemi, Ezzat and Shin, David and Chhun, Bryant B and Cho, Nathan H and Leonetti, Manuel D and Han, May H and Nowakowski, Tomasz J and Mehta, Shalin B},
      editor = {Forstmann, Birte and Malhotra, Vivek and Van Valen, David},
      volume = 9,
      year = 2020,
      month = {jul},
      pub_date = {2020-07-27},
      pages = {e55502},
      citation = {eLife 2020;9:e55502},
      doi = {10.7554/eLife.55502},
      url = {https://doi.org/10.7554/eLife.55502},
      keywords = {label-free imaging, inverse algorithms, deep learning, human tissue, polarization, phase},
      journal = {eLife},
      issn = {2050-084X},
      publisher = {eLife Sciences Publications, Ltd},
      }
  

Library of Virtual Staining (VS) Models

The robust virtual staining models (i.e., VSCyto2D, VSCyto3D, and VSNeuromast) and fine-tuned models can be found here.

DynaCLR (Embedding Cell Dynamics via Contrastive Learning of Representations)

DynaCLR is a self-supervised method for learning robust and temporally regularized representations of cell and organelle dynamics from time-lapse microscopy using contrastive learning. It supports diverse downstream biological tasks, including cell state classification with efficient human annotations, knowledge distillation across fluorescence and label-free imaging channels, and alignment of cell state dynamics.
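To make the contrastive idea concrete, the sketch below shows a generic InfoNCE-style objective in PyTorch. It illustrates the general technique rather than DynaCLR's actual loss or API; the function name `info_nce_loss`, the `temperature` value, and the pairing of embeddings from adjacent time points are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE-style contrastive loss (illustrative, not DynaCLR's implementation).

    `anchor` and `positive` are (batch, dim) embeddings; row i of `positive` is the
    positive pair for row i of `anchor` (e.g., the same cell track at a nearby time
    point), and the other rows in the batch act as negatives.
    """
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    # Cosine-similarity logits between every anchor and every candidate.
    logits = anchor @ positive.T / temperature
    # Diagonal entries are the positives; off-diagonal entries are negatives.
    targets = torch.arange(anchor.shape[0], device=anchor.device)
    return F.cross_entropy(logits, targets)


# Toy usage with random tensors standing in for an encoder's output.
emb_t = torch.randn(32, 128)       # embeddings of 32 cells at time t
emb_t_next = torch.randn(32, 128)  # embeddings of the same cells at a nearby time point
loss = info_nce_loss(emb_t, emb_t_next)
```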

Preprint

DynaCLR on arXiv:

DynaCLR schematic

Demo

  • DynaCLR demos

  • Example test dataset, model checkpoint, and predictions can be found here.

  • See the tutorial on exploring learned embeddings with napari-iohub here.

Installation

  1. We recommend using a new Conda/virtual environment.

    ```sh
    conda create --name viscy python=3.11

    # Or specify a custom path since the dependencies are large:
    conda create --prefix /path/to/conda/envs/viscy python=3.11
    ```

  2. Install a released version of VisCy from PyPI:

    ```sh
    pip install viscy
    ```

    If evaluating virtually stained images for segmentation tasks, install additional dependencies:

    ```sh
    pip install "viscy[metrics]"
    ```

    Visualizing the model architecture requires visual dependencies:

    ```sh
    pip install "viscy[visual]"
    ```

  3. Verify installation by accessing the CLI help message:

    ```sh
    viscy --help
    ```

For development installation, see the contributing guide.
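As a complement to the CLI check in step 3, a quick Python-level sanity check (standard library only; `viscy` is the distribution name on PyPI) confirms that the package imports and reports the installed version:

```python
import importlib
from importlib.metadata import version

# Import the package and report the installed distribution version.
importlib.import_module("viscy")
print("viscy", version("viscy"))
```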

Additional Notes

The pipeline is built using the PyTorch Lightning framework. The iohub library is used for reading and writing data in OME-Zarr format.
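For orientation, the snippet below sketches how an OME-Zarr dataset can be opened for reading with iohub. It assumes iohub's `open_ome_zarr` entry point and an HCS-style plate layout with positions holding a `"0"` image array; the path is a placeholder, and the exact keys depend on how the data was written (see the iohub documentation).

```python
from iohub import open_ome_zarr

# Minimal read-only walk over an OME-Zarr dataset (assumes an HCS-style plate
# layout; "/path/to/dataset.zarr" is a placeholder path).
with open_ome_zarr("/path/to/dataset.zarr", mode="r") as dataset:
    for name, position in dataset.positions():
        print(name, position.channel_names)
        image = position["0"]  # typically a (T, C, Z, Y, X) array
        print(image.shape)
        break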

The full functionality is tested on Linux x86_64 with NVIDIA Ampere/Hopper GPUs (CUDA 12.6). Some features (e.g., mixed precision and distributed training) may not be available with other setups; see the PyTorch documentation for details.
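The features mentioned above are configured through the underlying framework; the snippet below shows how mixed precision and multi-GPU distributed training are typically requested from a PyTorch Lightning `Trainer`. This is generic Lightning usage rather than a VisCy-specific configuration, and the commented `model`/`datamodule` objects are hypothetical placeholders.

```python
from lightning.pytorch import Trainer

# Generic PyTorch Lightning settings for CUDA mixed precision and DDP;
# both require a compatible NVIDIA GPU setup as noted above.
trainer = Trainer(
    accelerator="gpu",
    devices=2,               # number of GPUs
    strategy="ddp",          # distributed data parallel
    precision="16-mixed",    # automatic mixed precision
    max_epochs=100,
)
# trainer.fit(model, datamodule=datamodule)  # hypothetical LightningModule/DataModule
```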

Owner

  • Name: Computational Microscopy Platform (Mehta Lab), CZ Biohub
  • Login: mehta-lab
  • Kind: organization
  • Location: United States of America

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: VisCy
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Ziwen
    family-names: Liu
    email: ziwen.liu@czbiohub.org
    affiliation: Chan Zuckerberg Biohub San Francisco
    orcid: "https://orcid.org/0000-0001-7482-1299"
  - given-names: Eduardo
    family-names: Hirata-Miyasaki
    affiliation: Chan Zuckerberg Biohub San Francisco
    orcid: "https://orcid.org/0000-0002-1016-2447"
  - given-names: Christian
    family-names: Foley
  - given-names: Soorya
    family-names: Pradeep
    affiliation: Chan Zuckerberg Biohub San Francisco
    orcid: "https://orcid.org/0000-0002-0926-1480"
  - given-names: Alishba
    family-names: Imran
    affiliation: Chan Zuckerberg Biohub San Francisco
    orcid: "https://orcid.org/0009-0003-7049-355X"
  - given-names: Shalin
    family-names: Mehta
    affiliation: Chan Zuckerberg Biohub San Francisco
    orcid: "https://orcid.org/0000-0002-2542-3582"
repository-code: "https://github.com/mehta-lab/VisCy"
url: "https://github.com/mehta-lab/VisCy"
abstract: computer vision models for single-cell phenotyping
keywords:
  - machine-learning
  - computer-vision
  - bioimage-analysis
  - image-translation
  - representation-learning
license: BSD-3-Clause

GitHub Events

Total
  • Fork event: 4
  • Create event: 82
  • Release event: 7
  • Issues event: 32
  • Watch event: 30
  • Delete event: 55
  • Member event: 3
  • Issue comment event: 151
  • Push event: 465
  • Gollum event: 6
  • Pull request event: 135
  • Pull request review comment event: 164
  • Pull request review event: 219
Last Year
  • Fork event: 4
  • Create event: 82
  • Release event: 7
  • Issues event: 32
  • Watch event: 30
  • Delete event: 55
  • Member event: 3
  • Issue comment event: 151
  • Push event: 465
  • Gollum event: 6
  • Pull request event: 135
  • Pull request review comment event: 164
  • Pull request review event: 219

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 72
  • Total pull requests: 252
  • Average time to close issues: 3 months
  • Average time to close pull requests: 11 days
  • Total issue authors: 11
  • Total pull request authors: 9
  • Average comments per issue: 2.03
  • Average comments per pull request: 1.01
  • Merged pull requests: 163
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 17
  • Pull requests: 98
  • Average time to close issues: 26 days
  • Average time to close pull requests: 10 days
  • Issue authors: 5
  • Pull request authors: 7
  • Average comments per issue: 1.29
  • Average comments per pull request: 0.97
  • Merged pull requests: 47
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ziw-liu (28)
  • mattersoflight (17)
  • Soorya19Pradeep (10)
  • edyoshikun (8)
  • alishbaimran (3)
  • esgomezm (1)
  • shannonhandley (1)
  • talonchandler (1)
  • Christianfoley (1)
  • arteys (1)
  • JohannaRahm (1)
Pull Request Authors
  • ziw-liu (149)
  • edyoshikun (53)
  • mattersoflight (27)
  • alishbaimran (13)
  • Soorya19Pradeep (4)
  • duopeng (2)
  • melissawm (2)
  • ritvikvasan (1)
  • esgomezm (1)
Top Labels
Issue Labels
enhancement (8) bug (6) representation (6) documentation (4) maintenance (4) question (3) translation (2) help wanted (1)
Pull Request Labels
enhancement (44) documentation (41) maintenance (27) representation (22) bug (17) translation (14) CI (9) breaking (8) wontfix (1)

Dependencies

.github/workflows/pr.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • chartboost/ruff-action v1 composite
  • psf/black stable composite
pyproject.toml pypi
  • iohub ==0.1.0.dev3
  • jsonargparse [signatures]>=4.20.1
  • lightning >=2.0.1
  • matplotlib *
  • monai >=1.2.0
  • scikit-image >=0.19.2
  • tensorboard >=2.13.0
  • torch >=2.0.0
  • torchvision >=0.15.1