https://github.com/choosehappy/quickannotator

An open-source digital pathology based rapid image annotation tool

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: wiley.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.6%) to scientific vocabulary
Last synced: 6 months ago

Repository

An open-source digital pathology based rapid image annotation tool

Basic Info
  • Host: GitHub
  • Owner: choosehappy
  • License: bsd-3-clause-clear
  • Language: JavaScript
  • Default Branch: main
  • Size: 14.7 MB
Statistics
  • Stars: 82
  • Watchers: 2
  • Forks: 27
  • Open Issues: 37
  • Releases: 0
Created about 5 years ago · Last pushed 7 months ago
Metadata Files
Readme License

README.md

QuickAnnotator


Quick Annotator is an open-source digital pathology annotation tool.

QA user interface screenshot

Purpose


Machine learning approaches for segmentation of histologic primitives (e.g., cell nuclei) in digital pathology (DP) Whole Slide Images (WSI) require large numbers of exemplars. Unfortunately, annotating each object is laborious and often intractable even in moderately sized cohorts. The purpose of Quick Annotator is to rapidly bootstrap annotation creation for digital pathology projects by helping to identify images and small regions where the classifier is likely to struggle.

By intentionally focusing on these areas, fewer pixel-level annotations are needed, which significantly improves user efficiency. Our approach updates a U-Net model while the user provides annotations; the model is then used in real time to produce predictions, allowing the user to either accept or modify regions of the prediction.
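The annotate/train/predict cycle described above can be sketched as a simple human-in-the-loop loop. This is an illustrative sketch only: the function names and the stand-in "model" below are ours, not QA's actual API, and a real deployment would fine-tune a U-Net instead.

```python
# Hypothetical sketch of Quick Annotator's annotate/train/predict loop.
# All names here are illustrative, not QA's actual API.

def train_step(model_state, annotations):
    """Stand-in for a U-Net fine-tuning step: here the 'model' simply
    accumulates how many annotated regions it has seen."""
    return model_state + len(annotations)

def predict_region(model_state, region):
    """Stand-in for model inference: returns a proposed mask for the region."""
    return {"region": region, "proposed_by_model_after": model_state}

def annotate_loop(regions, user_corrections):
    """Human-in-the-loop cycle: predict the next region, let the user
    accept or correct the prediction, retrain on everything accepted."""
    model_state, accepted = 0, []
    for region, correction in zip(regions, user_corrections):
        prediction = predict_region(model_state, region)
        # None means the user accepted the model's proposal as-is.
        mask = correction if correction is not None else prediction
        accepted.append(mask)
        model_state = train_step(model_state, accepted)
    return accepted, model_state

masks, state = annotate_loop(["roi_1", "roi_2", "roi_3"],
                             [None, "corrected_mask", None])
```

The key property this loop illustrates is that the model improves between every user interaction, so later regions require less correction than earlier ones.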

Requirements


Tested with Python 3.8 and Chrome (errors have been reported with Firefox and are currently being addressed)

Requires:
  1. Python
  2. pip

And the following additional Python packages:
  1. Flask-SQLAlchemy
  2. scikit-image
  3. scikit-learn
  4. opencv-python-headless
  5. scipy
  6. requests
  7. SQLAlchemy
  8. torch
  9. torchvision
  10. Flask-Restless
  11. numpy
  12. Flask
  13. umap_learn
  14. Pillow
  15. tensorboard
  16. ttach
  17. albumentations
  18. config

You can likely install the Python requirements using something like (note the Python 3+ requirement): pip3 install -r requirements.txt. Note: the requirements.txt under the root directory is for CUDA version 11.

The library versions have been pegged to the currently validated ones. Later versions are likely to work but may not allow for cross-site/version reproducibility.

We received some feedback that users had trouble installing torch. Here, we provide a detailed guide to installing Torch.

Torch's Installation

The general guide for installing PyTorch can be summarized as follows:
  1. Check your NVIDIA GPU Compute Capability @ https://developer.nvidia.com/cuda-gpus
  2. Download the CUDA Toolkit @ https://developer.nvidia.com/cuda-downloads
  3. Find the PyTorch install command @ https://pytorch.org/get-started/locally/

Docker & Singularity

Docker vs. Singularity

Singularity is a container runtime, like Docker, but it starts from a very different place. It favors integration rather than isolation, while still preserving security restrictions on the container, and providing reproducible images.

Therefore, a Singularity container is better thought of as an environment set up to run QA, whereas a Docker container is more like an application with an isolated Quick Annotator built inside.

It is common for a user's chosen port number to already be allocated by other users, requiring the port used to connect to QA to be changed. With a Docker container, the user needs to rebuild the image and restart the container. With a Singularity container, the user only needs to change the port number in the config folder.

Docker requirements

Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files.

In order to use the Docker version of QA, the user needs:
  1. An Nvidia driver supporting CUDA. See documentation, here.
  2. Docker Engine. See documentation, here.
  3. Nvidia-docker: https://github.com/NVIDIA/nvidia-docker

PS: Docker Desktop is an easy-to-install application for your Mac or Windows environment that enables you to build and share containerized applications and microservices. Docker Desktop includes Docker Engine, Docker CLI client, Docker Compose, Notary, Kubernetes, and Credential Helper.

Depending on your cuda version, we provide Dockerfiles for cuda_10 and cuda_11.

To build the Docker image, run either:
  docker build -t quick_annotator -f cuda_10/Dockerfile .
or
  docker build -t quick_annotator -f cuda_11/Dockerfile .
from the QuickAnnotator folder.

When the docker image is done building, it can be run by typing:

docker run --gpus all -v /data/$CaseID/QuickAnnotator:/opt/QuickAnnotator -p 5555:5555 --shm-size=8G quick_annotator

In the above command, -v /data/$CaseID/QuickAnnotator:/opt/QuickAnnotator mounts the QA directory on the host file system to the QA directory inside the container. /data/$CaseID/QuickAnnotator should be the QA path on your host file system, and /opt/QuickAnnotator is the QA path inside the container, as specified in the Dockerfile.

Note: This command forwards port 5555 of the host to port 5555 of the container, where our Flask server is running, as specified in config.ini. The port number should match the config used when running QA on the host file system.

Singularity requirements

Singularity provides a single universal on-ramp from developers' workstations to local resources, the cloud, and all the way to the edge.

In order to use the Singularity version of QA, the user needs:
  • An Nvidia driver supporting CUDA. See documentation, here.
  • Singularity installed, here.

Depending on your cuda version, we provide Singularity Recipe files for cuda_10 and cuda_11.

To build the Singularity Image Format (SIF) of QA, users need to ask the administrator for --fakeroot privilege.

  1. Set the environment variables (they should point to different locations depending on your use case):
     export SINGULARITY_TMPDIR=/mnt/data/home/$CaseID/sing_cache
     export SINGULARITY_CACHEDIR=/mnt/data/home/$CaseID/sing_cache
     Note: The location for temporary directories defaults to /tmp. The temporary directory used during a build must be on a filesystem that has enough space to hold the entire container image, uncompressed, including any temporary files that are created and later removed during the build. You may need to set SINGULARITY_TMPDIR when building a large container on a system which has a small /tmp filesystem.

  2. To build the SIF of QA, run either:
     singularity build --fakeroot --force /mnt/data/home/$CaseID/QATestSin10.sif cuda_10/Singularity10
     or
     singularity build --fakeroot --force /mnt/data/home/$CaseID/singularityQA11.sif cuda_11/Singularity11
     from the QuickAnnotator folder.

(Note: /mnt/data/home/$CaseID/singularityQA10.sif is the output path, which can be modified to the user's preference. We recommend building the SIF files under a data or scratch folder, under the assumption that users run Singularity on a server.)

  3. When the SIF is done building, it can be run with:
     singularity run --bind /data/rxm723/QuickAnnotator:/opt/QuickAnnotator --nv /mnt/data/home/$CaseID/singularity10.sif
     where --nv enables GPU usage.

In the above command, --bind /data/rxm723/QuickAnnotator:/opt/QuickAnnotator mounts the QA directory on the host file system to the QA directory inside the container. /data/$CaseID/QuickAnnotator should be the QA path on your host file system.

Note: This command will forward you to port 5555 by default, which is specified in config.ini. If this port is occupied on your machine, for example by another user or process, you will need to change it in the config.ini of your QuickAnnotator.

Note: We recommend that users confirm Nvidia support is successfully enabled before running a Singularity container, by running the nvidia-smi command inside the container:
     singularity shell --nv /mnt/data/home/$CaseID/singularity10.sif
     nvidia-smi
It is not necessary to specify a bind path when checking Nvidia support with singularity shell.

Basic Usage


See the UserManual for a demo

Run

E:\Study\Research\QA\qqqqq\test1\quick_annotator>python QA.py

By default, it will start up on localhost:5555. Note that 5555 is the port number set in config.ini; the user should confirm this port is not pre-occupied by other users on the host.

Warning: virtualenv will not work with paths that have spaces in them, so make sure the entire path to env/ is free of spaces.
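A quick stdlib check for the warning above, before creating env/ (the helper name is ours, for illustration):

```python
import os

def path_has_spaces(path):
    """virtualenv is reported to break on paths containing spaces;
    check the full absolute path before creating env/ under it."""
    return " " in os.path.abspath(path)

print(path_has_spaces("/opt/QuickAnnotator"))        # path without spaces
print(path_has_spaces("/opt/Quick Annotator/env"))   # path containing a space
```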

Config Sections

There are many modular functions in QA whose behavior can be adjusted via hyper-parameters. These hyper-parameters are set in the config.ini file:
  • [common]
  • [flask]
  • [cuda]
  • [sqlalchemy]
  • [pooling]
  • [trainae]
  • [traintl]
  • [makepatches]
  • [makeembed]
  • [get_prediction]
  • [frontend]
  • [superpixel]
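Since config.ini is a standard INI file, its sections can be read with Python's stdlib configparser. In this sketch the key name "port" under [flask] is an assumption for illustration; consult the config.ini shipped with QuickAnnotator for the actual keys in each section.

```python
import configparser

# Minimal sketch of reading QA-style hyper-parameters from an INI file.
# Section names follow the README; the "port" key is an assumed example.
sample = """
[flask]
port = 5555

[cuda]
device = 0
"""

config = configparser.ConfigParser()
config.read_string(sample)  # for a real file, use config.read("config.ini")

port = config.getint("flask", "port")  # getint() parses the value as an integer
print(config.sections())
print(port)
```

For a real run, replacing read_string with config.read("config.ini") from the QuickAnnotator folder reads the shipped configuration.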

Advanced Usage


See wiki

Citation


Read the related paper in The Journal of Pathology: Clinical Research: Quick Annotator: an open-source digital pathology based rapid image annotation tool

Please use the entry below to cite this paper if you find this repository useful or if you use the software shared here in your research.

@misc{miao2021quick,
  title = {Quick Annotator: an open-source digital pathology based rapid image annotation tool},
  author = {Runtian Miao and Robert Toth and Yu Zhou and Anant Madabhushi and Andrew Janowczyk},
  year = {2021},
  journal = {The Journal of Pathology: Clinical Research},
  issn = {2056-4538}
}

Frequently Asked Questions

See FAQ

Owner

  • Login: choosehappy
  • Kind: user

GitHub Events

Total
  • Create event: 3
  • Commit comment event: 33
  • Issues event: 43
  • Watch event: 5
  • Member event: 1
  • Issue comment event: 103
  • Push event: 74
  • Pull request review event: 237
  • Pull request review comment event: 292
  • Pull request event: 59
Last Year
  • Create event: 3
  • Commit comment event: 33
  • Issues event: 43
  • Watch event: 5
  • Member event: 1
  • Issue comment event: 103
  • Push event: 74
  • Pull request review event: 237
  • Pull request review comment event: 292
  • Pull request event: 59

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 16
  • Total pull requests: 31
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 9 days
  • Total issue authors: 2
  • Total pull request authors: 2
  • Average comments per issue: 2.44
  • Average comments per pull request: 0.87
  • Merged pull requests: 19
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 16
  • Pull requests: 31
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 9 days
  • Issue authors: 2
  • Pull request authors: 2
  • Average comments per issue: 2.44
  • Average comments per pull request: 0.87
  • Merged pull requests: 19
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • jacksonjacobs1 (26)
  • dig1998 (1)
  • choosehappy (1)
Pull Request Authors
  • jacksonjacobs1 (30)
  • nanli-emory (4)
  • naguileraleal (1)
Top Labels
Issue Labels
enhancement (20) v2.0 (20) refactor (1) high priority (1) blocker (1) bug (1)
Pull Request Labels
enhancement (1) v2.0 (1)

Dependencies

cuda_11/requirements.txt pypi
  • Flask ==1.1.2
  • Flask_Restless ==0.17.0
  • Flask_SQLAlchemy ==2.4.4
  • Jinja2 ==3.0.3
  • Pillow ==8.1.2
  • SQLAlchemy ==1.3.22
  • Werkzeug ==2.0.3
  • albumentations ==0.4.3
  • config ==0.4.2
  • itsdangerous ==1.1.0
  • numpy ==1.20.3
  • opencv-python-headless ==4.5.3.56
  • requests ==2.25.1
  • scikit-image ==0.18.1
  • scikit-learn ==0.24.0
  • scipy ==1.6.0
  • tensorboard ==2.4.1
  • torch ==1.8.1
  • torchaudio ===0.8.1
  • torchvision ==0.9.1
  • ttach ==0.0.2
  • umap-learn ==0.5.1
Dockerfile docker
  • nvidia/cuda 11.0.3-cudnn8-devel-ubuntu20.04 build
cuda_11/Dockerfile docker
  • nvidia/cuda 11.0.3-cudnn8-devel-ubuntu20.04 build
cuda_12/Dockerfile docker
  • nvidia/cuda 12.1.0-cudnn8-devel-ubuntu20.04 build
cuda_12/requirements.txt pypi
  • Flask ==1.1.2
  • Flask_Restless ==0.17.0
  • Flask_SQLAlchemy ==2.4.4
  • Jinja2 ==3.0.3
  • Pillow ==8.1.2
  • SQLAlchemy ==1.3.22
  • Werkzeug ==2.0.3
  • albumentations ==0.4.3
  • config ==0.4.2
  • itsdangerous ==1.1.0
  • networkx <=3.1
  • numpy <=1.22
  • opencv-python-headless ==4.5.3.56
  • protobuf <3.21
  • requests ==2.25.1
  • scikit-image ==0.18.1
  • scikit-learn ==0.24.0
  • scipy ==1.6.0
  • tensorboard ==2.4.1
  • torch ==2.1.0
  • torchaudio ===0.8.1
  • torchvision ==0.16.0
  • ttach ==0.0.2
  • umap-learn ==0.5.1
requirements_cpuonly.txt pypi
  • Flask ==1.1.2
  • Flask_Restless ==0.17.0
  • Flask_SQLAlchemy ==2.4.4
  • Jinja2 ==3.0.3
  • Pillow ==8.1.2
  • SQLAlchemy ==1.3.22
  • Werkzeug ==2.0.3
  • albumentations ==0.4.3
  • config ==0.4.2
  • itsdangerous ==1.1.0
  • numpy <=1.22
  • opencv-python-headless ==4.5.3.56
  • protobuf <3.21
  • requests ==2.25.1
  • scikit-image ==0.18.1
  • scikit-learn ==0.24.0
  • scipy ==1.6.0
  • tensorboard ==2.4.1
  • torch *
  • torchaudio *
  • torchvision *
  • ttach ==0.0.2
  • umap-learn ==0.5.1