Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ✓ DOI references: Found 2 DOI reference(s) in README
- ✓ Academic publication links: Links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (13.4%) to scientific vocabulary
Repository
3D spheroid viability analysis
Basic Info
- Host: GitHub
- Owner: armanilab
- Language: Jupyter Notebook
- Default Branch: master
- Size: 749 MB
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Segmentation Algorithm to Assess the ViabilitY of 3D spheroid slices (aka SAAVY)
SAAVY was created to predict the viability percentage of 3D tissue cultures, cystic spheroids specifically, from brightfield microscopy plane-slice images. SAAVY uses Mask R-CNN for instance segmentation to detect the spheroids in the focal plane of a provided image. Morphology is taken into account through training with both live and dead spheroids: live spheroids are distinctly spherical with noticeable edges, whereas dead spheroids have a jagged outline from apoptotic cell death. We based the viability algorithm on human expert assessment and measured the intensity of each spheroid relative to the background; spheroids with higher viabilities are, on average, closer in intensity to the background. Further, we include artificial noise in the backgrounds of the training images to increase SAAVY's tolerance of noisy biological backgrounds (i.e., matrix protein deposits, matrices loaded with foreign materials, and/or co-cultured cells creating a background).
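The intensity heuristic described above can be sketched roughly as follows. This is an illustrative assumption, not SAAVY's actual implementation: the function name, the normalization by background intensity, and the 0-100% scaling are all made up for the example.

```python
import numpy as np

def estimate_viability(spheroid_pixels, background_pixels):
    """Toy intensity-based viability score (hypothetical helper).

    Spheroids whose mean intensity is close to the background mean
    score as more viable, mirroring the heuristic described above.
    """
    spheroid_mean = float(np.mean(spheroid_pixels))
    background_mean = float(np.mean(background_pixels))
    # Normalize the intensity gap by the background level, then map
    # a zero gap to 100% viability and large gaps toward 0%.
    gap = abs(spheroid_mean - background_mean) / max(background_mean, 1e-6)
    return max(0.0, 1.0 - gap) * 100.0
```

A spheroid whose pixels match the background exactly would score 100%, while one at half the background intensity would score 50% under this toy scaling.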
SAAVY outputs the viability percentage, average spheroid size, total count of spheroids included in the analysis, the total percent area of the image analyzed, and the average intensity value of the background. Our current code outputs per-image averages, but retains the ability to output the specific viability, size, and intensity values for each individual spheroid identified in a given image.
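As a rough illustration of the per-spheroid versus per-image distinction, assuming a tabular layout (the column names and values below are invented for the example and are not SAAVY's output schema):

```python
import pandas as pd

# Hypothetical per-spheroid records, one row per detected spheroid.
spheroids = pd.DataFrame({
    "image": ["img1.png", "img1.png", "img2.png"],
    "viability_pct": [92.0, 78.5, 64.0],
    "size_px": [1520, 980, 1210],
})

# Collapsing to per-image averages, analogous to SAAVY's default output.
per_image = spheroids.groupby("image").mean(numeric_only=True)
print(per_image)
```

Keeping the per-spheroid table around makes it trivial to recover either view: the grouped averages for reporting, or the raw rows for inspecting individual spheroids.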
The following document includes instructions for using SAAVY with the example data we provide (based on our manuscript on arXiv) and for uploading your own data for training and analysis. Our example data consists of cystic spheroids with clear and noisy backgrounds. The full image dataset is hosted on Zenodo. Instructions for training on images of your specific spheroid type (if of a differing morphology) are included below in the 'Fine Tune Model' section.
Instructions for Use
Note: All of the following steps require a Conda installation.
1. Check for an existing conda installation OR follow the directions to install conda.
2. Clone this repository using your device terminal or IDE of choice:
git clone https://github.com/armanilab/SAAVY.git
Enter the SAAVY directory in the terminal:
cd SAAVY
All folders (inputs, outputs, training, etc.) must be in the SAAVY directory. The following code is written to call from the working directory.
3a. If you are following our example or using similar cystic spheroids, download the model and save it to the SAAVY folder.
3b. If you are training your own images, skip this step.
4. Create the virtual environment:
conda create -n torch python=3.9
conda activate torch
5. Install packages. If you are running macOS, install PyTorch with the following command:
pip3 install torch torchvision torchaudio
Otherwise, for Windows:
# GPU install requires the CUDA toolkit: https://developer.nvidia.com/cuda-toolkit
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
# CPU only (slower)
conda install pytorch torchvision torchaudio cpuonly -c pytorch
Other requirements (macOS & Windows):
pip3 install matplotlib scikit-learn pillow tqdm pandas opencv-python
After installing, please check for the following:
- Python = 3.9
- PyTorch >= 2.0
- Pillow >= 9.4.0
- matplotlib >= 3.7.1
- (Optional but highly recommended) cuda-toolkit = 11.8 (only if running an NVIDIA GPU)
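A quick way to verify those versions is a short Python check. This sketch uses only the standard library and simply reports what is installed; it does not fail if a package is missing:

```python
import sys
from importlib import metadata

def check(pkg, minimum):
    """Print the installed version of pkg, or note that it is missing."""
    try:
        version = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        version = "not installed"
    print(f"{pkg}: {version} (want >= {minimum})")

print("Python:", sys.version.split()[0])  # want 3.9.x
for pkg, minimum in [("torch", "2.0"), ("Pillow", "9.4.0"),
                     ("matplotlib", "3.7.1")]:
    check(pkg, minimum)
```

Compare the printed versions against the list above before moving on.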
6a. If using our example images and training data, or if your samples are of cystic spheroid type, run SAAVY viability analysis using:
python predict.py --input "YOUR FOLDER HERE" --output "CREATE A FOLDER HERE" --model "torchFinal.pt"
For single-spheroid analysis, add "--singleSpheroid" to your command. Stop here: you're done!
Fine tune the model
If you are using your own images of a differing spheroid morphology, follow these steps:
6b. Download the VIA image annotator 2.0.11.
This will download a file to your computer named according to the version you downloaded (via-2.0.11).
7. Open the VIA folder and open the via.html file to run the program. It will open in a new browser window.
8. Load images into VIA (Add Files button in the annotator window).
Images must be in PNG or JPG format. We suggest opening images on your device and exporting from the viewer to PNG or JPG format. We used 30 images for our balanced training/validation image subset with an 80%/20% split.
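An 80%/20% split like the one we used can be sketched as follows. This is a hypothetical helper, not part of SAAVY; adjust the extensions and seed as needed:

```python
import random
from pathlib import Path

def split_images(image_dir, train_fraction=0.8, seed=0):
    """Shuffle annotated images and split them into training and
    validation sets (illustrative helper, not part of SAAVY)."""
    # Collect PNG/JPG images, sorted so the shuffle is reproducible.
    images = sorted(p for p in Path(image_dir).iterdir()
                    if p.suffix.lower() in {".png", ".jpg", ".jpeg"})
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_fraction)
    return images[:cut], images[cut:]
```

With 30 images this yields 24 training and 6 validation images, matching the 80%/20% split above.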
9. Create masks around the regions (spheroids/organoids) you want SAAVY to analyze. Use the polygon tool to trace the edges of the spheroids of interest.
For example: [example mask annotation image]
10. Export as JSON: go to the annotations menu and use the JSON dropdown option. The file will be exported to your default downloads folder (the same location as the VIA annotator files).

11. Rename the annotator JSON file according to its data split (training or validation).
You will have to do this twice: once for your training data and once for your validation data. Repeat steps 8-11 for the validation data. Afterward, you should have two folders, training and validation, each with its own images and annotation JSON file.
12. Move the annotator JSON file and training images into a training directory, i.e., a "trainingData" folder.
13. Move the validation images into a validation directory, i.e., a "validationData" folder.
14. Install packages for the training script:
pip3 install pycocotools tensorboard
15. Run:
python training.py --training "TRAINING FOLDER" --validation "VALIDATION FOLDER" --training_json "TRAINING ANNOTATIONS JSON" --validation_json "VALIDATION ANNOTATIONS JSON"
The trained model will be saved to your working directory, to be used with instructions 1-5 above.
Owner
- Login: armanilab
- Kind: user
- Repositories: 1
- Profile: https://github.com/armanilab