xaibiomedical

TensorFlow implementation for the paper Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images

https://github.com/yusufbrima/xaibiomedical

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.5%) to scientific vocabulary
Last synced: 6 months ago

Repository

TensorFlow implementation for the paper Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images

Basic Info
  • Host: GitHub
  • Owner: yusufbrima
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 17.6 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

XAIBiomedical

TensorFlow implementation for the pre-print Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images.

[Figure: Brain Tumor image samples]
[Figure: Chest X-ray image samples]

Description

This repository provides the official implementation of the above-mentioned paper. The directory structure is as follows:
  • Data: contains a sub-directory (brainTumorDataPublic) holding the dataset's .mat files, together with the README.txt and cvind.mat files; the dataset is described in README.txt. It also contains the Chest X-ray dataset alongside its metadata.
  • Figures: contains all plots generated by running the code, in any specified graphics format.
  • Models: contains the trained and evaluated models, each in its own sub-directory.

Getting Started

To get started, clone this repository using the following command:

```bash
git clone https://github.com/yusufbrima/XAIBiomedical.git
```

Dependencies

All dependencies are listed in the requirements.txt file and can be installed with the following command:

```bash
pip install -r requirements.txt
```

Please note that the code was written in Python 3.9.12.

Installing

  1. Open your terminal and navigate to the cloned repository:

     ```bash
     cd XAIBiomedical
     ```

  2. Create a Conda environment and activate it:

     ```bash
     conda create --name deepmed python=3.9.12
     conda activate deepmed
     ```

  3. Install the dependencies:

     ```bash
     pip install -r requirements.txt
     ```

    Download the datasets

  4. Download the Brain MRI dataset, which will be extracted to the ./Data directory:

     ```bash
     python cli.py --name Download
     ```

  5. Download the Chest X-ray dataset and extract it to the ./Data directory.

Training and Visual Saliency Analyses

Make sure the script is executable:

```bash
chmod +x run_script.sh
```

Then execute it:

```bash
./run_script.sh
```

This will:

  1. Build the respective dataset (resize to 225x225 and standardize) into a compressed NumPy array, saved in the Data directory with a .npz extension.
  2. Split the dataset 80/10/10 into train, validation, and test sets.
  3. Train all 9 models (VGG16, VGG19, ResNet50, Xception, ResNet50V2, InceptionV3, DenseNet121, EfficientNetB0, InceptionResNetV2); the model list can be customized to your needs. Each model is trained for 20 epochs on the Brain Tumor dataset and 40 epochs on the Chest X-ray dataset, with a batch size of 64. These configurations can be changed in the bash script; however, these were the optimal values we found.
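The preprocessing and split described in steps 1 and 2 can be sketched as follows (a minimal NumPy illustration; the function and file names here are hypothetical, not the repository's actual code, and the 225x225 resize step is assumed to happen upstream):

```python
import numpy as np

def build_and_split(images, labels, out_path=None, seed=42):
    """Standardize images and split 80/10/10 into train/valid/test sets."""
    # Standardize: zero mean, unit variance over the whole dataset.
    x = (images - images.mean()) / (images.std() + 1e-8)

    # Shuffle indices reproducibly, then cut at the 80% and 90% marks.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_train = int(0.8 * len(x))
    n_valid = int(0.1 * len(x))
    train, valid, test = np.split(idx, [n_train, n_train + n_valid])

    # Optionally save as a compressed .npz archive, as the pipeline
    # does for the Data directory.
    if out_path:
        np.savez_compressed(out_path,
                            x_train=x[train], y_train=labels[train],
                            x_valid=x[valid], y_valid=labels[valid],
                            x_test=x[test], y_test=labels[test])
    return x[train], x[valid], x[test]
```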

Empirical evaluation of saliency methods

Make sure the script is executable:

```bash
chmod +x run_saliency_experiments.sh
```

Next, execute it in the terminal:

```bash
./run_saliency_experiments.sh
```

This will call the Plot.py script, which computes the Performance Information Curves (PICs), i.e. both Accuracy Information Curves (AICs) and Softmax Information Curves (SICs), for each of the following image-based saliency methods: Guided Integrated Gradients, Vanilla Gradients, SmoothGrad, XRAI, GradCAM, GradCAM++, and ScoreCAM.
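As a rough illustration of the idea behind these curves (a hedged NumPy sketch, not the Plot.py implementation; the actual PIC computation comes from the saliency library): model accuracy is measured as progressively more of the image's information is revealed, guided by the saliency map, and the area under the resulting curve summarizes the method's quality.

```python
import numpy as np

def accuracy_information_curve_auc(info_fractions, accuracies):
    """Aggregate (information level, accuracy) pairs into an AIC area.

    info_fractions: fraction of image information revealed, ascending in [0, 1].
    accuracies: model accuracy measured at each information level.
    """
    info = np.asarray(info_fractions, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    # Area under the curve: higher means the saliency map recovers the
    # class-relevant evidence with less of the image revealed.
    return np.trapz(acc, info)

# A method whose top-ranked pixels quickly restore accuracy scores higher.
auc = accuracy_information_curve_auc([0.0, 0.25, 0.5, 0.75, 1.0],
                                     [0.2, 0.7, 0.85, 0.9, 0.92])
```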

Executing the Program by Individual Runs

The program can be executed with various commands for different purposes:
  • --name takes any of the following: Download, Process, Train, Evaluate, Saliency
  • --ds takes either BrainTumor (the default) or Covid
  • --epochs is the number of epochs to train each model on the selected dataset; the default is 10
  • --batch_size is the number of samples in a given mini-batch; the default is 32

  1. Download the Brain MRI dataset:

     ```bash
     python cli.py --name Download --ds BrainTumor
     ```

  2. Preprocess the dataset:

     ```bash
     python cli.py --name Process --ds BrainTumor
     ```

  3. Train the models:

     ```bash
     python cli.py --name Train --ds BrainTumor
     ```

  4. Perform saliency analysis:

     ```bash
     python cli.py --name Saliency --ds BrainTumor
     ```

     This will compute the saliency visualizations for the top-n best-performing models on the selected dataset. The default value for n is 3.
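The command-line interface described above can be sketched with argparse (a minimal illustration of the documented flags, not the repository's actual cli.py):

```python
import argparse

def build_parser():
    """CLI mirroring the documented flags: --name, --ds, --epochs, --batch_size."""
    parser = argparse.ArgumentParser(description="XAIBiomedical pipeline")
    parser.add_argument("--name", required=True,
                        choices=["Download", "Process", "Train", "Evaluate", "Saliency"],
                        help="Pipeline stage to run")
    parser.add_argument("--ds", default="BrainTumor",
                        choices=["BrainTumor", "Covid"],
                        help="Dataset to operate on")
    parser.add_argument("--epochs", type=int, default=10,
                        help="Number of training epochs per model")
    parser.add_argument("--batch_size", type=int, default=32,
                        help="Number of samples per mini-batch")
    return parser

# Example: equivalent to `python cli.py --name Train --ds BrainTumor`.
args = build_parser().parse_args(["--name", "Train", "--ds", "BrainTumor"])
```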

Sample results

Visual Explainability

[Figure: Brain Tumor explainability]

Empirical Evaluation of Saliency Methods

[Figure: Brain Tumor AIC curves] [Figure: Brain Tumor SIC curves]

[Figure: Chest X-ray AIC curves] [Figure: Chest X-ray SIC curves]

Authors

Contributors and contact information:
  • Yusuf Brima

Version History

  • 0.1
    • Initial Release

License

This project is licensed under the MIT License.

Acknowledgments

Special thanks to the PAIR team for their well-written saliency codebase.

Also, thanks to the tf-keras-vis team for their nicely written model visualization code.

Owner

  • Name: Yusuf Brima
  • Login: yusufbrima
  • Kind: user
  • Location: Osnabrück, Germany

Deep Representation Learning | Mathematical Causal Inference | Modelling | Computational Entrepreneurship

Citation (CITATION.cff)

@article{Brima2022VisualInterpretableExplainable,
  author = {Brima, Yusuf and Atemkeng, Marcellin},
  doi = {10.48550/arXiv.2208.00953},
  journal={arXiv preprint arXiv:2208.00953},
  month = {9},
  number = {1},
  pages = {1--10},
  title = {{Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images}},
  volume = {1},
  year = {2022}
}

GitHub Events

Total
Last Year

Dependencies

.github/workflows/python-package.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v3 composite
requirements.txt pypi
  • Cartopy ==0.23.0
  • Deprecated ==1.2.14
  • astor ==0.8.1
  • astunparse ==1.6.3
  • celluloid ==0.2.0
  • cftime ==1.6.3
  • chardet ==5.2.0
  • contourpy ==1.2.1
  • cycler ==0.12.1
  • fonttools ==4.51.0
  • imageio ==2.34.1
  • imbalanced-learn ==0.12.2
  • imblearn ==0.0
  • importlib_resources ==6.4.0
  • joblib ==1.4.2
  • keras ==3.3.3
  • keras-nightly ==3.3.3.dev2024050903
  • kiwisolver ==1.4.5
  • lazy_loader ==0.4
  • libclang ==18.1.1
  • llvmlite ==0.42.0
  • lxml ==5.2.1
  • markdown-it-py ==3.0.0
  • matplotlib ==3.8.4
  • mdurl ==0.1.2
  • mistune ==3.0.2
  • mkl-service ==2.4.0
  • ml-dtypes ==0.3.2
  • namex ==0.0.8
  • netCDF4 ==1.6.5
  • networkx ==3.2.1
  • numba ==0.59.1
  • numdifftools ==0.9.41
  • numpy ==1.23.5
  • nvidia-cublas-cu12 ==12.3.4.1
  • nvidia-cuda-cupti-cu12 ==12.3.101
  • nvidia-cuda-nvcc-cu12 ==12.3.107
  • nvidia-cuda-nvrtc-cu12 ==12.3.107
  • nvidia-cuda-runtime-cu12 ==12.3.101
  • nvidia-cudnn-cu12 ==8.9.7.29
  • nvidia-cufft-cu12 ==11.0.12.1
  • nvidia-curand-cu12 ==10.3.4.107
  • nvidia-cusolver-cu12 ==11.5.4.101
  • nvidia-cusparse-cu12 ==12.2.0.103
  • nvidia-nccl-cu12 ==2.19.3
  • nvidia-nvjitlink-cu12 ==12.3.101
  • opencv-python ==4.9.0.80
  • optree ==0.11.0
  • pandas ==2.2.2
  • pandas-datareader ==0.10.0
  • patsy ==0.5.6
  • pickle-mixin ==1.0.2
  • pillow ==10.3.0
  • protobuf ==3.20.3
  • pyasn1-modules ==0.2.8
  • pyparsing ==3.1.2
  • pyproj ==3.6.1
  • pyshp ==2.3.1
  • python-resize-image ==1.1.20
  • pytz ==2024.1
  • requests-oauthlib ==1.3.0
  • rich ==13.7.1
  • saliency ==0.2.1
  • scikit-image ==0.22.0
  • scikit-learn ==1.4.2
  • seaborn ==0.13.2
  • shapely ==2.0.4
  • sklearn ==0.0
  • statsmodels ==0.14.2
  • tensorflow ==2.16.1
  • tensorflow-addons ==0.23.0
  • tensorflow-io-gcs-filesystem ==0.37.0
  • tensorrt ==10.0.1
  • tensorrt-cu12 ==10.0.1
  • tensorrt-cu12-bindings ==10.0.1
  • tensorrt-cu12-libs ==10.0.1
  • tf-keras-vis ==0.8.7
  • threadpoolctl ==3.5.0
  • tifffile ==2024.5.3
  • tqdm ==4.66.4
  • typeguard ==2.13.3
  • tzdata ==2024.1