
The Enhanced Database of Interacting Protein Structures for Interface Prediction

https://github.com/bioinfomachinelearning/dips-plus

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 11 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, nature.com, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.6%) to scientific vocabulary

Keywords

bioinformatics datasets deep-learning machine-learning proteins
Last synced: 5 months ago

Repository

The Enhanced Database of Interacting Protein Structures for Interface Prediction

Basic Info
Statistics
  • Stars: 50
  • Watchers: 1
  • Forks: 8
  • Open Issues: 5
  • Releases: 3
Topics
bioinformatics datasets deep-learning machine-learning proteins
Created over 4 years ago · Last pushed 6 months ago
Metadata Files
Readme License Citation

README.md

# DIPS-Plus

The Enhanced Database of Interacting Protein Structures for Interface Prediction

[![Paper](http://img.shields.io/badge/paper-arxiv.2106.04362-B31B1B.svg)](https://www.nature.com/articles/s41597-023-02409-3) [![CC BY 4.0][cc-by-shield]][cc-by] [![Primary Data DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5134732.svg)](https://doi.org/10.5281/zenodo.5134732) [![Supplementary Data DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8140981.svg)](https://doi.org/10.5281/zenodo.8140981) [PyPI](https://pypi.org/project/DIPS-Plus/)

Versioning

  • Version 1.0.0: Initial release of DIPS-Plus and DB5-Plus (DOI: 10.5281/zenodo.4815267)
  • Version 1.1.0: Minor updates to DIPS-Plus and DB5-Plus' tar archives (DOI: 10.5281/zenodo.5134732)
    • DIPS-Plus' final 'raw' tar archive now includes standardized 80%-20% lists of filenames for training and validation, respectively
    • DB5-Plus' final 'raw' tar archive now includes (optional) standardized lists of filenames for training and validation, respectively
    • DB5-Plus' final 'raw' tar archive now also includes a corrected (i.e. de-duplicated) list of filenames for its 55 test complexes
    • Benchmark results included in our paper were run after this issue was resolved
    • However, if you ran experiments using DB5-Plus' filename list for its test complexes, please re-run them using the latest list
  • Version 1.2.0: Minor additions to DIPS-Plus tar archives, including new residue-level intrinsic disorder region annotations and raw Jackhmmer-small BFD MSAs (Supplementary Data DOI: 10.5281/zenodo.8071136)
  • Version 1.3.0: Minor additions to DIPS-Plus tar archives, including new FoldSeek-based structure-focused training and validation splits, residue-level (scalar) disorder propensities, and a Graphein-based featurization pipeline (Supplementary Data DOI: 10.5281/zenodo.8140981)

How to set up

First, download Mamba (if not already downloaded):

```bash
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-$(uname)-$(uname -m).sh"
bash Mambaforge-$(uname)-$(uname -m).sh  # Accept all terms and install to the default location
rm Mambaforge-$(uname)-$(uname -m).sh  # (Optionally) remove the installer after using it
source ~/.bashrc  # Alternatively, restart your shell session to achieve the same result
```

Then, create and configure Mamba environment:

```bash
# Clone project:
git clone https://github.com/BioinfoMachineLearning/DIPS-Plus
cd DIPS-Plus

# Create Conda environment using local 'environment.yml' file:
mamba env create -f environment.yml
conda activate DIPS-Plus  # Note: one still needs to use conda to (de)activate environments

# Install local project as package:
pip3 install -e .
```

To install PSAIA for feature generation, first install GCC 10:

```bash
# Install GCC 10 for Ubuntu 20.04:
sudo apt install software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/ppa
sudo apt update
sudo apt install gcc-10 g++-10

# Or install GCC 10 for Arch Linux/Manjaro:
yay -S gcc10
```

Then install QT4 for PSAIA:

```bash
# Install QT4 for Ubuntu 20.04:
sudo add-apt-repository ppa:rock-core/qt4
sudo apt update
sudo apt install libqt4* libqtcore4 libqtgui4 libqtwebkit4 qt4* libxext-dev

# Or install QT4 for Arch Linux/Manjaro:
yay -S qt4
```

Conclude by compiling PSAIA from source:

```bash
# Select the location to install the software:
MY_LOCAL=~/Programs

# Download and extract PSAIA's source code:
mkdir "$MY_LOCAL"
cd "$MY_LOCAL"
wget http://complex.zesoi.fer.hr/data/PSAIA-1.0-source.tar.gz
tar -xvzf PSAIA-1.0-source.tar.gz

# Compile PSAIA (i.e., a GUI for PSA):
cd PSAIA_1.0_source/make/linux/psaia/
qmake-qt4 psaia.pro
make

# Compile PSA (i.e., the protein structure analysis (PSA) program):
cd ../psa/
qmake-qt4 psa.pro
make

# Compile PIA (i.e., the protein interaction analysis (PIA) program):
cd ../pia/
qmake-qt4 pia.pro
make

# Test run any of the above-compiled programs:
cd "$MY_LOCAL"/PSAIA_1.0_source/bin/linux

# Test run PSAIA inside a GUI:
./psaia/psaia

# Test run PIA through a terminal:
./pia/pia

# Test run PSA through a terminal:
./psa/psa
```

Lastly, install Docker following the instructions from https://docs.docker.com/engine/install/

How to generate protein feature inputs

In our feature generation notebook, we provide examples of how users can generate the protein features described in our accompanying manuscript for individual protein inputs.
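As a rough illustration of the kind of residue-level features covered there (DIPS-Plus includes DSSP-derived secondary structure and solvent accessibility, among others), the following is a minimal sketch using Biopython's DSSP wrapper rather than the notebook's own code; the input filename is hypothetical, and the external DSSP executable must be installed separately.

```python
# A minimal sketch (not the project's notebook code): compute per-residue secondary structure
# and relative solvent accessibility for a single protein input via Biopython's DSSP wrapper.
# 'example.pdb' is a hypothetical input file; DSSP must be installed on the system.
from Bio.PDB import PDBParser
from Bio.PDB.DSSP import DSSP

parser = PDBParser(QUIET=True)
structure = parser.get_structure("example", "example.pdb")
model = structure[0]
dssp = DSSP(model, "example.pdb")  # runs the external DSSP executable

# DSSP keys are (chain_id, residue_id) tuples; each value starts with
# (dssp_index, amino_acid, secondary_structure, relative_ASA, ...).
for key in list(dssp.keys())[:5]:
    _, amino_acid, sec_struct, rel_asa = dssp[key][:4]
    print(key, amino_acid, sec_struct, rel_asa)
```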

How to use data

In our data usage notebook, we provide examples of how users might use DIPS-Plus (or DB5-Plus) for downstream analysis or prediction tasks. For example, to train a new NeiA model with DB5-Plus as its cross-validation dataset, first download DB5-Plus' raw files and process them via the data_usage notebook:

```bash
mkdir -p project/datasets/DB5/final
wget https://zenodo.org/record/5134732/files/final_raw_db5.tar.gz -O project/datasets/DB5/final/final_raw_db5.tar.gz
tar -xzf project/datasets/DB5/final/final_raw_db5.tar.gz -C project/datasets/DB5/final/

# To process these raw files for training and subsequently train a model:
python3 notebooks/data_usage.py
```

How to split data using FoldSeek

We provide users with the ability to perform structure-based splits of the complexes in DIPS-Plus using FoldSeek. The splitting script lets users customize how stringent FoldSeek's searches should be for structure-based splitting. Moreover, we provide standardized structure-based splits of DIPS-Plus' complexes in the corresponding supplementary Zenodo data record.
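The splitting script's exact interface isn't reproduced here, but the idea can be illustrated with a hedged sketch: run an all-vs-all FoldSeek structure search over the complexes and tighten or loosen the search thresholds to control how aggressively structurally similar complexes end up in the same split. The directory, output names, and threshold values below are illustrative assumptions, and `foldseek` must be installed separately.

```python
# A hedged sketch (not the repository's splitting script): run an all-vs-all FoldSeek
# structure search so that structurally similar entries can be assigned to the same split.
# All paths below are hypothetical; `foldseek` must be available on PATH.
import subprocess

structures_dir = "path/to/pdb_files"  # hypothetical directory of structures to compare
result_file = "foldseek_hits.m8"

subprocess.run(
    [
        "foldseek", "easy-search",
        structures_dir, structures_dir,  # all-vs-all search
        result_file, "foldseek_tmp",
        "-e", "1e-3",                    # stricter e-value -> more conservative grouping
        "-c", "0.5",                     # minimum alignment coverage
    ],
    check=True,
)

# Each line of foldseek_hits.m8 describes a (query, target, ...) alignment; complexes whose
# chains align to one another above the chosen thresholds can then share a split.
```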

How to featurize DIPS-Plus complexes using Graphein

In the new graph featurization script, we provide an example of how users may install new Expasy protein scale features using the Graphein library. The script is designed to be amenable to simple user customization such that users can use this script to insert arbitrary new Graphein-based features into each DIPS-Plus complex's pair file, for downstream tasks.
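As a hedged sketch of the underlying Graphein calls (not the repository's script itself), the following assumes Graphein's `ProteinGraphConfig`, `construct_graph`, and `expasy_protein_scale` node-metadata function from its protein API; the PDB code is arbitrary.

```python
# A minimal sketch: build a Graphein protein graph whose residue nodes carry Expasy
# protein-scale values. This illustrates the library calls, not the pair-file insertion
# performed by the repository's own featurization script.
from graphein.protein.config import ProteinGraphConfig
from graphein.protein.features.nodes.amino_acid import expasy_protein_scale
from graphein.protein.graphs import construct_graph

config = ProteinGraphConfig(node_metadata_functions=[expasy_protein_scale])
graph = construct_graph(config=config, pdb_code="10gs")  # fetches the structure from the PDB

# Inspect the metadata now attached to one residue node.
node, data = next(iter(graph.nodes(data=True)))
print(node, sorted(data.keys()))
```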

Standard DIPS-Plus directory structure

```
DIPS-Plus
└───project
    └───datasets
        ├───DB5
        │   ├───final
        │   │   ├───processed       # task-ready features for each dataset example
        │   │   └───raw             # generic features for each dataset example
        │   ├───interim
        │   │   ├───complexes       # metadata for each dataset example
        │   │   ├───external_feats  # features curated for each dataset example using external tools
        │   │   └───pairs           # pair-wise features for each dataset example
        │   └───raw                 # raw PDB data downloads for each dataset example
        └───DIPS
            ├───filters             # filters to apply to each (un-pruned) dataset example
            ├───final
            │   ├───processed       # task-ready features for each dataset example
            │   └───raw             # generic features for each dataset example
            ├───interim
            │   ├───complexes       # metadata for each dataset example
            │   ├───external_feats  # features curated for each dataset example using external tools
            │   ├───pairs-pruned    # filtered pair-wise features for each dataset example
            │   └───parsed          # pair-wise features for each dataset example after initial parsing
            └───raw
                └───pdb             # raw PDB data downloads for each dataset example
```

How to compile DIPS-Plus from scratch

Retrieve protein complexes from the RCSB PDB and build out directory structure:

```bash
# Remove all existing training/testing sample lists:
rm project/datasets/DIPS/final/raw/pairs-postprocessed.txt project/datasets/DIPS/final/raw/pairs-postprocessed-train.txt project/datasets/DIPS/final/raw/pairs-postprocessed-val.txt project/datasets/DIPS/final/raw/pairs-postprocessed-test.txt

# Create data directories (if not already created):
mkdir project/datasets/DIPS/raw project/datasets/DIPS/raw/pdb project/datasets/DIPS/interim project/datasets/DIPS/interim/pairs-pruned project/datasets/DIPS/interim/external_feats project/datasets/DIPS/final project/datasets/DIPS/final/raw project/datasets/DIPS/final/processed

# Download the raw PDB files:
rsync -rlpt -v -z --delete --port=33444 --include='*.gz' --include='*.xz' --include='*/' --exclude '*' \
  rsync.rcsb.org::ftp_data/biounit/coordinates/divided/ project/datasets/DIPS/raw/pdb

# Extract the raw PDB files:
python3 project/datasets/builder/extract_raw_pdb_gz_archives.py project/datasets/DIPS/raw/pdb

# Process the raw PDB data into associated pair files:
python3 project/datasets/builder/make_dataset.py project/datasets/DIPS/raw/pdb project/datasets/DIPS/interim --num_cpus 28 --source_type rcsb --bound

# Apply additional filtering criteria:
python3 project/datasets/builder/prune_pairs.py project/datasets/DIPS/interim/pairs project/datasets/DIPS/filters project/datasets/DIPS/interim/pairs-pruned --num_cpus 28

# Generate externally-sourced features:
python3 project/datasets/builder/generate_psaia_features.py "$PSAIADIR" "$PROJDIR"/project/datasets/builder/psaia_config_file_dips.txt "$PROJDIR"/project/datasets/DIPS/raw/pdb "$PROJDIR"/project/datasets/DIPS/interim/parsed "$PROJDIR"/project/datasets/DIPS/interim/pairs-pruned "$PROJDIR"/project/datasets/DIPS/interim/external_feats --source_type rcsb
python3 project/datasets/builder/generate_hhsuite_features.py "$PROJDIR"/project/datasets/DIPS/interim/parsed "$PROJDIR"/project/datasets/DIPS/interim/pairs-pruned "$HHSUITEDB" "$PROJDIR"/project/datasets/DIPS/interim/external_feats --num_cpu_jobs 4 --num_cpus_per_job 8 --num_iter 2 --source_type rcsb --write_file  # Note: after this, one needs to re-run this command with `--read_file` instead

# Generate multiple sequence alignments (MSAs) using a smaller sequence database (if not already created using the standard BFD):
DOWNLOADDIR="$HHSUITEDBDIR" && ROOTDIR="${DOWNLOADDIR}/small_bfd" && SOURCEURL="https://storage.googleapis.com/alphafold-databases/reduced_dbs/bfd-first_non_consensus_sequences.fasta.gz" && BASENAME=$(basename "${SOURCEURL}") && mkdir --parents "${ROOTDIR}" && aria2c "${SOURCEURL}" --dir="${ROOTDIR}" && pushd "${ROOTDIR}" && gunzip "${ROOTDIR}/${BASENAME}" && popd  # e.g., download the small BFD
python3 project/datasets/builder/generate_hhsuite_features.py "$PROJDIR"/project/datasets/DIPS/interim/parsed "$PROJDIR"/project/datasets/DIPS/interim/pairs-pruned "$HHSUITEDBDIR"/small_bfd "$PROJDIR"/project/datasets/DIPS/interim/external_feats --num_cpu_jobs 4 --num_cpus_per_job 8 --num_iter 2 --source_type rcsb --generate_msa_only --write_file  # Note: after this, one needs to re-run this command with `--read_file` instead

# Identify interfaces within intrinsically disordered regions (IDRs):
# (1) Pull down the Docker image for flDPnn
docker pull docker.io/sinaghadermarzi/fldpnn
# (2) For all sequences in the dataset, predict which interface residues reside within IDRs
python3 project/datasets/builder/annotate_idr_interfaces.py "$PROJDIR"/project/datasets/DIPS/final/raw --num_cpus 16

# Add new features to the filtered pairs, ensuring that the pruned pairs' original PDB files are stored locally for DSSP:
python3 project/datasets/builder/download_missing_pruned_pair_pdbs.py "$PROJDIR"/project/datasets/DIPS/raw/pdb "$PROJDIR"/project/datasets/DIPS/interim/pairs-pruned --num_cpus 32 --rank "$1" --size "$2"
python3 project/datasets/builder/postprocess_pruned_pairs.py "$PROJDIR"/project/datasets/DIPS/raw/pdb "$PROJDIR"/project/datasets/DIPS/interim/pairs-pruned "$PROJDIR"/project/datasets/DIPS/interim/external_feats "$PROJDIR"/project/datasets/DIPS/final/raw --num_cpus 32

# Partition dataset filenames, aggregate statistics, and impute missing features:
python3 project/datasets/builder/partition_dataset_filenames.py "$PROJDIR"/project/datasets/DIPS/final/raw --source_type rcsb --filter_by_atom_count True --max_atom_count 17500 --rank "$1" --size "$2"
python3 project/datasets/builder/collect_dataset_statistics.py "$PROJDIR"/project/datasets/DIPS/final/raw --rank "$1" --size "$2"
python3 project/datasets/builder/log_dataset_statistics.py "$PROJDIR"/project/datasets/DIPS/final/raw --rank "$1" --size "$2"
python3 project/datasets/builder/impute_missing_feature_values.py "$PROJDIR"/project/datasets/DIPS/final/raw --impute_atom_features False --advanced_logging False --num_cpus 32 --rank "$1" --size "$2"

# Optionally convert each postprocessed (final 'raw') complex into a pair of DGL graphs (final 'processed') with labels:
python3 project/datasets/builder/convert_complexes_to_graphs.py "$PROJDIR"/project/datasets/DIPS/final/raw "$PROJDIR"/project/datasets/DIPS/final/processed --num_cpus 32 --edge_dist_cutoff 15.0 --edge_limit 5000 --self_loops True --rank "$1" --size "$2"
```

How to assemble DB5-Plus

Fetch prepared protein complexes from Dataverse:

```bash
# Download the prepared DB5 files:
wget -O project/datasets/DB5.tar.gz "https://dataverse.harvard.edu/api/access/datafile/:persistentId?persistentId=doi:10.7910/DVN/H93ZKK/BXXQCG"

# Extract downloaded DB5 archive:
tar -xzf project/datasets/DB5.tar.gz --directory project/datasets/

# Remove (now) redundant DB5 archive and other miscellaneous files:
rm project/datasets/DB5.tar.gz project/datasets/DB5/.README.swp
rm -rf project/datasets/DB5/interim project/datasets/DB5/processed

# Create relevant interim and final data directories:
mkdir project/datasets/DB5/interim project/datasets/DB5/interim/external_feats
mkdir project/datasets/DB5/final project/datasets/DB5/final/raw project/datasets/DB5/final/processed

# Construct DB5 dataset pairs:
python3 project/datasets/builder/make_dataset.py "$PROJDIR"/project/datasets/DB5/raw "$PROJDIR"/project/datasets/DB5/interim --num_cpus 32 --source_type db5 --unbound

# Generate externally-sourced features:
python3 project/datasets/builder/generate_psaia_features.py "$PSAIADIR" "$PROJDIR"/project/datasets/builder/psaia_config_file_db5.txt "$PROJDIR"/project/datasets/DB5/raw "$PROJDIR"/project/datasets/DB5/interim/parsed "$PROJDIR"/project/datasets/DB5/interim/parsed "$PROJDIR"/project/datasets/DB5/interim/external_feats --source_type db5
python3 project/datasets/builder/generate_hhsuite_features.py "$PROJDIR"/project/datasets/DB5/interim/parsed "$PROJDIR"/project/datasets/DB5/interim/parsed "$HHSUITEDB" "$PROJDIR"/project/datasets/DB5/interim/external_feats --num_cpu_jobs 4 --num_cpus_per_job 8 --num_iter 2 --source_type db5 --write_file

# Add new features to the filtered pairs:
python3 project/datasets/builder/postprocess_pruned_pairs.py "$PROJDIR"/project/datasets/DB5/raw "$PROJDIR"/project/datasets/DB5/interim/pairs "$PROJDIR"/project/datasets/DB5/interim/external_feats "$PROJDIR"/project/datasets/DB5/final/raw --num_cpus 32 --source_type db5

# Partition dataset filenames, aggregate statistics, and impute missing features:
python3 project/datasets/builder/partition_dataset_filenames.py "$PROJDIR"/project/datasets/DB5/final/raw --source_type db5 --rank "$1" --size "$2"
python3 project/datasets/builder/collect_dataset_statistics.py "$PROJDIR"/project/datasets/DB5/final/raw --rank "$1" --size "$2"
python3 project/datasets/builder/log_dataset_statistics.py "$PROJDIR"/project/datasets/DB5/final/raw --rank "$1" --size "$2"
python3 project/datasets/builder/impute_missing_feature_values.py "$PROJDIR"/project/datasets/DB5/final/raw --impute_atom_features False --advanced_logging False --num_cpus 32 --rank "$1" --size "$2"

# Optionally convert each postprocessed (final 'raw') complex into a pair of DGL graphs (final 'processed') with labels:
python3 project/datasets/builder/convert_complexes_to_graphs.py "$PROJDIR"/project/datasets/DB5/final/raw "$PROJDIR"/project/datasets/DB5/final/processed --num_cpus 32 --edge_dist_cutoff 15.0 --edge_limit 5000 --self_loops True --rank "$1" --size "$2"
```

How to reassemble DIPS-Plus' "interim" external features

We split the (tar.gz) archive into eight separate parts with `split -b 4096M interim_external_feats_dips.tar.gz "interim_external_feats_dips.tar.gz.part"` to upload it to the dataset's primary Zenodo record. To recover the original archive:

```bash
# Reassemble the external features archive with 'cat':
cat interim_external_feats_dips.tar.gz.parta* > interim_external_feats_dips.tar.gz
```

Python 2 to 3 pickle file solution

While using Python 3 in this project, you may encounter the following error if you try to postprocess '.dill' pruned pairs that were created using Python 2.

ModuleNotFoundError: No module named 'dill.dill'

  1. To resolve it, ensure that the 'dill' package's version is greater than 0.3.2.
  2. If the problem persists, edit the pickle.py file corresponding to your Conda environment's Python 3 installation (e.g., ~/DIPS-Plus/venv/lib/python3.8/pickle.py) and add the statement

```python
if module == 'dill.dill':
    module = 'dill._dill'
```

to the end of the

```python
if self.proto < 3 and self.fix_imports:
```

block in the Unpickler class's find_class() function (e.g., line 1577 of Python 3.8.5's pickle.py).
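If editing the standard library's pickle.py is undesirable, the same remapping can instead be applied in a small custom Unpickler. This is a sketch rather than code from the repository, and the input path below is only illustrative.

```python
# Sketch of an alternative to patching pickle.py: remap the legacy 'dill.dill' module
# name to 'dill._dill' at load time. The '.dill' file path below is hypothetical.
import pickle


class DillCompatUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module == 'dill.dill':  # module path recorded by Python 2-era dill pickles
            module = 'dill._dill'
        return super().find_class(module, name)


with open('path/to/example_pruned_pair.dill', 'rb') as f:
    pair = DillCompatUnpickler(f).load()
```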

Citation

If you find DIPS-Plus useful in your research, please cite:

```bibtex
@article{morehead2023dips,
  title={DIPS-Plus: The enhanced database of interacting protein structures for interface prediction},
  author={Morehead, Alex and Chen, Chen and Sedova, Ada and Cheng, Jianlin},
  journal={Scientific Data},
  volume={10},
  number={1},
  pages={509},
  year={2023},
  publisher={Nature Publishing Group UK London}
}
```

Owner

  • Name: BioinfoMachineLearning
  • Login: BioinfoMachineLearning
  • Kind: organization

Citation (citation.bib)

@misc{morehead2021dipsplus,
      title={DIPS-Plus: The Enhanced Database of Interacting Protein Structures for Interface Prediction}, 
      author={Alex Morehead and Chen Chen and Ada Sedova and Jianlin Cheng},
      year={2021},
      eprint={2106.04362},
      archivePrefix={arXiv},
      primaryClass={q-bio.QM}
}

GitHub Events

Total
  • Issues event: 2
  • Watch event: 3
  • Push event: 2
Last Year
  • Issues event: 2
  • Watch event: 3
  • Push event: 2

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 74
  • Total Committers: 2
  • Avg Commits per committer: 37.0
  • Development Distribution Score (DDS): 0.014
Top Committers
Name Email Commits
Alex Morehead a****d@g****m 73
Adam Leach q****n@g****m 1

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 18
  • Total pull requests: 6
  • Average time to close issues: 22 days
  • Average time to close pull requests: 43 minutes
  • Total issue authors: 10
  • Total pull request authors: 3
  • Average comments per issue: 2.72
  • Average comments per pull request: 0.17
  • Merged pull requests: 6
  • Bot issues: 0
  • Bot pull requests: 1
Past Year
  • Issues: 3
  • Pull requests: 0
  • Average time to close issues: 17 days
  • Average time to close pull requests: N/A
  • Issue authors: 2
  • Pull request authors: 0
  • Average comments per issue: 2.67
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • XuBlack (6)
  • rubenalv (2)
  • anton-bushuiev (2)
  • OliviaViessmann (2)
  • onlyonewater (2)
  • aggelos-michael-papadopoulos (1)
  • lijiashan2020 (1)
  • zhenpingli (1)
  • orange2350 (1)
  • terry-r123 (1)
  • leiqian-temple (1)
  • octavian-ganea (1)
Pull Request Authors
  • amorehead (4)
  • qazwsxal (1)
  • dependabot[bot] (1)
Top Labels
Issue Labels
Pull Request Labels
dependencies (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 5 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 11
  • Total maintainers: 1
pypi.org: dips-plus

The Enhanced Database of Interacting Protein Structures for Interface Prediction

  • Versions: 11
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 5 Last month
Rankings
Dependent packages count: 10.1%
Stargazers count: 10.8%
Forks count: 12.6%
Average: 19.2%
Dependent repos count: 21.5%
Downloads: 41.0%
Maintainers (1)
Last synced: 6 months ago

Dependencies

setup.py pypi
  • Sphinx ==4.0.1
  • atom3-py3 ==0.1.9.9
  • click ==7.0.0
  • dill ==0.3.3
  • easy-parallel-py3 ==0.1.6.4
  • module *
  • mpi4py ==3.0.3
  • setuptools ==56.2.0
  • tqdm ==4.49.0
environment.yml pypi
  • absl-py ==1.4.0
  • aiohttp ==3.8.4
  • aiosignal ==1.3.1
  • alabaster ==0.7.13
  • async-timeout ==4.0.2
  • attrs ==23.1.0
  • babel ==2.12.1
  • beautifulsoup4 ==4.12.2
  • biopandas ==0.5.0.dev0
  • bioservices ==1.11.2
  • cachetools ==5.3.1
  • cattrs ==23.1.2
  • click ==7.0
  • colorlog ==6.7.0
  • configparser ==5.3.0
  • contourpy ==1.1.0
  • deepdiff ==6.3.1
  • dill ==0.3.3
  • docker-pycreds ==0.4.0
  • docutils ==0.17.1
  • easy-parallel-py3 ==0.1.6.4
  • easydev ==0.12.1
  • exceptiongroup ==1.1.2
  • fairscale ==0.4.0
  • fonttools ==4.40.0
  • frozenlist ==1.3.3
  • fsspec ==2023.5.0
  • future ==0.18.3
  • gevent ==22.10.2
  • gitdb ==4.0.10
  • gitpython ==3.1.31
  • google-auth ==2.19.0
  • google-auth-oauthlib ==1.0.0
  • greenlet ==2.0.2
  • grequests ==0.7.0
  • grpcio ==1.54.2
  • h5py ==3.8.0
  • hickle ==5.0.2
  • imagesize ==1.4.1
  • importlib-resources ==6.0.0
  • install ==1.3.5
  • jaxtyping ==0.2.19
  • jinja2 ==2.11.3
  • loguru ==0.7.0
  • looseversion ==1.1.2
  • lxml ==4.9.3
  • markdown ==3.4.3
  • markdown-it-py ==3.0.0
  • markupsafe ==1.1.1
  • matplotlib ==3.7.2
  • mdurl ==0.1.2
  • mmtf-python ==1.1.3
  • mpi4py ==3.0.3
  • msgpack ==1.0.5
  • multidict ==6.0.4
  • multipledispatch ==1.0.0
  • multiprocess ==0.70.11.1
  • numpy ==1.23.5
  • oauthlib ==3.2.2
  • ordered-set ==4.1.0
  • pathos ==0.2.7
  • pathtools ==0.1.2
  • pdb-tools ==2.5.0
  • platformdirs ==3.8.1
  • plotly ==5.15.0
  • pox ==0.3.2
  • ppft ==1.7.6.6
  • promise ==2.3
  • protobuf ==3.20.3
  • pyasn1 ==0.5.0
  • pyasn1-modules ==0.3.0
  • pydantic ==1.10.11
  • pydeprecate ==0.3.1
  • pyparsing ==3.0.9
  • pytorch-lightning ==1.4.8
  • pyyaml ==5.4.1
  • requests-cache ==1.1.0
  • requests-oauthlib ==1.3.1
  • rich ==13.4.2
  • rich-click ==1.6.1
  • rsa ==4.9
  • seaborn ==0.12.2
  • sentry-sdk ==1.24.0
  • shortuuid ==1.0.11
  • smmap ==5.0.0
  • snowballstemmer ==2.2.0
  • soupsieve ==2.4.1
  • subprocess32 ==3.5.4
  • suds-community ==1.1.2
  • tenacity ==8.2.2
  • tensorboard ==2.13.0
  • tensorboard-data-server ==0.7.0
  • termcolor ==2.3.0
  • torchmetrics ==0.5.1
  • typeguard ==4.0.0
  • url-normalize ==1.4.3
  • wandb ==0.12.2
  • werkzeug ==2.3.6
  • wget ==3.2
  • wrapt ==1.15.0
  • xarray ==2023.1.0
  • xmltodict ==0.13.0
  • yarl ==1.9.2
  • yaspin ==2.3.0
  • zope-event ==5.0
  • zope-interface ==6.0