transformers--clipseg

This is the code to train CLIPSeg based on the Hugging Face Transformers library

https://github.com/weimengmeng1999/transformers--clipseg

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.8%) to scientific vocabulary
Last synced: 7 months ago

Repository

This is the code to train CLIPSeg based on the Hugging Face Transformers library

Basic Info
  • Host: GitHub
  • Owner: weimengmeng1999
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 10.9 MB
Statistics
  • Stars: 3
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 3 years ago · Last pushed about 3 years ago
Metadata Files
  • Readme
  • Contributing
  • License
  • Code of conduct
  • Citation

README.md

Transformers--CLIPSeg

This is the code to train CLIPSeg based on the Hugging Face Transformers library.

The training script is at /examples/pytorch/contrastive-image-text/run_clipseg.py

Some changes were also made to modeling_Clipseg.py

To run CLIPSeg training, use the following command:

```bash
python examples/pytorch/contrastive-image-text/run_clipseg.py \
    --output_dir "clipseg.." \
    --model_name_or_path "CIDAS/clipseg-rd64-refined" \
    --feature_extractor_name "CIDAS/clipseg-rd64-refined" \
    --image_column "image_path" \
    --caption_column "seg_class_name" \
    --label_column "mask_path" \
    --train_file "../train_instruments.json" \
    --validation_file "../valid_instruments.json" \
    --test_file "../test_instruments.json" \
    --max_seq_length 77 \
    --remove_unused_columns=False \
    --do_train \
    --per_device_train_batch_size 24 \
    --per_device_eval_batch_size 24 \
    --num_train_epochs 400 \
    --learning_rate "5e-4" \
    --warmup_steps 0 \
    --weight_decay 0.1 \
    --overwrite_output_dir \
    --report_to none
```
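The --train_file / --validation_file / --test_file arguments point at JSON files whose fields match the --image_column, --caption_column, and --label_column flags. The exact schema is not shown in the repository; a minimal sketch, assuming one JSON object per line (the convention the contrastive-image-text examples load through the datasets json loader) and hypothetical file paths:

```python
import json

# Hypothetical records: the real train_instruments.json layout is not shown in
# the README. This assumes one JSON object per line whose keys match the
# --image_column, --caption_column, and --label_column flags above.
samples = [
    {
        "image_path": "frames/seq1_frame000.png",
        "seg_class_name": "Bipolar Forceps",
        "mask_path": "masks/seq1_frame000.png",
    },
    {
        "image_path": "frames/seq1_frame001.png",
        "seg_class_name": "Prograsp Forceps",
        "mask_path": "masks/seq1_frame001.png",
    },
]

# Write a JSON-lines file: one sample per line.
with open("train_instruments.json", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```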

CLIPSeg training summary

CLIPSeg is another model we want to try, leveraging text/visual prompts to help with our instrument segmentation task. CLIPSeg can serve three settings:

  • Referring Expression Segmentation
  • Generalized Zero-Shot Segmentation
  • One-Shot Semantic Segmentation

Experiment 1: Training CLIPSeg for EndoVis2017 with text prompt only

Training stage input:

  • Query image (samples from the EndoVis2017 training set)
  • Text prompt (segmentation class name / segmentation class description)

Experiment 1.1, segmentation class name example: ["Bipolar Forceps"]

Experiment 1.2, segmentation class description example: ["Bipolar forceps with double-action fine curved jaws and horizontal serrations, made of medical-grade stainless steel and surgical-grade material; includes a handle and a dark or grey plastic-like cylindrical shaft, and a complex robotic joint connecting the jaws/handle to the shaft"]

Testing stage:

  • Input: sample in EndoVis2017 testing set; Text prompt
  • Output example (binary) for Experiment 1.1: does not work

  • Output example (binary) for Experiment 1.2: works, but results are very similar to the pre-trained CLIPSeg

  • On the EndoVis2017 testing set, Experiment 1.2: mean IoU = 79.92%
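The mean IoU figures reported here are presumably computed per test image over binary masks; a minimal sketch of that metric (assuming binary masks and simple averaging over images, not the repo's actual evaluation code):

```python
import numpy as np

def binary_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: two empty masks count as a perfect match.
    return 1.0 if union == 0 else float(inter) / float(union)

def mean_iou(preds, targets) -> float:
    """Average per-image IoU over a list of (prediction, ground truth) pairs."""
    return float(np.mean([binary_iou(p, t) for p, t in zip(preds, targets)]))
```

For example, a prediction covering two pixels of which one overlaps a one-pixel ground-truth mask scores IoU = 1/2.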

Experiment 2: Training CLIPSeg for EndoVis2017 with randomly mixed text and visual support conditionals

Training stage:

  • Input:
  • Query image (samples from the EndoVis2017 training set)
  • Text prompt (segmentation class description); the example is the same as in Experiment 1.2
  • Visual prompt, using the visual prompting tips described in the paper, i.e. cropping the image and darkening the background
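The crop-and-darken visual prompt can be sketched as follows. This is an illustrative implementation, not the repo's actual code; the `darken` and `margin` parameters are assumptions:

```python
import numpy as np

def make_visual_prompt(image: np.ndarray, mask: np.ndarray,
                       darken: float = 0.3, margin: int = 8) -> np.ndarray:
    """Crop an (H, W, 3) image to the mask's bounding box (plus a margin) and
    dim everything outside the mask, roughly following the visual-prompt
    engineering described in the CLIPSeg paper. `darken` and `margin` are
    hypothetical values, not taken from this repository."""
    ys, xs = np.nonzero(mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, mask.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, mask.shape[1])
    crop = image[y0:y1, x0:x1].astype(np.float32)
    m = mask[y0:y1, x0:x1].astype(np.float32)[..., None]
    # Keep object pixels at full brightness, scale background by `darken`.
    out = crop * (m + darken * (1.0 - m))
    return out.astype(image.dtype)
```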

Testing stage:

  • Input: sample in EndoVis2017 testing set; Text prompt

  • Output Example:

  • On the EndoVis2017 testing set: mean IoU = 81.92% (not much improvement over Experiment 1.2)

Ongoing experiment: fine-tuning CLIP as well as training the CLIPSeg decoder

Owner

  • Name: Meng.Wei
  • Login: weimengmeng1999
  • Kind: user
  • Location: London, UK
  • Company: Imperial College London

Citation (CITATION.cff)

cff-version: "1.2.0"
date-released: 2020-10
message: "If you use this software, please cite it using these metadata."
title: "Transformers: State-of-the-Art Natural Language Processing"
url: "https://github.com/huggingface/transformers"
authors: 
  - family-names: Wolf
    given-names: Thomas
  - family-names: Debut
    given-names: Lysandre
  - family-names: Sanh
    given-names: Victor
  - family-names: Chaumond
    given-names: Julien
  - family-names: Delangue
    given-names: Clement
  - family-names: Moi
    given-names: Anthony
  - family-names: Cistac
    given-names: Perric
  - family-names: Ma
    given-names: Clara
  - family-names: Jernite
    given-names: Yacine
  - family-names: Plu
    given-names: Julien
  - family-names: Xu
    given-names: Canwen
  - family-names: "Le Scao"
    given-names: Teven
  - family-names: Gugger
    given-names: Sylvain
  - family-names: Drame
    given-names: Mariama
  - family-names: Lhoest
    given-names: Quentin
  - family-names: Rush
    given-names: "Alexander M."
preferred-citation:
  type: conference-paper
  authors:
  - family-names: Wolf
    given-names: Thomas
  - family-names: Debut
    given-names: Lysandre
  - family-names: Sanh
    given-names: Victor
  - family-names: Chaumond
    given-names: Julien
  - family-names: Delangue
    given-names: Clement
  - family-names: Moi
    given-names: Anthony
  - family-names: Cistac
    given-names: Perric
  - family-names: Ma
    given-names: Clara
  - family-names: Jernite
    given-names: Yacine
  - family-names: Plu
    given-names: Julien
  - family-names: Xu
    given-names: Canwen
  - family-names: "Le Scao"
    given-names: Teven
  - family-names: Gugger
    given-names: Sylvain
  - family-names: Drame
    given-names: Mariama
  - family-names: Lhoest
    given-names: Quentin
  - family-names: Rush
    given-names: "Alexander M."
  booktitle: "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations"
  month: 10
  start: 38
  end: 45
  title: "Transformers: State-of-the-Art Natural Language Processing"
  year: 2020
  publisher: "Association for Computational Linguistics"
  url: "https://www.aclweb.org/anthology/2020.emnlp-demos.6"
  address: "Online"

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Issues and Pull Requests

Last synced: almost 2 years ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

docker/transformers-all-latest-gpu/Dockerfile docker
  • nvidia/cuda 11.2.2-cudnn8-devel-ubuntu20.04 build
docker/transformers-cpu/Dockerfile docker
  • ubuntu 18.04 build
docker/transformers-doc-builder/Dockerfile docker
  • python 3.8 build
docker/transformers-gpu/Dockerfile docker
  • nvidia/cuda 10.2-cudnn7-devel-ubuntu18.04 build
docker/transformers-past-gpu/Dockerfile docker
  • $BASE_DOCKER_IMAGE latest build
docker/transformers-pytorch-cpu/Dockerfile docker
  • ubuntu 18.04 build
docker/transformers-pytorch-deepspeed-latest-gpu/Dockerfile docker
  • nvcr.io/nvidia/pytorch 21.03-py3 build
docker/transformers-pytorch-deepspeed-nightly-gpu/Dockerfile docker
  • nvcr.io/nvidia/pytorch 21.03-py3 build
docker/transformers-pytorch-gpu/Dockerfile docker
  • nvidia/cuda 11.2.2-cudnn8-devel-ubuntu20.04 build
docker/transformers-pytorch-tpu/Dockerfile docker
  • google/cloud-sdk slim build
docker/transformers-tensorflow-cpu/Dockerfile docker
  • ubuntu 18.04 build
docker/transformers-tensorflow-gpu/Dockerfile docker
  • nvidia/cuda 11.2.2-cudnn8-devel-ubuntu20.04 build
examples/research_projects/quantization-qdqbert/Dockerfile docker
  • nvcr.io/nvidia/pytorch 22.02-py3 build
examples/flax/_tests_requirements.txt pypi
  • conllu * test
  • datasets >=1.1.3 test
  • evaluate >=0.2.0 test
  • nltk * test
  • pytest * test
  • rouge-score * test
  • seqeval * test
  • tensorboard * test
examples/flax/language-modeling/requirements.txt pypi
  • datasets >=1.1.3
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.9
examples/flax/question-answering/requirements.txt pypi
  • datasets >=1.8.0
  • flax >=0.3.5
  • jax >=0.2.17
  • jaxlib >=0.1.68
  • optax >=0.0.8
examples/flax/summarization/requirements.txt pypi
  • datasets >=1.1.3
  • evaluate >=0.2.0
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
examples/flax/text-classification/requirements.txt pypi
  • datasets >=1.1.3
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
examples/flax/token-classification/requirements.txt pypi
  • datasets >=1.8.0
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
  • seqeval *
examples/flax/vision/requirements.txt pypi
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
  • torch ==1.9.0
  • torchvision ==0.10.0
examples/legacy/pytorch-lightning/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • ray *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • seqeval *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
examples/legacy/seq2seq/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • seqeval *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
examples/pytorch/_tests_requirements.txt pypi
  • accelerate main test
  • conllu * test
  • datasets >=1.13.3 test
  • elasticsearch * test
  • evaluate >=0.2.0 test
  • faiss-cpu * test
  • fire * test
  • git-python ==1.0.3 test
  • jiwer * test
  • librosa * test
  • matplotlib * test
  • nltk * test
  • pandas * test
  • protobuf * test
  • psutil * test
  • pytest * test
  • rouge-score * test
  • sacrebleu >=1.4.12 test
  • scikit-learn * test
  • sentencepiece * test
  • seqeval * test
  • streamlit * test
  • tensorboard * test
  • tensorflow_datasets * test
  • torchvision * test
examples/pytorch/audio-classification/requirements.txt pypi
  • datasets >=1.14.0
  • evaluate *
  • librosa *
  • torch >=1.6
  • torchaudio *
examples/pytorch/benchmarking/requirements.txt pypi
  • torch >=1.3
examples/pytorch/contrastive-image-text/requirements.txt pypi
  • datasets >=1.8.0
  • torch >=1.5.0
  • torchvision >=0.6.0
examples/pytorch/image-classification/requirements.txt pypi
  • datasets >=1.17.0
  • evaluate *
  • torch >=1.5.0
  • torchvision >=0.6.0
examples/pytorch/image-pretraining/requirements.txt pypi
  • datasets >=1.8.0
  • torch >=1.5.0
  • torchvision >=0.6.0
examples/pytorch/language-modeling/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • evaluate *
  • protobuf *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/multiple-choice/requirements.txt pypi
  • accelerate *
  • evaluate *
  • protobuf *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/question-answering/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • evaluate *
  • torch >=1.3.0
examples/pytorch/semantic-segmentation/requirements.txt pypi
  • datasets >=2.0.0
  • evaluate *
  • torch >=1.3
examples/pytorch/speech-pretraining/requirements.txt pypi
  • accelerate >=0.5.0
  • datasets >=1.12.0
  • librosa *
  • torch >=1.5
  • torchaudio *
examples/pytorch/speech-recognition/requirements.txt pypi
  • datasets >=1.18.0
  • evaluate *
  • jiwer *
  • librosa *
  • torch >=1.5
  • torchaudio *
examples/pytorch/summarization/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • evaluate *
  • nltk *
  • protobuf *
  • py7zr *
  • rouge-score *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/text-classification/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • evaluate *
  • protobuf *
  • scikit-learn *
  • scipy *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/text-generation/requirements.txt pypi
  • protobuf *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/token-classification/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • evaluate *
  • seqeval *
  • torch >=1.3
examples/pytorch/translation/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • evaluate *
  • protobuf *
  • py7zr *
  • sacrebleu >=1.4.12
  • sentencepiece *
  • torch >=1.3
examples/research_projects/adversarial/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/bert-loses-patience/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/bertabs/requirements.txt pypi
  • nltk *
  • py-rouge *
  • transformers ==3.5.1
examples/research_projects/bertology/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/codeparrot/examples/requirements.txt pypi
  • datasets ==2.3.2
  • evaluate ==0.2.2
  • scikit-learn ==1.1.2
  • transformers ==4.21.1
  • wandb ==0.13.1
examples/research_projects/codeparrot/requirements.txt pypi
  • datasets ==1.16.0
  • datasketch ==1.5.7
  • dpu_utils *
  • huggingface-hub ==0.1.0
  • tensorboard ==2.6.0
  • torch ==1.11.0
  • transformers ==4.19.0
  • wandb ==0.12.0
examples/research_projects/decision_transformer/requirements.txt pypi
  • APScheduler ==3.9.1
  • Brotli ==1.0.9
  • Cython ==0.29.28
  • Deprecated ==1.2.13
  • Flask ==2.0.3
  • Flask-Compress ==1.11
  • GitPython ==3.1.18
  • Jinja2 ==2.11.3
  • Keras-Preprocessing ==1.1.2
  • Mako ==1.2.2
  • Markdown ==3.3.6
  • MarkupSafe ==1.1.1
  • Pillow ==9.0.1
  • Pint ==0.16.1
  • PyYAML ==6.0
  • Pygments ==2.11.2
  • SQLAlchemy ==1.4.32
  • SoundFile ==0.10.3.post1
  • Werkzeug ==2.0.3
  • absl-py ==1.0.0
  • aiohttp ==3.8.1
  • aiosignal ==1.2.0
  • alembic ==1.7.7
  • appdirs ==1.4.4
  • arrow ==1.2.2
  • asttokens ==2.0.5
  • astunparse ==1.6.3
  • async-timeout ==4.0.2
  • attrs ==21.4.0
  • audioread ==2.1.9
  • autopage ==0.5.0
  • backcall ==0.2.0
  • backoff ==1.11.1
  • backports.zoneinfo ==0.2.1
  • binaryornot ==0.4.4
  • black ==22.1.0
  • boto3 ==1.16.34
  • botocore ==1.19.63
  • cachetools ==5.0.0
  • certifi ==2021.10.8
  • cffi ==1.15.0
  • chardet ==4.0.0
  • charset-normalizer ==2.0.12
  • chex ==0.1.1
  • click ==8.0.4
  • cliff ==3.10.1
  • clldutils ==3.11.1
  • cloudpickle ==2.0.0
  • cmaes ==0.8.2
  • cmd2 ==2.4.0
  • codecarbon ==1.2.0
  • colorlog ==6.6.0
  • cookiecutter ==2.1.1
  • cryptography ==36.0.2
  • csvw ==2.0.0
  • cycler ==0.11.0
  • dash ==2.3.0
  • dash-bootstrap-components ==1.0.3
  • dash-core-components ==2.0.0
  • dash-html-components ==2.0.0
  • dash-table ==5.0.0
  • datasets ==2.0.0
  • decorator ==5.1.1
  • dill ==0.3.4
  • dlinfo ==1.2.1
  • dm-tree ==0.1.6
  • docker ==4.4.4
  • execnet ==1.9.0
  • executing ==0.8.3
  • faiss-cpu ==1.7.2
  • fasteners ==0.17.3
  • filelock ==3.6.0
  • fire ==0.4.0
  • flake8 ==4.0.1
  • flatbuffers ==2.0
  • flax ==0.4.0
  • fonttools ==4.31.1
  • frozenlist ==1.3.0
  • fsspec ==2022.2.0
  • fugashi ==1.1.2
  • gast ==0.5.3
  • gitdb ==4.0.9
  • glfw ==2.5.1
  • google-auth ==2.6.2
  • google-auth-oauthlib ==0.4.6
  • google-pasta ==0.2.0
  • greenlet ==1.1.2
  • grpcio ==1.44.0
  • gym ==0.23.1
  • gym-notices ==0.0.6
  • h5py ==3.6.0
  • huggingface-hub ==0.4.0
  • hypothesis ==6.39.4
  • idna ==3.3
  • imageio ==2.16.1
  • importlib-metadata ==4.11.3
  • importlib-resources ==5.4.0
  • iniconfig ==1.1.1
  • ipadic ==1.0.0
  • ipython ==8.1.1
  • isodate ==0.6.1
  • isort ==5.10.1
  • itsdangerous ==2.1.1
  • jax ==0.3.4
  • jaxlib ==0.3.2
  • jedi ==0.18.1
  • jinja2-time ==0.2.0
  • jmespath ==0.10.0
  • joblib ==1.2.0
  • jsonschema ==4.4.0
  • keras ==2.8.0
  • kiwisolver ==1.4.0
  • kubernetes ==12.0.1
  • libclang ==13.0.0
  • librosa ==0.9.1
  • llvmlite ==0.38.0
  • matplotlib ==3.5.1
  • matplotlib-inline ==0.1.3
  • mccabe ==0.6.1
  • msgpack ==1.0.3
  • mujoco-py ==2.1.2.14
  • multidict ==6.0.2
  • multiprocess ==0.70.12.2
  • mypy-extensions ==0.4.3
  • nltk ==3.7
  • numba ==0.55.1
  • numpy ==1.22.3
  • oauthlib ==3.2.1
  • onnx ==1.11.0
  • onnxconverter-common ==1.9.0
  • opt-einsum ==3.3.0
  • optax ==0.1.1
  • optuna ==2.10.0
  • packaging ==21.3
  • pandas ==1.4.1
  • parameterized ==0.8.1
  • parso ==0.8.3
  • pathspec ==0.9.0
  • pbr ==5.8.1
  • pexpect ==4.8.0
  • phonemizer ==3.0.1
  • pickleshare ==0.7.5
  • plac ==1.3.4
  • platformdirs ==2.5.1
  • plotly ==5.6.0
  • pluggy ==1.0.0
  • pooch ==1.6.0
  • portalocker ==2.0.0
  • poyo ==0.5.0
  • prettytable ==3.2.0
  • prompt-toolkit ==3.0.28
  • protobuf ==3.19.5
  • psutil ==5.9.0
  • ptyprocess ==0.7.0
  • pure-eval ==0.2.2
  • py ==1.11.0
  • py-cpuinfo ==8.0.0
  • pyOpenSSL ==22.0.0
  • pyarrow ==7.0.0
  • pyasn1 ==0.4.8
  • pyasn1-modules ==0.2.8
  • pycodestyle ==2.8.0
  • pycparser ==2.21
  • pyctcdecode ==0.3.0
  • pyflakes ==2.4.0
  • pygtrie ==2.4.2
  • pynvml ==11.4.1
  • pyparsing ==3.0.7
  • pyperclip ==1.8.2
  • pypng ==0.0.21
  • pyrsistent ==0.18.1
  • pytest ==7.1.1
  • pytest-forked ==1.4.0
  • pytest-timeout ==2.1.0
  • pytest-xdist ==2.5.0
  • python-dateutil ==2.8.2
  • python-slugify ==6.1.1
  • pytz ==2022.1
  • pytz-deprecation-shim ==0.1.0.post0
  • ray ==1.11.0
  • redis ==4.1.4
  • regex ==2022.3.15
  • requests ==2.27.1
  • requests-oauthlib ==1.3.1
  • resampy ==0.2.2
  • responses ==0.18.0
  • rfc3986 ==1.5.0
  • rouge-score ==0.0.4
  • rsa ==4.8
  • s3transfer ==0.3.7
  • sacrebleu ==1.5.1
  • sacremoses ==0.0.49
  • scikit-learn ==1.0.2
  • scipy ==1.8.0
  • segments ==2.2.0
  • sentencepiece ==0.1.96
  • sigopt ==8.2.0
  • six ==1.16.0
  • smmap ==5.0.0
  • sortedcontainers ==2.4.0
  • stack-data ==0.2.0
  • stevedore ==3.5.0
  • tabulate ==0.8.9
  • tenacity ==8.0.1
  • tensorboard ==2.8.0
  • tensorboard-data-server ==0.6.1
  • tensorboard-plugin-wit ==1.8.1
  • tensorboardX ==2.5
  • tensorflow ==2.8.1
  • tensorflow-io-gcs-filesystem ==0.24.0
  • termcolor ==1.1.0
  • text-unidecode ==1.3
  • tf-estimator-nightly ==2.8.0.dev2021122109
  • tf2onnx ==1.9.3
  • threadpoolctl ==3.1.0
  • timeout-decorator ==0.5.0
  • timm ==0.5.4
  • tokenizers ==0.11.6
  • tomli ==2.0.1
  • toolz ==0.11.2
  • torch ==1.11.0
  • torchaudio ==0.11.0
  • torchvision ==0.12.0
  • tqdm ==4.63.0
  • traitlets ==5.1.1
  • typing-extensions ==4.1.1
  • tzdata ==2022.1
  • tzlocal ==4.1
  • unidic ==1.1.0
  • unidic-lite ==1.0.8
  • uritemplate ==4.1.1
  • urllib3 ==1.26.9
  • wasabi ==0.9.0
  • wcwidth ==0.2.5
  • websocket-client ==1.3.1
  • wrapt ==1.14.0
  • xxhash ==3.0.0
  • yarl ==1.7.2
  • zipp ==3.7.0
examples/research_projects/deebert/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/distillation/requirements.txt pypi
  • gitpython ==3.0.2
  • psutil ==5.6.6
  • scipy >=1.4.1
  • tensorboard >=1.14.0
  • tensorboardX ==1.8
  • transformers *
examples/research_projects/fsner/pyproject.toml pypi
examples/research_projects/fsner/requirements.txt pypi
  • transformers >=4.9.2
examples/research_projects/fsner/setup.py pypi
  • torch >=1.9.0
examples/research_projects/information-gain-filtration/requirements.txt pypi
  • joblib >=0.13.2
  • matplotlib *
  • numpy >=1.17.2
  • scipy *
  • torch >=1.10.1
  • transformers >=3.5
examples/research_projects/jax-projects/big_bird/requirements.txt pypi
  • datasets *
  • flax *
  • jsonlines *
  • sentencepiece *
  • wandb *
examples/research_projects/jax-projects/hybrid_clip/requirements.txt pypi
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
  • torch ==1.9.0
  • torchvision ==0.10.0
examples/research_projects/layoutlmv3/requirements.txt pypi
  • datasets *
  • pillow *
  • seqeval *
examples/research_projects/longform-qa/requirements.txt pypi
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • streamlit *
examples/research_projects/lxmert/requirements.txt pypi
  • CacheControl ==0.12.6
  • Jinja2 >=2.11.3
  • MarkupSafe ==1.1.1
  • Pillow >=8.1.1
  • PyYAML >=5.4
  • Pygments >=2.7.4
  • QtPy ==1.9.0
  • Send2Trash ==1.5.0
  • appdirs ==1.4.3
  • argon2-cffi ==20.1.0
  • async-generator ==1.10
  • attrs ==20.2.0
  • backcall ==0.2.0
  • certifi ==2020.6.20
  • cffi ==1.14.2
  • chardet ==3.0.4
  • click ==7.1.2
  • colorama ==0.4.3
  • contextlib2 ==0.6.0
  • cycler ==0.10.0
  • datasets ==1.0.0
  • decorator ==4.4.2
  • defusedxml ==0.6.0
  • dill ==0.3.2
  • distlib ==0.3.0
  • distro ==1.4.0
  • entrypoints ==0.3
  • filelock ==3.0.12
  • future ==0.18.2
  • html5lib ==1.0.1
  • idna ==2.8
  • ipaddr ==2.2.0
  • ipykernel ==5.3.4
  • ipython *
  • ipython-genutils ==0.2.0
  • ipywidgets ==7.5.1
  • jedi ==0.17.2
  • joblib ==1.2.0
  • jsonschema ==3.2.0
  • jupyter ==1.0.0
  • jupyter-client ==6.1.7
  • jupyter-console ==6.2.0
  • jupyter-core ==4.6.3
  • jupyterlab-pygments ==0.1.1
  • kiwisolver ==1.2.0
  • lockfile ==0.12.2
  • matplotlib ==3.3.1
  • mistune ==2.0.3
  • msgpack ==0.6.2
  • nbclient ==0.5.0
  • nbconvert ==6.5.1
  • nbformat ==5.0.7
  • nest-asyncio ==1.4.0
  • notebook ==6.4.12
  • numpy ==1.22.0
  • opencv-python ==4.4.0.42
  • packaging ==20.3
  • pandas ==1.1.2
  • pandocfilters ==1.4.2
  • parso ==0.7.1
  • pep517 ==0.8.2
  • pexpect ==4.8.0
  • pickleshare ==0.7.5
  • progress ==1.5
  • prometheus-client ==0.8.0
  • prompt-toolkit ==3.0.7
  • ptyprocess ==0.6.0
  • pyaml ==20.4.0
  • pyarrow ==1.0.1
  • pycparser ==2.20
  • pyparsing ==2.4.6
  • pyrsistent ==0.16.0
  • python-dateutil ==2.8.1
  • pytoml ==0.1.21
  • pytz ==2020.1
  • pyzmq ==19.0.2
  • qtconsole ==4.7.7
  • regex ==2020.7.14
  • requests ==2.22.0
  • retrying ==1.3.3
  • sacremoses ==0.0.43
  • sentencepiece ==0.1.91
  • six ==1.14.0
  • terminado ==0.8.3
  • testpath ==0.4.4
  • tokenizers ==0.8.1rc2
  • torch ==1.6.0
  • torchvision ==0.7.0
  • tornado ==6.0.4
  • tqdm ==4.48.2
  • traitlets *
  • urllib3 ==1.26.5
  • wcwidth ==0.2.5
  • webencodings ==0.5.1
  • wget ==3.2
  • widgetsnbextension ==3.5.1
  • xxhash ==2.0.0
examples/research_projects/mlm_wwm/requirements.txt pypi
  • datasets >=1.1.3
  • ltp *
  • protobuf *
  • sentencepiece *
examples/research_projects/movement-pruning/requirements.txt pypi
  • h5py >=2.10.0
  • knockknock >=0.1.8.1
  • numpy >=1.18.2
  • scipy >=1.4.1
  • torch >=1.4.0
examples/research_projects/onnx/summarization/requirements.txt pypi
  • torch >=1.10
examples/research_projects/pplm/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • pytorch-lightning *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • seqeval *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
  • transformers ==3.5.1
examples/research_projects/rag/requirements.txt pypi
  • GitPython *
  • datasets >=1.0.1
  • faiss-cpu >=1.6.3
  • psutil >=5.7.0
  • pytorch-lightning >=1.5.10
  • ray >=1.10.0
  • torch >=1.4.0
  • transformers *
examples/research_projects/rag-end2end-retriever/requirements.txt pypi
  • datasets *
  • faiss-cpu >=1.7.2
  • nvidia-ml-py3 ==7.352.0
  • psutil >=5.9.1
  • pytorch-lightning ==1.6.4
  • ray >=1.13.0
  • torch >=1.11.0
examples/research_projects/self-training-text-classification/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • protobuf *
  • scikit-learn *
  • scipy *
  • sentencepiece *
  • torch >=1.3
examples/research_projects/seq2seq-distillation/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • pytorch-lightning *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
examples/research_projects/tapex/requirements.txt pypi
  • datasets *
  • nltk *
  • numpy *
  • pandas *
examples/research_projects/visual_bert/requirements.txt pypi
  • CacheControl ==0.12.6
  • Jinja2 >=2.11.3
  • MarkupSafe ==1.1.1
  • Pillow >=8.1.1
  • PyYAML >=5.4
  • Pygments >=2.7.4
  • QtPy ==1.9.0
  • Send2Trash ==1.5.0
  • appdirs ==1.4.3
  • argon2-cffi ==20.1.0
  • async-generator ==1.10
  • attrs ==20.2.0
  • backcall ==0.2.0
  • certifi ==2020.6.20
  • cffi ==1.14.2
  • chardet ==3.0.4
  • click ==7.1.2
  • colorama ==0.4.3
  • contextlib2 ==0.6.0
  • cycler ==0.10.0
  • datasets ==1.0.0
  • decorator ==4.4.2
  • defusedxml ==0.6.0
  • dill ==0.3.2
  • distlib ==0.3.0
  • distro ==1.4.0
  • entrypoints ==0.3
  • filelock ==3.0.12
  • future ==0.18.2
  • html5lib ==1.0.1
  • idna ==2.8
  • ipaddr ==2.2.0
  • ipykernel ==5.3.4
  • ipython *
  • ipython-genutils ==0.2.0
  • ipywidgets ==7.5.1
  • jedi ==0.17.2
  • joblib ==1.2.0
  • jsonschema ==3.2.0
  • jupyter ==1.0.0
  • jupyter-client ==6.1.7
  • jupyter-console ==6.2.0
  • jupyter-core ==4.6.3
  • jupyterlab-pygments ==0.1.1
  • kiwisolver ==1.2.0
  • lockfile ==0.12.2
  • matplotlib ==3.3.1
  • mistune ==2.0.3
  • msgpack ==0.6.2
  • nbclient ==0.5.0
  • nbconvert ==6.5.1
  • nbformat ==5.0.7
  • nest-asyncio ==1.4.0
  • notebook ==6.4.12
  • numpy ==1.22.0
  • opencv-python ==4.4.0.42
  • packaging ==20.3
  • pandas ==1.1.2
  • pandocfilters ==1.4.2
  • parso ==0.7.1
  • pep517 ==0.8.2
  • pexpect ==4.8.0
  • pickleshare ==0.7.5
  • progress ==1.5
  • prometheus-client ==0.8.0
  • prompt-toolkit ==3.0.7
  • ptyprocess ==0.6.0
  • pyaml ==20.4.0
  • pyarrow ==1.0.1
  • pycparser ==2.20
  • pyparsing ==2.4.6
  • pyrsistent ==0.16.0
  • python-dateutil ==2.8.1
  • pytoml ==0.1.21
  • pytz ==2020.1
  • pyzmq ==19.0.2
  • qtconsole ==4.7.7
  • regex ==2020.7.14
  • requests ==2.22.0
  • retrying ==1.3.3
  • sacremoses ==0.0.43
  • sentencepiece ==0.1.91
  • six ==1.14.0
  • terminado ==0.8.3
  • testpath ==0.4.4
  • tokenizers ==0.8.1rc2
  • torch ==1.6.0
  • torchvision ==0.7.0
  • tornado ==6.0.4
  • tqdm ==4.48.2
  • traitlets *
  • urllib3 ==1.26.5
  • wcwidth ==0.2.5
  • webencodings ==0.5.1
  • wget ==3.2
  • widgetsnbextension ==3.5.1
  • xxhash ==2.0.0
examples/research_projects/wav2vec2/requirements.txt pypi
  • datasets *
  • jiwer ==2.2.0
  • lang-trans ==0.6.0
  • librosa ==0.8.0
  • torch >=1.5.0
  • torchaudio *
  • transformers *
examples/research_projects/xtreme-s/requirements.txt pypi
  • datasets >=1.18.0
  • jiwer *
  • librosa *
  • torch >=1.5
  • torchaudio *
examples/tensorflow/_tests_requirements.txt pypi
  • accelerate main test
  • conllu * test
  • datasets >=1.13.3 test
  • elasticsearch * test
  • evaluate >=0.2.0 test
  • faiss-cpu * test
  • fire * test
  • git-python ==1.0.3 test
  • jiwer * test
  • librosa * test
  • matplotlib * test
  • nltk * test
  • pandas * test
  • protobuf * test
  • psutil * test
  • pytest * test
  • rouge-score * test
  • sacrebleu >=1.4.12 test
  • scikit-learn * test
  • sentencepiece * test
  • seqeval * test
  • streamlit * test
  • tensorboard * test
  • tensorflow * test
  • tensorflow_datasets * test
examples/tensorflow/benchmarking/requirements.txt pypi
  • tensorflow >=2.3
examples/tensorflow/language-modeling/requirements.txt pypi
  • datasets >=1.8.0
  • sentencepiece *
examples/tensorflow/multiple-choice/requirements.txt pypi
  • protobuf *
  • sentencepiece *
  • tensorflow >=2.3
examples/tensorflow/question-answering/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
examples/tensorflow/summarization/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
examples/tensorflow/text-classification/requirements.txt pypi
  • datasets >=1.1.3
  • evaluate >=0.2.0
  • protobuf *
  • sentencepiece *
  • tensorflow >=2.3
examples/tensorflow/token-classification/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
examples/tensorflow/translation/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
pyproject.toml pypi
setup.py pypi
  • deps *
tests/sagemaker/scripts/pytorch/requirements.txt pypi
  • datasets ==1.8.0 test
tests/sagemaker/scripts/tensorflow/requirements.txt pypi