transformers

🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.

https://github.com/huggingface/transformers

Science Score: 64.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: zenodo.org
  • Committers with academic emails
    115 of 3065 committers (3.8%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.8%) to scientific vocabulary

Keywords

audio deep-learning deepseek gemma glm hacktoberfest llm machine-learning model-hub natural-language-processing nlp pretrained-models python pytorch pytorch-transformers qwen speech-recognition transformer vlm

Keywords from Contributors

cryptocurrency jax cryptography speech dataset-hub agents transformers gemini langchain language-model
Last synced: 4 months ago

Repository

🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.

Basic Info
Statistics
  • Stars: 149,048
  • Watchers: 1,165
  • Forks: 30,228
  • Open Issues: 1,960
  • Releases: 225
Topics
audio deep-learning deepseek gemma glm hacktoberfest llm machine-learning model-hub natural-language-processing nlp pretrained-models python pytorch pytorch-transformers qwen speech-recognition transformer vlm
Created about 7 years ago · Last pushed 4 months ago
Metadata Files
Readme Contributing License Code of conduct Citation Security Agents

README.md

Hugging Face Transformers Library


English | 简体中文 | 繁體中文 | 한국어 | Español | 日本語 | हिन्दी | Русский | Português | తెలుగు | Français | Deutsch | Tiếng Việt | العربية | اردو |

State-of-the-art pretrained models for inference and training

Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal domains, for both inference and training.

It centralizes the model definition so that this definition is agreed upon across the ecosystem. transformers is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, ...), inference engines (vLLM, SGLang, TGI, ...), and adjacent modeling libraries (llama.cpp, mlx, ...) which leverage the model definition from transformers.

We pledge to help support new state-of-the-art models and democratize their usage by having their model definition be simple, customizable, and efficient.

There are over 1M Transformers model checkpoints on the Hugging Face Hub you can use.

Explore the Hub today to find a model and use Transformers to help you get started right away.

Installation

Transformers works with Python 3.9+, PyTorch 2.1+, TensorFlow 2.6+, and Flax 0.4.1+.

Create and activate a virtual environment with venv or uv, a fast Rust-based Python package and project manager.

```shell
# venv
python -m venv .my-env
source .my-env/bin/activate

# uv
uv venv .my-env
source .my-env/bin/activate
```

Install Transformers in your virtual environment.

```shell
# pip
pip install "transformers[torch]"

# uv
uv pip install "transformers[torch]"
```

Install Transformers from source if you want the latest changes in the library or are interested in contributing. However, the latest version may not be stable. Feel free to open an issue if you encounter an error.

```shell
git clone https://github.com/huggingface/transformers.git
cd transformers

# pip
pip install .[torch]

# uv
uv pip install .[torch]
```

Quickstart

Get started with Transformers right away with the Pipeline API. The Pipeline is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output.

Instantiate a pipeline and specify the model to use for text generation. The model is downloaded and cached so you can easily reuse it. Finally, pass some text to prompt the model.

```py
from transformers import pipeline

pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
pipeline("the secret to baking a really good cake is ")
[{'generated_text': 'the secret to baking a really good cake is 1) to use the right ingredients and 2) to follow the recipe exactly. the recipe for the cake is as follows: 1 cup of sugar, 1 cup of flour, 1 cup of milk, 1 cup of butter, 1 cup of eggs, 1 cup of chocolate chips. if you want to make 2 cakes, how much sugar do you need? To make 2 cakes, you will need 2 cups of sugar.'}]
```

To chat with a model, the usage pattern is the same. The only difference is you need to construct a chat history (the input to Pipeline) between you and the system.

[!TIP] You can also chat with a model directly from the command line:

```shell
transformers chat Qwen/Qwen2.5-0.5B-Instruct
```

```py
import torch
from transformers import pipeline

chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

pipeline = pipeline(task="text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", dtype=torch.bfloat16, device_map="auto")
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```

Expand the examples below to see how Pipeline works for different modalities and tasks.

Automatic speech recognition

```py
from transformers import pipeline

pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
Image classification

```py
from transformers import pipeline

pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer")
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'label': 'macaw', 'score': 0.997848391532898},
 {'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita', 'score': 0.0016551691805943847},
 {'label': 'lorikeet', 'score': 0.00018523589824326336},
 {'label': 'African grey, African gray, Psittacus erithacus', 'score': 7.85409429227002e-05},
 {'label': 'quail', 'score': 5.502637941390276e-05}]
```
Visual question answering

```py
from transformers import pipeline

pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base")
pipeline(
    image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg",
    question="What is in the image?",
)
[{'answer': 'statue of liberty'}]
```

Why should I use Transformers?

  1. Easy-to-use state-of-the-art models:

    • High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks.
    • Low barrier to entry for researchers, engineers, and developers.
    • Few user-facing abstractions with just three classes to learn.
    • A unified API for using all our pretrained models (see the sketch after this list).
  2. Lower compute costs, smaller carbon footprint:

    • Share trained models instead of training from scratch.
    • Reduce compute time and production costs.
    • Dozens of model architectures with 1M+ pretrained checkpoints across all modalities.
  3. Choose the right framework for every part of a model's lifetime:

    • Train state-of-the-art models in 3 lines of code.
    • Move a single model between PyTorch/JAX/TF2.0 frameworks at will.
    • Pick the right framework for training, evaluation, and production.
  4. Easily customize a model or an example to your needs:

    • We provide examples for each architecture to reproduce the results published by its original authors.
    • Model internals are exposed as consistently as possible.
    • Model files can be used independently of the library for quick experiments.
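
The unified API mentioned in the list above centers on three kinds of classes: a configuration, a preprocessing class (tokenizer or processor), and a model, each loadable through an Auto* class. Below is a minimal sketch of that pattern; the sentiment checkpoint and sample sentence are illustrative choices, not something prescribed by this README.

```py
# Minimal sketch of the unified Auto* API (illustrative checkpoint and input).
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # example public model on the Hub

config = AutoConfig.from_pretrained(checkpoint)                          # configuration class
tokenizer = AutoTokenizer.from_pretrained(checkpoint)                    # preprocessing class
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)   # model class

inputs = tokenizer("Transformers keeps the model definition in one place.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(config.id2label[logits.argmax(-1).item()])  # e.g. 'POSITIVE' or 'NEGATIVE'
```

The same from_pretrained calls work for any supported architecture, which is what the unified API means in practice.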


Why shouldn't I use Transformers?

  • This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
  • The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like Accelerate (a minimal sketch follows this list).
  • The example scripts are only examples. They may not work out of the box on your specific use case, and you'll need to adapt the code to make it work.
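
To make the Accelerate recommendation above concrete, here is a minimal sketch of a generic training loop written with Accelerate rather than the Transformers Trainer. The toy model, random data, and hyperparameters are illustrative assumptions, not anything from this repository.

```py
# Sketch: a generic PyTorch training loop with Accelerate (toy model and data).
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# prepare() handles device placement and distributed wrapping.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```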

100 projects using Transformers

Transformers is more than a toolkit for using pretrained models; it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.

To celebrate Transformers reaching 100,000 stars, we put the spotlight on the community with the awesome-transformers page, which lists 100 incredible projects built with Transformers.

If you own or use a project that you believe should be part of the list, please open a PR to add it!

Example models

You can test most of our models directly on their Hub model pages.

Expand each modality below to see a few example models for various use cases; a minimal usage sketch follows the lists.

Audio
- Audio classification with [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo)
- Automatic speech recognition with [Moonshine](https://huggingface.co/UsefulSensors/moonshine)
- Keyword spotting with [Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- Speech to speech generation with [Moshi](https://huggingface.co/kyutai/moshiko-pytorch-bf16)
- Text to audio with [MusicGen](https://huggingface.co/facebook/musicgen-large)
- Text to speech with [Bark](https://huggingface.co/suno/bark)

Computer vision
- Automatic mask generation with [SAM](https://huggingface.co/facebook/sam-vit-base)
- Depth estimation with [DepthPro](https://huggingface.co/apple/DepthPro-hf)
- Image classification with [DINO v2](https://huggingface.co/facebook/dinov2-base)
- Keypoint detection with [SuperPoint](https://huggingface.co/magic-leap-community/superpoint)
- Keypoint matching with [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor)
- Object detection with [RT-DETRv2](https://huggingface.co/PekingU/rtdetr_v2_r50vd)
- Pose estimation with [VitPose](https://huggingface.co/usyd-community/vitpose-base-simple)
- Universal segmentation with [OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_swin_large)
- Video classification with [VideoMAE](https://huggingface.co/MCG-NJU/videomae-large)

Multimodal
- Audio or text to text with [Qwen2-Audio](https://huggingface.co/Qwen/Qwen2-Audio-7B)
- Document question answering with [LayoutLMv3](https://huggingface.co/microsoft/layoutlmv3-base)
- Image or text to text with [Qwen-VL](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)
- Image captioning with [BLIP-2](https://huggingface.co/Salesforce/blip2-opt-2.7b)
- OCR-based document understanding with [GOT-OCR2](https://huggingface.co/stepfun-ai/GOT-OCR-2.0-hf)
- Table question answering with [TAPAS](https://huggingface.co/google/tapas-base)
- Unified multimodal understanding and generation with [Emu3](https://huggingface.co/BAAI/Emu3-Gen)
- Vision to text with [Llava-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf)
- Visual question answering with [Llava](https://huggingface.co/llava-hf/llava-1.5-7b-hf)
- Visual referring expression segmentation with [Kosmos-2](https://huggingface.co/microsoft/kosmos-2-patch14-224)

NLP
- Masked word completion with [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base)
- Named entity recognition with [Gemma](https://huggingface.co/google/gemma-2-2b)
- Question answering with [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
- Summarization with [BART](https://huggingface.co/facebook/bart-large-cnn)
- Translation with [T5](https://huggingface.co/google-t5/t5-base)
- Text generation with [Llama](https://huggingface.co/meta-llama/Llama-3.2-1B)
- Text classification with [Qwen](https://huggingface.co/Qwen/Qwen2.5-0.5B)
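
As a quick way to try one of the models above locally, here is a minimal sketch using the masked word completion example from the NLP list; the sample sentence is illustrative.

```py
# Sketch: masked word completion with ModernBERT via the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline(task="fill-mask", model="answerdotai/ModernBERT-base")
predictions = fill_mask("The capital of France is [MASK].")
print(predictions[0]["token_str"])  # top prediction, e.g. ' Paris'
```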

Citation

We now have a paper you can cite for the 🤗 Transformers library:

```bibtex
@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}
```

Owner

  • Name: Hugging Face
  • Login: huggingface
  • Kind: organization
  • Location: NYC + Paris

The AI community building the future.

Citation (CITATION.cff)

cff-version: "1.2.0"
date-released: 2020-10
message: "If you use this software, please cite it using these metadata."
title: "Transformers: State-of-the-Art Natural Language Processing"
url: "https://github.com/huggingface/transformers"
authors: 
  - family-names: Wolf
    given-names: Thomas
  - family-names: Debut
    given-names: Lysandre
  - family-names: Sanh
    given-names: Victor
  - family-names: Chaumond
    given-names: Julien
  - family-names: Delangue
    given-names: Clement
  - family-names: Moi
    given-names: Anthony
  - family-names: Cistac
    given-names: Perric
  - family-names: Ma
    given-names: Clara
  - family-names: Jernite
    given-names: Yacine
  - family-names: Plu
    given-names: Julien
  - family-names: Xu
    given-names: Canwen
  - family-names: "Le Scao"
    given-names: Teven
  - family-names: Gugger
    given-names: Sylvain
  - family-names: Drame
    given-names: Mariama
  - family-names: Lhoest
    given-names: Quentin
  - family-names: Rush
    given-names: "Alexander M."
preferred-citation:
  type: conference-paper
  authors:
  - family-names: Wolf
    given-names: Thomas
  - family-names: Debut
    given-names: Lysandre
  - family-names: Sanh
    given-names: Victor
  - family-names: Chaumond
    given-names: Julien
  - family-names: Delangue
    given-names: Clement
  - family-names: Moi
    given-names: Anthony
  - family-names: Cistac
    given-names: Perric
  - family-names: Ma
    given-names: Clara
  - family-names: Jernite
    given-names: Yacine
  - family-names: Plu
    given-names: Julien
  - family-names: Xu
    given-names: Canwen
  - family-names: "Le Scao"
    given-names: Teven
  - family-names: Gugger
    given-names: Sylvain
  - family-names: Drame
    given-names: Mariama
  - family-names: Lhoest
    given-names: Quentin
  - family-names: Rush
    given-names: "Alexander M."
  booktitle: "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations"
  month: 10
  start: 38
  end: 45
  title: "Transformers: State-of-the-Art Natural Language Processing"
  year: 2020
  publisher: "Association for Computational Linguistics"
  url: "https://www.aclweb.org/anthology/2020.emnlp-demos.6"
  address: "Online"

Committers

Last synced: 8 months ago

All Time
  • Total Commits: 18,330
  • Total Committers: 3,065
  • Avg Commits per committer: 5.98
  • Development Distribution Score (DDS): 0.932
Past Year
  • Commits: 3,049
  • Committers: 775
  • Avg Commits per committer: 3.934
  • Development Distribution Score (DDS): 0.938
Top Committers
Name Email Commits
Sylvain Gugger 3****r 1,240
Yih-Dar 2****h 1,067
Lysandre l****t@r****r 1,034
thomwolf t****f@g****m 946
Patrick von Platen p****n@g****m 786
Stas Bekman s****0 515
Joao Gante j****e@g****m 500
Julien Chaumond c****d@g****m 395
Arthur 4****r 374
Matt R****1 353
Younes Belkada 4****a 315
Sam Shleifer s****r@g****m 280
NielsRogge 4****e 274
amyeroberts 2****s 235
Nicolas Patry p****s@p****m 232
Suraj Patil s****5@g****m 201
Raushan Turganbay r****n@h****o 199
VictorSanh v****h@g****m 194
Manuel Romero m****8@g****m 149
Sanchit Gandhi 9****i 149
dependabot[bot] 4****] 147
Zach Mueller m****r@g****m 145
Morgan Funtowicz m****n@h****o 133
Steven Liu 5****u 124
Julien Plu p****n@g****m 113
Marc Sun 5****c 106
Aymeric Augustin a****n@f****m 95
Stefan Schweter s****n@s****t 83
Cyril Vallez c****z@h****o 82
Rémi Louf r****f@g****m 81
and 3,035 more...

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 6,113
  • Total pull requests: 13,440
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 17 days
  • Total issue authors: 4,309
  • Total pull request authors: 2,410
  • Average comments per issue: 4.29
  • Average comments per pull request: 2.98
  • Merged pull requests: 6,693
  • Bot issues: 0
  • Bot pull requests: 135
Past Year
  • Issues: 2,165
  • Pull requests: 6,716
  • Average time to close issues: 22 days
  • Average time to close pull requests: 9 days
  • Issue authors: 1,640
  • Pull request authors: 1,276
  • Average comments per issue: 2.21
  • Average comments per pull request: 2.38
  • Merged pull requests: 3,123
  • Bot issues: 0
  • Bot pull requests: 27
Top Authors
Issue Authors
  • guangy10 (27)
  • NielsRogge (24)
  • xenova (24)
  • andysingal (22)
  • ydshieh (20)
  • lucasjinreal (20)
  • rajveer43 (20)
  • jiqing-feng (19)
  • dvrogozh (18)
  • amyeroberts (18)
  • ArthurZucker (17)
  • gante (16)
  • zucchini-nlp (15)
  • RonanKMcGovern (15)
  • stas00 (14)
Pull Request Authors
  • ydshieh (732)
  • gante (575)
  • zucchini-nlp (483)
  • ArthurZucker (436)
  • AhmedAlmaghz (381)
  • Rocketknight1 (304)
  • younesbelkada (225)
  • Cyrilvallez (220)
  • amyeroberts (218)
  • SunMarc (191)
  • NielsRogge (158)
  • faaany (156)
  • yonigozlan (154)
  • qubvel (146)
  • dependabot[bot] (135)
Top Labels
Issue Labels
bug (1,708) Feature request (591) New model (228) WIP (115) Vision (108) trainer (86) Audio (80) Generation (80) Good First Issue (76) Good Second Issue (58) Core: Tokenization (47) wontfix (40) Usage (32) Cache (31) Multimodal (29) DeepSpeed (27) Good Difficult Issue (26) solved (26) Core: Pipeline (25) Quantization (24) PyTorch FSDP (22) Core: Modeling (21) Documentation (20) Should Fix (19) Accelerate (17) Examples (17) PEFT (16) Compilation (12) TensorFlow (11) ExecuTorch (10)
Pull Request Labels
run-slow (311) New model (155) Vision (141) dependencies (133) python (105) WIP (62) for patch (52) Agents (52) Audio (49) Processing (37) Multimodal (31) bug (28) Quantization (21) ExecuTorch (17) Feature request (15) trainer (14) DeepSpeed (13) SDPA (13) Flax (11) Generation (11) run-benchmark (11) single-model-run-slow (10) Tests (10) Documentation (10) optimization (10) Core: Modeling (9) TensorFlow (7) Cache (7) Compilation (7) Good Second Issue (7)

Packages

  • Total packages: 16
  • Total downloads:
    • pypi 95,266,422 last-month
  • Total docker downloads: 44,558,964
  • Total dependent packages: 2,619
    (may contain duplicates)
  • Total dependent repositories: 32,003
    (may contain duplicates)
  • Total versions: 306
  • Total maintainers: 14
  • Total advisories: 16
pypi.org: transformers

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 196
  • Dependent Packages: 2,589
  • Dependent Repositories: 31,800
  • Downloads: 95,260,167 Last month
  • Docker Downloads: 44,558,964
Rankings
Stargazers count: 0.0%
Dependent packages count: 0.0%
Forks count: 0.0%
Dependent repos count: 0.0%
Downloads: 0.1%
Average: 0.1%
Docker downloads count: 0.6%
Last synced: 4 months ago
conda-forge.org: transformers
  • Versions: 68
  • Dependent Packages: 24
  • Dependent Repositories: 101
Rankings
Stargazers count: 0.1%
Forks count: 0.1%
Average: 1.6%
Dependent packages count: 2.8%
Dependent repos count: 3.4%
Last synced: 4 months ago
pypi.org: in-transformers

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 18 Last month
Rankings
Stargazers count: 0.0%
Forks count: 0.0%
Dependent packages count: 6.6%
Average: 10.8%
Downloads: 16.5%
Dependent repos count: 30.6%
Maintainers (1)
Last synced: about 1 year ago
spack.io: py-transformers

State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch

  • Versions: 7
  • Dependent Packages: 1
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Forks count: 0.0%
Stargazers count: 0.0%
Average: 14.3%
Dependent packages count: 57.3%
Maintainers (1)
Last synced: 11 months ago
anaconda.org: transformers

Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. These models can be applied on: - 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages. - 🖼️ Images, for tasks like image classification, object detection, and segmentation. - 🗣️ Audio, for tasks like speech recognition and audio classification. Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.

  • Versions: 15
  • Dependent Packages: 4
  • Dependent Repositories: 101
Rankings
Stargazers count: 0.2%
Forks count: 0.5%
Average: 15.1%
Dependent repos count: 18.6%
Dependent packages count: 41.0%
Last synced: 4 months ago
pypi.org: t-draft-123

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 23 Last month
Rankings
Stargazers count: 0.0%
Forks count: 0.0%
Dependent packages count: 9.8%
Average: 16.2%
Dependent repos count: 55.1%
Maintainers (1)
Last synced: 4 months ago
pypi.org: transformers-phobert

State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. Note that the tokenizer was changed by PhoBert in this version.

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 14 Last month
Rankings
Stargazers count: 0.0%
Forks count: 0.0%
Dependent packages count: 7.4%
Average: 16.3%
Dependent repos count: 22.2%
Downloads: 51.7%
Maintainers (1)
Last synced: 4 months ago
pypi.org: transformers-machinify

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 1
  • Dependent Packages: 1
  • Dependent Repositories: 0
  • Downloads: 20 Last month
Rankings
Dependent packages count: 3.1%
Average: 16.8%
Dependent repos count: 30.5%
Maintainers (1)
Last synced: 4 months ago
pypi.org: transformers-v4.55.0-glm-4.5v-preview

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 6,068 Last month
Rankings
Dependent packages count: 8.7%
Average: 28.9%
Dependent repos count: 49.0%
Maintainers (1)
Last synced: 4 months ago
pypi.org: transformers-qwenomni

Transformers: With code for Qwen 2.5 Omni

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 9 Last month
Rankings
Dependent packages count: 9.1%
Average: 30.2%
Dependent repos count: 51.3%
Maintainers (1)
Last synced: 4 months ago
pypi.org: transformers-mcw

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 23 Last month
Rankings
Dependent packages count: 9.2%
Average: 30.5%
Dependent repos count: 51.7%
Maintainers (1)
Last synced: 4 months ago
pypi.org: xmersawitransformers

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 12 Last month
Rankings
Dependent packages count: 9.2%
Average: 30.5%
Dependent repos count: 51.7%
Maintainers (1)
Last synced: 4 months ago
pypi.org: mcwtransformers

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 19 Last month
Rankings
Dependent packages count: 9.2%
Average: 30.5%
Dependent repos count: 51.7%
Maintainers (1)
Last synced: 4 months ago
pypi.org: mcwtimesformer

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 9.2%
Average: 30.5%
Dependent repos count: 51.7%
Maintainers (1)
Last synced: 5 months ago
pypi.org: my-transformers

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 32 Last month
Rankings
Dependent packages count: 9.6%
Average: 31.7%
Dependent repos count: 53.9%
Maintainers (1)
Last synced: 4 months ago
pypi.org: divyanx-transformers

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 17 Last month
Rankings
Dependent packages count: 10.5%
Average: 34.7%
Dependent repos count: 58.8%
Maintainers (1)
Last synced: 4 months ago

Dependencies

.github/workflows/add-model-like.yml actions
  • actions/cache v2 composite
  • actions/checkout v3 composite
  • actions/upload-artifact v3 composite
.github/workflows/build-docker-images.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v5 composite
  • docker/login-action v3 composite
  • docker/setup-buildx-action v3 composite
.github/workflows/build-nightly-ci-docker-images.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v3 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
.github/workflows/build-past-ci-docker-images.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v3 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
.github/workflows/build_documentation.yml actions
.github/workflows/build_pr_documentation.yml actions
.github/workflows/check_tiny_models.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • actions/upload-artifact v3 composite
.github/workflows/doctests.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
.github/workflows/model-templates.yml actions
  • actions/cache v2 composite
  • actions/checkout v3 composite
  • actions/upload-artifact v3 composite
.github/workflows/release-conda.yml actions
  • actions/checkout v1 composite
  • conda-incubator/setup-miniconda v2 composite
.github/workflows/self-nightly-past-ci-caller.yml actions
.github/workflows/self-nightly-scheduled.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
  • geekyeggo/delete-artifact v2 composite
.github/workflows/self-past.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
  • geekyeggo/delete-artifact v2 composite
.github/workflows/self-push-amd.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
.github/workflows/self-push-caller.yml actions
  • actions/checkout v3 composite
  • tj-actions/changed-files v22.2 composite
.github/workflows/self-push.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
.github/workflows/self-scheduled.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
.github/workflows/stale.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/update_metdata.yml actions
  • actions/checkout v3 composite
.github/workflows/upload_pr_documentation.yml actions
docker/transformers-all-latest-gpu/Dockerfile docker
  • nvidia/cuda 11.8.0-cudnn8-devel-ubuntu20.04 build
docker/transformers-doc-builder/Dockerfile docker
  • python 3.8 build
docker/transformers-gpu/Dockerfile docker
  • nvidia/cuda 10.2-cudnn7-devel-ubuntu18.04 build
docker/transformers-past-gpu/Dockerfile docker
  • $BASE_DOCKER_IMAGE latest build
docker/transformers-pytorch-amd-gpu/Dockerfile docker
  • rocm/pytorch rocm5.6_ubuntu20.04_py3.8_pytorch_2.0.1 build
docker/transformers-pytorch-deepspeed-latest-gpu/Dockerfile docker
  • nvcr.io/nvidia/pytorch 22.12-py3 build
docker/transformers-pytorch-deepspeed-nightly-gpu/Dockerfile docker
  • nvcr.io/nvidia/pytorch 22.12-py3 build
docker/transformers-pytorch-gpu/Dockerfile docker
  • nvidia/cuda 11.8.0-cudnn8-devel-ubuntu20.04 build
docker/transformers-pytorch-tpu/Dockerfile docker
  • google/cloud-sdk slim build
docker/transformers-tensorflow-gpu/Dockerfile docker
  • nvidia/cuda 11.8.0-cudnn8-devel-ubuntu20.04 build
examples/research_projects/quantization-qdqbert/Dockerfile docker
  • nvcr.io/nvidia/pytorch 22.02-py3 build
examples/flax/_tests_requirements.txt pypi
  • conllu * test
  • datasets >=1.1.3 test
  • evaluate >=0.2.0 test
  • nltk * test
  • pytest * test
  • rouge-score * test
  • seqeval * test
  • tensorboard * test
examples/flax/language-modeling/requirements.txt pypi
  • datasets >=1.1.3
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.9
examples/flax/question-answering/requirements.txt pypi
  • datasets >=1.8.0
  • flax >=0.3.5
  • jax >=0.2.17
  • jaxlib >=0.1.68
  • optax >=0.0.8
examples/flax/summarization/requirements.txt pypi
  • datasets >=1.1.3
  • evaluate >=0.2.0
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
examples/flax/text-classification/requirements.txt pypi
  • datasets >=1.1.3
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
examples/flax/token-classification/requirements.txt pypi
  • datasets >=1.8.0
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
  • seqeval *
examples/flax/vision/requirements.txt pypi
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
  • torch ==1.11.0
  • torchvision ==0.12.0
examples/legacy/pytorch-lightning/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • ray *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • seqeval *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
examples/legacy/seq2seq/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • seqeval *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
examples/pytorch/_tests_requirements.txt pypi
  • accelerate main test
  • conllu * test
  • datasets >=1.13.3 test
  • elasticsearch * test
  • evaluate >=0.2.0 test
  • faiss-cpu * test
  • fire * test
  • git-python ==1.0.3 test
  • jiwer * test
  • librosa * test
  • matplotlib * test
  • nltk * test
  • pandas * test
  • protobuf * test
  • psutil * test
  • pytest * test
  • rouge-score * test
  • sacrebleu >=1.4.12 test
  • scikit-learn * test
  • sentencepiece * test
  • seqeval * test
  • streamlit * test
  • tensorboard * test
  • tensorflow_datasets * test
  • torchvision * test
examples/pytorch/audio-classification/requirements.txt pypi
  • datasets >=1.14.0
  • evaluate *
  • librosa *
  • torch >=1.6
  • torchaudio *
examples/pytorch/contrastive-image-text/requirements.txt pypi
  • datasets >=1.8.0
  • torch >=1.5.0
  • torchvision >=0.6.0
examples/pytorch/image-classification/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.17.0
  • evaluate *
  • torch >=1.5.0
  • torchvision >=0.6.0
examples/pytorch/image-pretraining/requirements.txt pypi
  • datasets >=1.8.0
  • torch >=1.5.0
  • torchvision >=0.6.0
examples/pytorch/language-modeling/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.8.0
  • evaluate *
  • protobuf *
  • scikit-learn *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/multiple-choice/requirements.txt pypi
  • accelerate >=0.12.0
  • evaluate *
  • protobuf *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/question-answering/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.8.0
  • evaluate *
  • torch >=1.3.0
examples/pytorch/semantic-segmentation/requirements.txt pypi
  • datasets >=2.0.0
  • evaluate *
  • torch >=1.3
examples/pytorch/speech-pretraining/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.12.0
  • librosa *
  • torch >=1.5
  • torchaudio *
examples/pytorch/speech-recognition/requirements.txt pypi
  • datasets >=1.18.0
  • evaluate *
  • jiwer *
  • librosa *
  • torch >=1.5
  • torchaudio *
examples/pytorch/summarization/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.8.0
  • evaluate *
  • nltk *
  • protobuf *
  • py7zr *
  • rouge-score *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/text-classification/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.8.0
  • evaluate *
  • protobuf *
  • scikit-learn *
  • scipy *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/text-generation/requirements.txt pypi
  • accelerate >=0.21.0
  • protobuf *
  • sentencepiece *
  • torch >=1.3
examples/pytorch/token-classification/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.8.0
  • evaluate *
  • seqeval *
  • torch >=1.3
examples/pytorch/translation/requirements.txt pypi
  • accelerate >=0.12.0
  • datasets >=1.8.0
  • evaluate *
  • protobuf *
  • py7zr *
  • sacrebleu >=1.4.12
  • sentencepiece *
  • torch >=1.3
examples/research_projects/adversarial/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/bert-loses-patience/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/bertabs/requirements.txt pypi
  • nltk *
  • py-rouge *
  • transformers ==3.5.1
examples/research_projects/bertology/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/codeparrot/examples/requirements.txt pypi
  • datasets ==2.3.2
  • evaluate ==0.2.2
  • scikit-learn ==1.1.2
  • transformers ==4.21.1
  • wandb ==0.13.1
examples/research_projects/codeparrot/requirements.txt pypi
  • datasets ==1.16.0
  • datasketch ==1.5.7
  • dpu_utils *
  • huggingface-hub ==0.1.0
  • tensorboard ==2.6.0
  • torch ==1.11.0
  • transformers ==4.19.0
  • wandb ==0.12.0
examples/research_projects/decision_transformer/requirements.txt pypi
  • APScheduler ==3.9.1
  • Brotli ==1.0.9
  • Cython ==0.29.28
  • Deprecated ==1.2.13
  • Flask ==2.3.2
  • Flask-Compress ==1.11
  • GitPython ==3.1.32
  • Jinja2 ==2.11.3
  • Keras-Preprocessing ==1.1.2
  • Mako ==1.2.2
  • Markdown ==3.3.6
  • MarkupSafe ==1.1.1
  • Pillow ==9.3.0
  • Pint ==0.16.1
  • PyYAML ==6.0
  • Pygments ==2.15.0
  • SQLAlchemy ==1.4.32
  • SoundFile ==0.10.3.post1
  • Werkzeug ==2.2.3
  • absl-py ==1.0.0
  • aiohttp ==3.8.5
  • aiosignal ==1.2.0
  • alembic ==1.7.7
  • appdirs ==1.4.4
  • arrow ==1.2.2
  • asttokens ==2.0.5
  • astunparse ==1.6.3
  • async-timeout ==4.0.2
  • attrs ==21.4.0
  • audioread ==2.1.9
  • autopage ==0.5.0
  • backcall ==0.2.0
  • backoff ==1.11.1
  • backports.zoneinfo ==0.2.1
  • binaryornot ==0.4.4
  • black ==22.1.0
  • boto3 ==1.16.34
  • botocore ==1.19.63
  • cachetools ==5.0.0
  • certifi ==2023.7.22
  • cffi ==1.15.0
  • chardet ==4.0.0
  • charset-normalizer ==2.0.12
  • chex ==0.1.1
  • click ==8.0.4
  • cliff ==3.10.1
  • clldutils ==3.11.1
  • cloudpickle ==2.0.0
  • cmaes ==0.8.2
  • cmd2 ==2.4.0
  • codecarbon ==1.2.0
  • colorlog ==6.6.0
  • cookiecutter ==2.1.1
  • cryptography ==41.0.2
  • csvw ==2.0.0
  • cycler ==0.11.0
  • dash ==2.3.0
  • dash-bootstrap-components ==1.0.3
  • dash-core-components ==2.0.0
  • dash-html-components ==2.0.0
  • dash-table ==5.0.0
  • datasets ==2.0.0
  • decorator ==5.1.1
  • dill ==0.3.4
  • dlinfo ==1.2.1
  • dm-tree ==0.1.6
  • docker ==4.4.4
  • execnet ==1.9.0
  • executing ==0.8.3
  • faiss-cpu ==1.7.2
  • fasteners ==0.17.3
  • filelock ==3.6.0
  • fire ==0.4.0
  • flake8 ==4.0.1
  • flatbuffers ==2.0
  • flax ==0.4.0
  • fonttools ==4.31.1
  • frozenlist ==1.3.0
  • fsspec ==2022.2.0
  • fugashi ==1.1.2
  • gast ==0.5.3
  • gitdb ==4.0.9
  • glfw ==2.5.1
  • google-auth ==2.6.2
  • google-auth-oauthlib ==0.4.6
  • google-pasta ==0.2.0
  • greenlet ==1.1.2
  • grpcio ==1.44.0
  • gym ==0.23.1
  • gym-notices ==0.0.6
  • h5py ==3.6.0
  • huggingface-hub ==0.4.0
  • hypothesis ==6.39.4
  • idna ==3.3
  • imageio ==2.16.1
  • importlib-metadata ==4.11.3
  • importlib-resources ==5.4.0
  • iniconfig ==1.1.1
  • ipadic ==1.0.0
  • ipython ==8.10.0
  • isodate ==0.6.1
  • isort ==5.10.1
  • itsdangerous ==2.1.1
  • jax ==0.3.4
  • jaxlib ==0.3.2
  • jedi ==0.18.1
  • jinja2-time ==0.2.0
  • jmespath ==0.10.0
  • joblib ==1.2.0
  • jsonschema ==4.4.0
  • keras ==2.8.0
  • kiwisolver ==1.4.0
  • kubernetes ==12.0.1
  • libclang ==13.0.0
  • librosa ==0.9.1
  • llvmlite ==0.38.0
  • matplotlib ==3.5.1
  • matplotlib-inline ==0.1.3
  • mccabe ==0.6.1
  • msgpack ==1.0.3
  • mujoco-py ==2.1.2.14
  • multidict ==6.0.2
  • multiprocess ==0.70.12.2
  • mypy-extensions ==0.4.3
  • nltk ==3.7
  • numba ==0.55.1
  • numpy ==1.22.3
  • oauthlib ==3.2.2
  • onnx ==1.13.0
  • onnxconverter-common ==1.9.0
  • opt-einsum ==3.3.0
  • optax ==0.1.1
  • optuna ==2.10.0
  • packaging ==21.3
  • pandas ==1.4.1
  • parameterized ==0.8.1
  • parso ==0.8.3
  • pathspec ==0.9.0
  • pbr ==5.8.1
  • pexpect ==4.8.0
  • phonemizer ==3.0.1
  • pickleshare ==0.7.5
  • plac ==1.3.4
  • platformdirs ==2.5.1
  • plotly ==5.6.0
  • pluggy ==1.0.0
  • pooch ==1.6.0
  • portalocker ==2.0.0
  • poyo ==0.5.0
  • prettytable ==3.2.0
  • prompt-toolkit ==3.0.28
  • protobuf ==3.19.5
  • psutil ==5.9.0
  • ptyprocess ==0.7.0
  • pure-eval ==0.2.2
  • py ==1.11.0
  • py-cpuinfo ==8.0.0
  • pyOpenSSL ==22.0.0
  • pyarrow ==7.0.0
  • pyasn1 ==0.4.8
  • pyasn1-modules ==0.2.8
  • pycodestyle ==2.8.0
  • pycparser ==2.21
  • pyctcdecode ==0.3.0
  • pyflakes ==2.4.0
  • pygtrie ==2.4.2
  • pynvml ==11.4.1
  • pyparsing ==3.0.7
  • pyperclip ==1.8.2
  • pypng ==0.0.21
  • pyrsistent ==0.18.1
  • pytest ==7.1.1
  • pytest-forked ==1.4.0
  • pytest-timeout ==2.1.0
  • pytest-xdist ==2.5.0
  • python-dateutil ==2.8.2
  • python-slugify ==6.1.1
  • pytz ==2022.1
  • pytz-deprecation-shim ==0.1.0.post0
  • ray ==1.11.0
  • redis ==4.5.4
  • regex ==2022.3.15
  • requests ==2.31.0
  • requests-oauthlib ==1.3.1
  • resampy ==0.2.2
  • responses ==0.18.0
  • rfc3986 ==1.5.0
  • rouge-score ==0.0.4
  • rsa ==4.8
  • s3transfer ==0.3.7
  • sacrebleu ==1.5.1
  • sacremoses ==0.0.49
  • scikit-learn ==1.0.2
  • scipy ==1.8.0
  • segments ==2.2.0
  • sentencepiece ==0.1.96
  • sigopt ==8.2.0
  • six ==1.16.0
  • smmap ==5.0.0
  • sortedcontainers ==2.4.0
  • stack-data ==0.2.0
  • stevedore ==3.5.0
  • tabulate ==0.8.9
  • tenacity ==8.0.1
  • tensorboard ==2.8.0
  • tensorboard-data-server ==0.6.1
  • tensorboard-plugin-wit ==1.8.1
  • tensorboardX ==2.5
  • tensorflow ==2.8.1
  • tensorflow-io-gcs-filesystem ==0.24.0
  • termcolor ==1.1.0
  • text-unidecode ==1.3
  • tf-estimator-nightly ==2.8.0.dev2021122109
  • tf2onnx ==1.9.3
  • threadpoolctl ==3.1.0
  • timeout-decorator ==0.5.0
  • timm ==0.5.4
  • tokenizers ==0.11.6
  • tomli ==2.0.1
  • toolz ==0.11.2
  • torch ==1.11.0
  • torchaudio ==0.11.0
  • torchvision ==0.12.0
  • tqdm ==4.63.0
  • traitlets ==5.1.1
  • typing-extensions ==4.1.1
  • tzdata ==2022.1
  • tzlocal ==4.1
  • unidic ==1.1.0
  • unidic-lite ==1.0.8
  • uritemplate ==4.1.1
  • urllib3 ==1.26.9
  • wasabi ==0.9.0
  • wcwidth ==0.2.5
  • websocket-client ==1.3.1
  • wrapt ==1.14.0
  • xxhash ==3.0.0
  • yarl ==1.7.2
  • zipp ==3.7.0
examples/research_projects/deebert/requirements.txt pypi
  • transformers ==3.5.1
examples/research_projects/distillation/requirements.txt pypi
  • gitpython ==3.1.32
  • psutil ==5.6.6
  • scipy >=1.4.1
  • tensorboard >=1.14.0
  • tensorboardX ==1.8
  • transformers *
examples/research_projects/fsner/pyproject.toml pypi
examples/research_projects/fsner/requirements.txt pypi
  • transformers >=4.9.2
examples/research_projects/fsner/setup.py pypi
  • torch >=1.9.0
examples/research_projects/information-gain-filtration/requirements.txt pypi
  • joblib >=0.13.2
  • matplotlib *
  • numpy >=1.17.2
  • scipy *
  • torch >=1.10.1
  • transformers >=3.5
examples/research_projects/jax-projects/big_bird/requirements.txt pypi
  • datasets *
  • flax *
  • jsonlines *
  • sentencepiece *
  • wandb *
examples/research_projects/jax-projects/hybrid_clip/requirements.txt pypi
  • flax >=0.3.5
  • jax >=0.2.8
  • jaxlib >=0.1.59
  • optax >=0.0.8
  • torch ==1.9.0
  • torchvision ==0.10.0
examples/research_projects/layoutlmv3/requirements.txt pypi
  • datasets *
  • pillow *
  • seqeval *
examples/research_projects/longform-qa/requirements.txt pypi
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • streamlit *
examples/research_projects/lxmert/requirements.txt pypi
  • CacheControl ==0.12.6
  • Jinja2 >=2.11.3
  • MarkupSafe ==1.1.1
  • Pillow >=8.1.1
  • PyYAML >=5.4
  • Pygments >=2.7.4
  • QtPy ==1.9.0
  • Send2Trash ==1.5.0
  • appdirs ==1.4.3
  • argon2-cffi ==20.1.0
  • async-generator ==1.10
  • attrs ==20.2.0
  • backcall ==0.2.0
  • certifi ==2023.7.22
  • cffi ==1.14.2
  • chardet ==3.0.4
  • click ==7.1.2
  • colorama ==0.4.3
  • contextlib2 ==0.6.0
  • cycler ==0.10.0
  • datasets ==1.0.0
  • decorator ==4.4.2
  • defusedxml ==0.6.0
  • dill ==0.3.2
  • distlib ==0.3.0
  • distro ==1.4.0
  • entrypoints ==0.3
  • filelock ==3.0.12
  • future ==0.18.3
  • html5lib ==1.0.1
  • idna ==2.8
  • ipaddr ==2.2.0
  • ipykernel ==5.3.4
  • ipython *
  • ipython-genutils ==0.2.0
  • ipywidgets ==7.5.1
  • jedi ==0.17.2
  • joblib ==1.2.0
  • jsonschema ==3.2.0
  • jupyter ==1.0.0
  • jupyter-client ==6.1.7
  • jupyter-console ==6.2.0
  • jupyter-core ==4.6.3
  • jupyterlab-pygments ==0.1.1
  • kiwisolver ==1.2.0
  • lockfile ==0.12.2
  • matplotlib ==3.3.1
  • mistune ==2.0.3
  • msgpack ==0.6.2
  • nbclient ==0.5.0
  • nbconvert ==6.5.1
  • nbformat ==5.0.7
  • nest-asyncio ==1.4.0
  • notebook ==6.4.12
  • numpy ==1.22.0
  • opencv-python ==4.4.0.42
  • packaging ==20.3
  • pandas ==1.1.2
  • pandocfilters ==1.4.2
  • parso ==0.7.1
  • pep517 ==0.8.2
  • pexpect ==4.8.0
  • pickleshare ==0.7.5
  • progress ==1.5
  • prometheus-client ==0.8.0
  • prompt-toolkit ==3.0.7
  • ptyprocess ==0.6.0
  • pyaml ==20.4.0
  • pyarrow ==1.0.1
  • pycparser ==2.20
  • pyparsing ==2.4.6
  • pyrsistent ==0.16.0
  • python-dateutil ==2.8.1
  • pytoml ==0.1.21
  • pytz ==2020.1
  • pyzmq ==19.0.2
  • qtconsole ==4.7.7
  • regex ==2020.7.14
  • requests ==2.31.0
  • retrying ==1.3.3
  • sacremoses ==0.0.43
  • sentencepiece ==0.1.91
  • six ==1.14.0
  • terminado ==0.8.3
  • testpath ==0.4.4
  • tokenizers ==0.8.1rc2
  • torch ==1.6.0
  • torchvision ==0.7.0
  • tornado ==6.3.3
  • tqdm ==4.48.2
  • traitlets *
  • urllib3 ==1.26.5
  • wcwidth ==0.2.5
  • webencodings ==0.5.1
  • wget ==3.2
  • widgetsnbextension ==3.5.1
  • xxhash ==2.0.0
examples/research_projects/mlm_wwm/requirements.txt pypi
  • datasets >=1.1.3
  • ltp *
  • protobuf *
  • sentencepiece *
examples/research_projects/movement-pruning/requirements.txt pypi
  • h5py >=2.10.0
  • knockknock >=0.1.8.1
  • numpy >=1.18.2
  • scipy >=1.4.1
  • torch >=1.4.0
examples/research_projects/onnx/summarization/requirements.txt pypi
  • torch >=1.10
examples/research_projects/pplm/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • pytorch-lightning *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • seqeval *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
  • transformers ==3.5.1
examples/research_projects/rag/requirements.txt pypi
  • GitPython *
  • datasets >=1.0.1
  • faiss-cpu >=1.6.3
  • psutil >=5.7.0
  • pytorch-lightning >=1.5.10,<=1.6.0
  • ray >=1.10.0
  • torch >=1.4.0
  • transformers *
examples/research_projects/rag-end2end-retriever/requirements.txt pypi
  • datasets *
  • faiss-cpu >=1.7.2
  • nvidia-ml-py3 ==7.352.0
  • psutil >=5.9.1
  • pytorch-lightning ==1.6.4
  • ray >=1.13.0
  • torch >=1.11.0
examples/research_projects/self-training-text-classification/requirements.txt pypi
  • accelerate *
  • datasets >=1.8.0
  • protobuf *
  • scikit-learn *
  • scipy *
  • sentencepiece *
  • torch >=1.3
examples/research_projects/seq2seq-distillation/requirements.txt pypi
  • conllu *
  • datasets >=1.1.3
  • elasticsearch *
  • faiss-cpu *
  • fire *
  • git-python ==1.0.3
  • matplotlib *
  • nltk *
  • pandas *
  • protobuf *
  • psutil *
  • pytest *
  • pytorch-lightning *
  • rouge-score *
  • sacrebleu *
  • scikit-learn *
  • sentencepiece *
  • streamlit *
  • tensorboard *
  • tensorflow_datasets *
examples/research_projects/tapex/requirements.txt pypi
  • datasets *
  • nltk *
  • numpy *
  • pandas *
examples/research_projects/visual_bert/requirements.txt pypi
  • CacheControl ==0.12.6
  • Jinja2 >=2.11.3
  • MarkupSafe ==1.1.1
  • Pillow >=8.1.1
  • PyYAML >=5.4
  • Pygments >=2.7.4
  • QtPy ==1.9.0
  • Send2Trash ==1.5.0
  • appdirs ==1.4.3
  • argon2-cffi ==20.1.0
  • async-generator ==1.10
  • attrs ==20.2.0
  • backcall ==0.2.0
  • certifi ==2023.7.22
  • cffi ==1.14.2
  • chardet ==3.0.4
  • click ==7.1.2
  • colorama ==0.4.3
  • contextlib2 ==0.6.0
  • cycler ==0.10.0
  • datasets ==1.0.0
  • decorator ==4.4.2
  • defusedxml ==0.6.0
  • dill ==0.3.2
  • distlib ==0.3.0
  • distro ==1.4.0
  • entrypoints ==0.3
  • filelock ==3.0.12
  • future ==0.18.3
  • html5lib ==1.0.1
  • idna ==2.8
  • ipaddr ==2.2.0
  • ipykernel ==5.3.4
  • ipython *
  • ipython-genutils ==0.2.0
  • ipywidgets ==7.5.1
  • jedi ==0.17.2
  • joblib ==1.2.0
  • jsonschema ==3.2.0
  • jupyter ==1.0.0
  • jupyter-client ==6.1.7
  • jupyter-console ==6.2.0
  • jupyter-core ==4.6.3
  • jupyterlab-pygments ==0.1.1
  • kiwisolver ==1.2.0
  • lockfile ==0.12.2
  • matplotlib ==3.3.1
  • mistune ==2.0.3
  • msgpack ==0.6.2
  • nbclient ==0.5.0
  • nbconvert ==6.5.1
  • nbformat ==5.0.7
  • nest-asyncio ==1.4.0
  • notebook ==6.4.12
  • numpy ==1.22.0
  • opencv-python ==4.4.0.42
  • packaging ==20.3
  • pandas ==1.1.2
  • pandocfilters ==1.4.2
  • parso ==0.7.1
  • pep517 ==0.8.2
  • pexpect ==4.8.0
  • pickleshare ==0.7.5
  • progress ==1.5
  • prometheus-client ==0.8.0
  • prompt-toolkit ==3.0.7
  • ptyprocess ==0.6.0
  • pyaml ==20.4.0
  • pyarrow ==1.0.1
  • pycparser ==2.20
  • pyparsing ==2.4.6
  • pyrsistent ==0.16.0
  • python-dateutil ==2.8.1
  • pytoml ==0.1.21
  • pytz ==2020.1
  • pyzmq ==19.0.2
  • qtconsole ==4.7.7
  • regex ==2020.7.14
  • requests ==2.31.0
  • retrying ==1.3.3
  • sacremoses ==0.0.43
  • sentencepiece ==0.1.91
  • six ==1.14.0
  • terminado ==0.8.3
  • testpath ==0.4.4
  • tokenizers ==0.8.1rc2
  • torch ==1.6.0
  • torchvision ==0.7.0
  • tornado ==6.3.3
  • tqdm ==4.48.2
  • traitlets *
  • urllib3 ==1.26.5
  • wcwidth ==0.2.5
  • webencodings ==0.5.1
  • wget ==3.2
  • widgetsnbextension ==3.5.1
  • xxhash ==2.0.0
examples/research_projects/vqgan-clip/requirements.txt pypi
  • Pillow *
  • PyYAML *
  • einops *
  • gradio *
  • icecream *
  • imageio *
  • lpips *
  • matplotlib *
  • more_itertools *
  • numpy *
  • omegaconf *
  • opencv_python_headless *
  • pudb *
  • pytorch_lightning *
  • requests *
  • scikit_image *
  • scipy *
  • setuptools *
  • streamlit *
  • taming-transformers *
  • tokenizers ==0.13.2
  • torch *
  • torchvision *
  • tqdm *
  • transformers ==4.26.0
  • typing_extensions *
  • wandb *
examples/research_projects/wav2vec2/requirements.txt pypi
  • datasets *
  • jiwer ==2.2.0
  • lang-trans ==0.6.0
  • librosa ==0.8.0
  • torch >=1.5.0
  • torchaudio *
  • transformers *
examples/research_projects/xtreme-s/requirements.txt pypi
  • datasets >=1.18.0
  • jiwer *
  • librosa *
  • torch >=1.5
  • torchaudio *
examples/tensorflow/_tests_requirements.txt pypi
  • conllu * test
  • datasets >=1.13.3 test
  • elasticsearch * test
  • evaluate >=0.2.0 test
  • faiss-cpu * test
  • fire * test
  • git-python ==1.0.3 test
  • jiwer * test
  • librosa * test
  • matplotlib * test
  • nltk * test
  • pandas * test
  • protobuf * test
  • psutil * test
  • pytest * test
  • rouge-score * test
  • sacrebleu >=1.4.12 test
  • scikit-learn * test
  • sentencepiece * test
  • seqeval * test
  • streamlit * test
  • tensorboard * test
  • tensorflow <2.15 test
  • tensorflow_datasets * test
examples/tensorflow/benchmarking/requirements.txt pypi
  • tensorflow >=2.3
examples/tensorflow/contrastive-image-text/requirements.txt pypi
  • datasets >=1.8.0
  • tensorflow >=2.6.0
examples/tensorflow/image-classification/requirements.txt pypi
  • datasets >=1.17.0
  • evaluate *
  • tensorflow >=2.4
examples/tensorflow/language-modeling/requirements.txt pypi
  • datasets >=1.8.0
  • sentencepiece *
examples/tensorflow/language-modeling-tpu/requirements.txt pypi
  • datasets ==2.9.0
  • tokenizers ==0.13.2
  • transformers ==4.26.1
examples/tensorflow/multiple-choice/requirements.txt pypi
  • protobuf *
  • sentencepiece *
  • tensorflow >=2.3
examples/tensorflow/question-answering/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
examples/tensorflow/summarization/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
examples/tensorflow/text-classification/requirements.txt pypi
  • datasets >=1.1.3
  • evaluate >=0.2.0
  • protobuf *
  • sentencepiece *
  • tensorflow >=2.3
examples/tensorflow/token-classification/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
examples/tensorflow/translation/requirements.txt pypi
  • datasets >=1.4.0
  • evaluate >=0.2.0
  • tensorflow >=2.3.0
pyproject.toml pypi