torchtune

PyTorch native post-training library

https://github.com/pytorch/torchtune

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    5 of 142 committers (3.5%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.1%) to scientific vocabulary

Keywords from Contributors

transformer optimizer quantization sparsity training llama inference offloading mx float8
Last synced: 6 months ago

Repository

PyTorch native post-training library

Basic Info
Statistics
  • Stars: 5,446
  • Watchers: 46
  • Forks: 668
  • Open Issues: 427
  • Releases: 10
Created over 2 years ago · Last pushed 6 months ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md

torchtune


Overview | Installation | Get Started | Documentation | Community | Citing torchtune | License

📣 Recent updates 📣

  • May 2025: torchtune has added support for Qwen3 models! Check out all the configs here
  • April 2025: Llama4 is now available in torchtune! Try out our full and LoRA finetuning configs here
  • February 2025: Multi-node training is officially open for business in torchtune! Full finetune on multiple nodes to take advantage of larger batch sizes and models.
  • December 2024: torchtune now supports Llama 3.3 70B! Try it out by following our installation instructions here, then run any of the configs here.
  • November 2024: torchtune has released v0.4.0 which includes stable support for exciting features like activation offloading and multimodal QLoRA
  • November 2024: torchtune has added Gemma2 to its models!
  • October 2024: torchtune added support for Qwen2.5 models - find the configs here
  • September 2024: torchtune has support for Llama 3.2 11B Vision, Llama 3.2 3B, and Llama 3.2 1B models! Try them out by following our installation instructions here, then run any of the text configs here or vision configs here.

 

Overview 📚

torchtune is a PyTorch library for easily authoring, post-training, and experimenting with LLMs. It provides:

  • Hackable training recipes for SFT, knowledge distillation, DPO, PPO, GRPO, and quantization-aware training
  • Simple PyTorch implementations of popular LLMs like Llama, Gemma, Mistral, Phi, Qwen, and more
  • Best-in-class memory efficiency, performance improvements, and scaling, utilizing the latest PyTorch APIs
  • YAML configs for easily configuring training, evaluation, quantization or inference recipes

 

Post-training recipes

torchtune supports the entire post-training lifecycle. A successfully post-trained model will likely make use of several of the methods below.

Supervised Finetuning (SFT)

| Type of Weight Update | 1 Device | >1 Device | >1 Node |
|-----------------------|:--------:|:---------:|:-------:|
| Full | ✅ | ✅ | ✅ |
| LoRA/QLoRA | ✅ | ✅ | ✅ |

Example: tune run lora_finetune_single_device --config llama3_2/3B_lora_single_device
You can also run e.g. tune ls lora_finetune_single_device for a full list of available configs.

Knowledge Distillation (KD)

| Type of Weight Update | 1 Device | >1 Device | >1 Node |
|-----------------------|:--------:|:---------:|:-------:|
| Full | ❌ | ❌ | ❌ |
| LoRA/QLoRA | ✅ | ✅ | ❌ |

Example: tune run knowledge_distillation_distributed --config qwen2/1.5B_to_0.5B_KD_lora_distributed
You can also run e.g. tune ls knowledge_distillation_distributed for a full list of available configs.

Reinforcement Learning / Reinforcement Learning from Human Feedback (RLHF)

| Method | Type of Weight Update | 1 Device | >1 Device | >1 Node |
|--------|-----------------------|:--------:|:---------:|:-------:|
| DPO | Full | ❌ | ✅ | ❌ |
| | LoRA/QLoRA | ✅ | ✅ | ❌ |
| PPO | Full | ✅ | ❌ | ❌ |
| | LoRA/QLoRA | ❌ | ❌ | ❌ |
| GRPO | Full | 🚧 | ✅ | ✅ |
| | LoRA/QLoRA | ❌ | ❌ | ❌ |

Example: tune run lora_dpo_single_device --config llama3_1/8B_dpo_single_device
You can also run e.g. tune ls full_dpo_distributed for a full list of available configs.

Quantization-Aware Training (QAT)

| Type of Weight Update | 1 Device | >1 Device | >1 Node |
|-----------------------|:--------:|:---------:|:-------:|
| Full | ✅ | ✅ | ❌ |
| LoRA/QLoRA | ❌ | ✅ | ❌ |

Example: tune run qat_distributed --config llama3_1/8B_qat_lora
You can also run e.g. tune ls qat_distributed or tune ls qat_single_device for a full list of available configs.

The above configs are just examples to get you started. The full list of recipes can be found here. If you'd like to work on one of the gaps you see, please submit a PR! If there's an entirely new post-training method you'd like to see implemented in torchtune, feel free to open an Issue.

 

Models

For the above recipes, torchtune supports many state-of-the-art models available on the Hugging Face Hub or Kaggle Hub. Some of our supported models:

| Model | Sizes |
|-------|-------|
| Llama4 | Scout (17B x 16E) [models, configs] |
| Llama3.3 | 70B [models, configs] |
| Llama3.2-Vision | 11B, 90B [models, configs] |
| Llama3.2 | 1B, 3B [models, configs] |
| Llama3.1 | 8B, 70B, 405B [models, configs] |
| Mistral | 7B [models, configs] |
| Gemma2 | 2B, 9B, 27B [models, configs] |
| Microsoft Phi4 | 14B [models, configs] |
| Microsoft Phi3 | Mini [models, configs] |
| Qwen3 | 0.6B, 1.7B, 4B, 8B, 14B, 32B [models, configs] |
| Qwen2.5 | 0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B [models, configs] |
| Qwen2 | 0.5B, 1.5B, 7B [models, configs] |

We're always adding new models, but feel free to file an issue if there's a new one you would like to see in torchtune.

 

Memory and training speed

Below is an example of the memory requirements and training speed for different Llama 3.1 models.

[!NOTE] For ease of comparison, all the below numbers are provided for batch size 2 (without gradient accumulation), a dataset packed to sequence length 2048, and torch compile enabled.

If you are interested in running on different hardware or with different models, check out our documentation on memory optimizations here to find the right setup for you.

| Model | Finetuning Method | Runnable On | Peak Memory per GPU | Tokens/sec * |
|:-:|:-:|:-:|:-:|:-:|
| Llama 3.1 8B | Full finetune | 1x 4090 | 18.9 GiB | 1650 |
| Llama 3.1 8B | Full finetune | 1x A6000 | 37.4 GiB | 2579 |
| Llama 3.1 8B | LoRA | 1x 4090 | 16.2 GiB | 3083 |
| Llama 3.1 8B | LoRA | 1x A6000 | 30.3 GiB | 4699 |
| Llama 3.1 8B | QLoRA | 1x 4090 | 7.4 GiB | 2413 |
| Llama 3.1 70B | Full finetune | 8x A100 | 13.9 GiB ** | 1568 |
| Llama 3.1 70B | LoRA | 8x A100 | 27.6 GiB | 3497 |
| Llama 3.1 405B | QLoRA | 8x A100 | 44.8 GiB | 653 |

\* = Measured over one full training epoch

\*\* = Uses CPU offload with fused optimizer
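The memory levers referenced in the documentation above are ordinary config overrides, so a starting point for fitting one of the table's configurations onto smaller hardware might look like the hedged sketch below. The flag names come from the reproduction command in the Optimization flags section that follows; verify them against your config before relying on them.

```bash
# Sketch: apply common memory-saving overrides to the single-device 8B LoRA config.
# Flag names mirror those used later in this README; check them against your
# config (e.g. via `tune cp llama3_1/8B_lora_single_device ./cfg.yaml`) first.
tune run lora_finetune_single_device --config llama3_1/8B_lora_single_device \
  dataset.packed=True \
  compile=True \
  enable_activation_checkpointing=True \
  enable_activation_offloading=True
```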

 

Optimization flags

torchtune exposes a number of levers for memory efficiency and performance. The table below demonstrates the effects of applying some of these techniques sequentially to the Llama 3.2 3B model. Each technique is added on top of the previous one, except for LoRA and QLoRA, which do not use optimizer_in_bwd or AdamW8bit optimizer.

Baseline uses Recipe=full_finetune_single_device, Model=Llama 3.2 3B, Batch size=2, Max sequence length=4096, Precision=bf16, Hardware=A100

| Technique | Peak Memory Active (GiB) | % Change Memory vs Previous | Tokens Per Second | % Change Tokens/sec vs Previous |
|:--|:-:|:-:|:-:|:-:|
| Baseline | 25.5 | - | 2091 | - |
| + Packed Dataset | 60.0 | +135.16% | 7075 | +238.40% |
| + Compile | 51.0 | -14.93% | 8998 | +27.18% |
| + Chunked Cross Entropy | 42.9 | -15.83% | 9174 | +1.96% |
| + Activation Checkpointing | 24.9 | -41.93% | 7210 | -21.41% |
| + Fuse optimizer step into backward | 23.1 | -7.29% | 7309 | +1.38% |
| + Activation Offloading | 21.8 | -5.48% | 7301 | -0.11% |
| + 8-bit AdamW | 17.6 | -19.63% | 6960 | -4.67% |
| LoRA | 8.5 | -51.61% | 8210 | +17.96% |
| QLoRA | 4.6 | -45.71% | 8035 | -2.13% |

Compared with the baseline, the final row in the table uses 81.9% less memory while delivering a 284.3% increase in tokens per second.

Command to reproduce the final row:

```bash
tune run lora_finetune_single_device --config llama3_2/3B_qlora_single_device \
  dataset.packed=True \
  compile=True \
  loss=torchtune.modules.loss.CEWithChunkedOutputLoss \
  enable_activation_checkpointing=True \
  optimizer_in_bwd=False \
  enable_activation_offloading=True \
  optimizer=torch.optim.AdamW \
  tokenizer.max_seq_len=4096 \
  gradient_accumulation_steps=1 \
  epochs=1 \
  batch_size=2
```

 

Installation 🛠️

torchtune is tested only against the latest stable PyTorch release (currently 2.6.0) and the preview nightly version. It leverages torchvision for finetuning multimodal LLMs and torchao for the latest quantization techniques; you should install these as well.

Install stable release

```bash
# Install stable PyTorch, torchvision, and torchao releases
pip install torch torchvision torchao
pip install torchtune
```

Install nightly release

```bash
# Install PyTorch, torchvision, torchao nightlies.
# Full options are cpu/cu118/cu124/cu126/xpu/rocm6.2/rocm6.3/rocm6.4
pip install --pre --upgrade torch torchvision torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install --pre --upgrade torchtune --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```

You can also check out our install documentation for more information, including installing torchtune from source.
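For reference, a from-source install typically looks like the sketch below; this assumes a standard pip editable install, so defer to the install documentation for the exact supported steps.

```bash
# Sketch of a from-source install (assumes a standard editable pip install;
# see the official install docs for the authoritative procedure)
git clone https://github.com/pytorch/torchtune.git
cd torchtune
pip install -e .
```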

 

To confirm that the package is installed correctly, you can run the following command:

```bash
tune --help
```

You should see the following output:

```bash
usage: tune [-h] {ls,cp,download,run,validate} ...

Welcome to the torchtune CLI!

options:
  -h, --help  show this help message and exit

...
```

 

Get Started 🚀

To get started with torchtune, see our First Finetune Tutorial. Our End-to-End Workflow Tutorial will show you how to evaluate, quantize, and run inference with a Llama model. The rest of this section will provide a quick overview of these steps with Llama3.1.

Downloading a model

Follow the instructions on the official meta-llama repository to ensure you have access to the official Llama model weights. Once you have confirmed access, you can run the following command to download the weights to your local machine. This will also download the tokenizer model and a responsible use guide.

To download Llama3.1, you can run:

```bash
tune download meta-llama/Meta-Llama-3.1-8B-Instruct \
  --output-dir /tmp/Meta-Llama-3.1-8B-Instruct \
  --ignore-patterns "original/consolidated.00.pth" \
  --hf-token <HF_TOKEN>
```

[!Tip] Set your environment variable HF_TOKEN or pass in --hf-token to the command in order to validate your access. You can find your token at https://huggingface.co/settings/tokens
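For example, exporting the token once in your shell avoids passing --hf-token on every command; the value below is a placeholder for your own token.

```bash
# Placeholder value; use your own token from https://huggingface.co/settings/tokens
export HF_TOKEN=<HF_TOKEN>
```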

Running finetuning recipes

You can finetune Llama3.1 8B with LoRA on a single GPU using the following command:

```bash
tune run lora_finetune_single_device --config llama3_1/8B_lora_single_device
```

For distributed training, the tune CLI integrates with torchrun. To run a full finetune of Llama3.1 8B on two GPUs:

```bash
tune run --nproc_per_node 2 full_finetune_distributed --config llama3_1/8B_full
```

[!Tip] Make sure to place any torchrun commands before the recipe specification. Any CLI args after this will override the config and not impact distributed training.
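To make the ordering concrete, the hedged sketch below combines a torchrun argument with config overrides: torchrun-style flags such as --nproc_per_node go before the recipe name, while overrides like batch_size go after --config.

```bash
# torchrun args (--nproc_per_node) precede the recipe; config overrides follow --config.
# The override names here simply reuse ones shown elsewhere in this README.
tune run --nproc_per_node 2 full_finetune_distributed \
  --config llama3_1/8B_full \
  batch_size=4 \
  enable_activation_checkpointing=True
```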

Modify Configs

There are two ways in which you can modify configs:

Config Overrides

You can directly overwrite config fields from the command line:

```bash
tune run lora_finetune_single_device \
  --config llama2/7B_lora_single_device \
  batch_size=8 \
  enable_activation_checkpointing=True \
  max_steps_per_epoch=128
```

Update a Local Copy

You can also copy the config to your local directory and modify the contents directly:

```bash
tune cp llama3_1/8B_full ./my_custom_config.yaml
Copied to ./my_custom_config.yaml
```

Then, you can run your custom recipe by directing the tune run command to your local files:

```bash
tune run full_finetune_distributed --config ./my_custom_config.yaml
```

Check out tune --help for all possible CLI commands and options. For more information on using and updating configs, take a look at our config deep-dive.

Custom Datasets

torchtune supports finetuning on a variety of different datasets, including instruct-style, chat-style, preference datasets, and more. If you want to learn more about how to apply these components to finetune on your own custom dataset, please check out the provided links along with our API docs.
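As a hedged illustration, switching to a different built-in dataset is typically just another config override. The component path below (torchtune.datasets.alpaca_dataset) and the _component_ override style are assumptions based on torchtune's config conventions, so confirm the exact names against the API docs.

```bash
# Sketch (assumed names): point the recipe at a different built-in dataset builder.
tune run lora_finetune_single_device \
  --config llama3_2/3B_lora_single_device \
  dataset._component_=torchtune.datasets.alpaca_dataset
```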

Custom Devices

torchtune supports finetuning on a variety of devices, including NVIDIA GPU, Intel XPU, AMD ROCm, Apple MPS, and Ascend NPU. If you're interested in running recipes on a custom device, such as Intel XPU, follow the steps below.

Step 1: Refer to the Getting Started on Intel GPU guide to configure your environment.

Step 2: Update device information via either CLI override or config changes. You can directly overwrite config fields from the command line:

```bash
tune run lora_finetune_single_device --config llama3_1/8B_lora_single_device device=xpu
```

Or edit your local copy of the configuration file and replace device: cuda with device: xpu.

 

Community 🌍

torchtune focuses on integrating with popular tools and libraries from the ecosystem. These are just a few examples, with more under development.

 

Community Contributions

We really value our community and the contributions made by our wonderful users. We'll use this section to call out some of these contributions. If you'd like to help out as well, please see the CONTRIBUTING guide.

 

Acknowledgements 🙏

The transformer code in this repository is inspired by the original Llama2 code. We also want to give a huge shout-out to EleutherAI, Hugging Face and Weights & Biases for being wonderful collaborators and for working with us on some of these integrations within torchtune. In addition, we want to acknowledge some other awesome libraries and tools from the ecosystem:

  • gpt-fast for performant LLM inference techniques which we've adopted out-of-the-box
  • llama recipes for spring-boarding the llama2 community
  • bitsandbytes for bringing several memory and performance based techniques to the PyTorch ecosystem
  • @winglian and axolotl for early feedback and brainstorming on torchtune's design and feature set.
  • lit-gpt for pushing the LLM finetuning community forward.
  • HF TRL for making reward modeling more accessible to the PyTorch community.

 

Citing torchtune 📝

If you find the torchtune library useful, please cite it in your work as below.

```bibtex
@software{torchtune,
  title   = {torchtune: PyTorch's finetuning library},
  author  = {torchtune maintainers and contributors},
  url     = {https://github.com/pytorch/torchtune},
  license = {BSD-3-Clause},
  month   = apr,
  year    = {2024}
}
```

 

License

torchtune is released under the BSD 3-Clause license. However, you may have other legal obligations that govern your use of other content, such as the terms of service for third-party models.

Owner

  • Name: pytorch
  • Login: pytorch
  • Kind: organization
  • Location: where the eigens are valued

Citation (CITATION.cff)

cff-version: 1.2.0
title: "torchtune: PyTorch's post-training library"
message: "If you use this software, please cite it as below."
type: software
authors:
  - given-names: "torchtune maintainers and contributors"
url: "https//github.com/pytorch/torchtune"
license: "BSD-3-Clause"
date-released: "2024-04-14"

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 1,248
  • Total Committers: 142
  • Avg Commits per committer: 8.789
  • Development Distribution Score (DDS): 0.857
Past Year
  • Commits: 740
  • Committers: 113
  • Avg Commits per committer: 6.549
  • Development Distribution Score (DDS): 0.868
Top Committers
Name Email Commits
Joe Cummings j****7@g****m 178
ebsmothers e****s@m****m 168
Rafi Ayub 3****A 120
Rohan Varma r****1@f****m 99
Salman Mohammadi s****i@o****m 87
Felipe Mello f****s@g****m 72
Kartikay Khandelwal 4****k 66
Philip Bontrager p****r@g****m 44
Mark 7****c 27
Danielle Pintz 3****z 23
andrewor14 a****4@g****m 19
Svetlana Karslioglu s****s@m****m 16
Botao Chen m****5@m****m 16
Nicolas Hug c****t@n****m 13
Gokul g****g@m****m 12
Jane (Yuan) Xu 3****9 11
Jerry Zhang j****8@g****m 10
Ankur Singh a****1@g****m 10
Thomas J. Fan t****n@g****m 9
acisseJZhong 4****g 9
Calvin Pelletier c****r@g****m 9
ankitageorge a****e@m****m 9
Wei (Will) Feng 1****y 8
Kartikay Khandelwal k****k@f****m 8
Mircea Mironenco m****o@g****m 7
Guoqiong Song g****g@i****m 5
Hardik Shah h****h@m****m 5
Linda Wang 8****g 5
Mark Saroufim m****m@m****m 5
Nathan Azrak 4****z 5
and 112 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 885
  • Total pull requests: 1,687
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 10 days
  • Total issue authors: 309
  • Total pull request authors: 210
  • Average comments per issue: 2.35
  • Average comments per pull request: 2.85
  • Merged pull requests: 1,084
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 523
  • Pull requests: 1,088
  • Average time to close issues: 29 days
  • Average time to close pull requests: 8 days
  • Issue authors: 188
  • Pull request authors: 145
  • Average comments per issue: 2.25
  • Average comments per pull request: 2.83
  • Merged pull requests: 683
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • joecummings (117)
  • SalmanMohammadi (65)
  • felipemello1 (60)
  • RdoubleA (57)
  • ebsmothers (34)
  • rohan-varma (31)
  • pbontrager (23)
  • krammnic (16)
  • kailashg26 (15)
  • Vattikondadheeraj (14)
  • EugenHotaj (12)
  • bogdansalyp (10)
  • nathan-az (9)
  • kartikayk (9)
  • Optimox (9)
Pull Request Authors
  • joecummings (293)
  • ebsmothers (254)
  • felipemello1 (204)
  • RdoubleA (187)
  • SalmanMohammadi (185)
  • pbontrager (86)
  • krammnic (79)
  • rohan-varma (76)
  • ankitageorge (47)
  • andrewor14 (38)
  • kartikayk (34)
  • nathan-az (25)
  • thomasjpfan (24)
  • janeyx99 (24)
  • calvinpelletier (24)
Top Labels
Issue Labels
community help wanted (82) bug (59) enhancement (46) better engineering (45) good first issue (39) discussion (31) documentation (28) best practice (26) testing (14) CLA Signed (12) rfc (11) triaged (10) high-priority (9) question (9) inference (4) wontfix (2) triage review (2) distributed (2) rlhf (1) startup error (1) help wanted (1)
Pull Request Labels
CLA Signed (2,026) fb-exported (33) ci-no-td (10) rfc (8) documentation (4) bug (3) enhancement (2) testing (2) rlhf (2) wontfix (2) distributed (1) question (1) module: rocm (1) ciflow/rocm (1) triage review (1)

Packages

  • Total packages: 5
  • Total downloads:
    • pypi 696,494 last-month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 0
    (may contain duplicates)
  • Total versions: 60
  • Total maintainers: 8
proxy.golang.org: github.com/pytorch/torchtune
  • Versions: 46
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 6 months ago
pypi.org: forked-torchtune

A native-PyTorch library for LLM fine-tuning

  • Documentation: https://pytorch.org/torchtune/main/index.html
  • License: BSD 3-Clause License Copyright 2024 Meta Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice,this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  • Latest release: 0.0.0
    published over 1 year ago
  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 6 Last month
Rankings
Dependent packages count: 10.8%
Average: 36.0%
Dependent repos count: 61.1%
Maintainers (1)
Last synced: 6 months ago
pypi.org: torchchat

Package for finetuning LLMs using native PyTorch

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 15 Last month
Rankings
Dependent packages count: 9.6%
Average: 36.4%
Dependent repos count: 63.1%
Maintainers (3)
Last synced: 6 months ago
pypi.org: torchat

Package for finetuning LLMs using native PyTorch

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 8 Last month
Rankings
Dependent packages count: 9.6%
Average: 36.4%
Dependent repos count: 63.2%
Maintainers (1)
Last synced: 6 months ago
pypi.org: torchtune

A native-PyTorch library for LLM fine-tuning

  • Documentation: https://pytorch.org/torchtune/main/index.html
  • License: BSD 3-Clause License Copyright 2024 Meta Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice,this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  • Latest release: 0.6.1
    published 11 months ago
  • Versions: 11
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 696,465 Last month
Rankings
Dependent packages count: 9.7%
Average: 36.7%
Dependent repos count: 63.8%
Last synced: 6 months ago

Dependencies

.github/workflows/build_docs.yaml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
  • conda-incubator/setup-miniconda v2 composite
.github/workflows/lint.yaml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • tj-actions/changed-files v41.0.0 composite
.github/workflows/recipe_test.yaml actions
  • actions/checkout v3 composite
  • aws-actions/configure-aws-credentials v1.7.0 composite
  • codecov/codecov-action v3 composite
  • conda-incubator/setup-miniconda v2 composite
  • nick-fields/retry v2 composite
.github/workflows/unit_test.yaml actions
  • actions/checkout v3 composite
  • codecov/codecov-action v3 composite
  • conda-incubator/setup-miniconda v2 composite
docs/requirements.txt pypi
  • matplotlib *
  • sphinx ==5.0.0
  • sphinx-gallery >0.11
  • sphinx-tabs *
  • sphinx_copybutton *
  • sphinx_design *
pyproject.toml pypi
.github/workflows/build_linux_wheels.yaml actions
.github/workflows/recipe_test_multi_gpu.yaml actions
  • actions/checkout v3 composite
  • aws-actions/configure-aws-credentials v1.7.0 composite
  • codecov/codecov-action v3 composite
  • conda-incubator/setup-miniconda v2 composite
.github/workflows/recipe_test_nightly.yaml actions
  • actions/checkout v3 composite
  • codecov/codecov-action v3 composite
  • conda-incubator/setup-miniconda v2 composite
.github/workflows/regression_test.yaml actions
  • actions/checkout v3 composite
  • aws-actions/configure-aws-credentials v1.7.0 composite
  • codecov/codecov-action v3 composite
  • conda-incubator/setup-miniconda v2 composite