pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.

https://github.com/lightning-ai/pytorch-lightning

Science Score: 54.0%

This score indicates how likely this project is to be science-related, based on the indicators below (a toy weighted-checklist illustration follows the list):

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    48 of 977 committers (4.9%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.5%) to scientific vocabulary
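
As a toy illustration only, such a score can be modeled as a weighted checklist over these indicators. The weights below are invented for the sketch (the actual scoring formula is not documented here), so it does not reproduce the 54.0% figure:

```python
# Toy sketch of a weighted-checklist "science score".
# All weights are invented for illustration; the real formula is not
# documented here, so this does NOT reproduce the reported 54.0%.

indicators = {
    "citation_cff": 1.0,           # CITATION.cff file found
    "codemeta_json": 1.0,          # codemeta.json file found
    "zenodo_json": 1.0,            # .zenodo.json file found
    "doi_references": 0.0,         # not reported as found above
    "publication_links": 0.0,      # not reported as found above
    "academic_committers": 0.049,  # 48 of 977 committers (4.9%)
    "institutional_owner": 0.0,    # not reported as found above
    "joss_metadata": 0.0,          # not reported as found above
    "vocab_similarity": 0.145,     # 14.5% similarity to scientific vocabulary
}

# Equal weighting across the nine indicators listed above.
score = sum(indicators.values()) / len(indicators)
print(f"toy score: {score:.1%}")  # 35.5% with these invented weights
```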

Keywords

ai artificial-intelligence data-science deep-learning machine-learning python pytorch

Keywords from Contributors

transformer jax cryptocurrency cryptography analyses closember distributed langchain mlops spatial-ai
Last synced: 6 months ago

Repository

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.

Basic Info
Statistics
  • Stars: 30,048
  • Watchers: 254
  • Forks: 3,556
  • Open Issues: 975
  • Releases: 0
Topics
ai artificial-intelligence data-science deep-learning machine-learning python pytorch
Created almost 7 years ago · Last pushed 6 months ago
Metadata Files
Readme Contributing License Code of conduct Citation Codeowners Security

README.md

Lightning

**The deep learning framework to pretrain, finetune and deploy AI models.**

**NEW - Deploying models? Check out [LitServe](https://github.com/Lightning-AI/litserve), the PyTorch Lightning for model serving.**

______________________________________________________________________

Quick start · Examples · PyTorch Lightning · Fabric · Lightning AI · Community · Docs

[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pytorch-lightning)](https://pypi.org/project/pytorch-lightning/) [![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning) [![PyPI - Downloads](https://img.shields.io/pypi/dm/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning) [![Conda](https://img.shields.io/conda/v/conda-forge/lightning?label=conda&color=success)](https://anaconda.org/conda-forge/lightning) [![codecov](https://codecov.io/gh/Lightning-AI/pytorch-lightning/graph/badge.svg?token=SmzX8mnKlA)](https://codecov.io/gh/Lightning-AI/pytorch-lightning) [![Discord](https://img.shields.io/discord/1077906959069626439?style=plastic)](https://discord.gg/VptPCZkGNa) ![GitHub commit activity](https://img.shields.io/github/commit-activity/w/lightning-ai/lightning) [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/pytorch-lightning/blob/master/LICENSE)

  Get started

 

Why PyTorch Lightning?

Training models in plain PyTorch is tedious and error-prone: you have to manually handle things like backprop, mixed precision, multi-GPU, and distributed training, often rewriting code for every new project. PyTorch Lightning organizes PyTorch code to automate those complexities so you can focus on your model and data, while keeping full control and scaling from CPU to multi-node without changing your core code. And if you do want to handle those details yourself, you can still opt into a more DIY approach.

Fun analogy: if PyTorch is JavaScript, PyTorch Lightning is ReactJS or NextJS.

Lightning has 2 core packages

PyTorch Lightning: Train and deploy PyTorch at scale.
Lightning Fabric: Expert control.

Lightning gives you granular control over how much abstraction you want to add over PyTorch.
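
For instance, if you want to own the optimization step yourself, you can turn off automatic optimization inside a LightningModule. A minimal sketch (the module name and layer sizes are arbitrary placeholders):

```python
import torch
import lightning as L


class ManualOptModule(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        # opt out of Lightning's automatic optimization
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        x, y = batch
        opt = self.optimizers()
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        # backward and step are now yours to call
        self.manual_backward(loss)
        opt.step()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```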

 

Quick start

Install Lightning:

```bash
pip install lightning
```

Advanced install options

#### Install with optional dependencies

```bash
pip install lightning['extra']
```

#### Conda

```bash
conda install lightning -c conda-forge
```

#### Install stable version

Install future release from the source

```bash
pip install https://github.com/Lightning-AI/lightning/archive/refs/heads/release/stable.zip -U
```

#### Install bleeding-edge

Install nightly from the source (no guarantees)

```bash
pip install https://github.com/Lightning-AI/lightning/archive/refs/heads/master.zip -U
```

or from testing PyPI

```bash
pip install -iU https://test.pypi.org/simple/ pytorch-lightning
```
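
Whichever install route you choose, a quick sanity check is to import the package and print its version:

```python
import lightning

print(lightning.__version__)  # version depends on the channel you installed from
```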

PyTorch Lightning example

Define the training workflow. Here's a toy example (explore real examples):

```python
# main.py
# ! pip install torchvision
import torch, torch.nn as nn, torch.utils.data as data, torchvision as tv, torch.nn.functional as F
import lightning as L

# --------------------------------
# Step 1: Define a LightningModule
# --------------------------------
# A LightningModule (nn.Module subclass) defines a full system
# (ie: an LLM, diffusion model, autoencoder, or simple image classifier).


class LitAutoEncoder(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))

    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward
        x, _ = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer


# -------------------
# Step 2: Define data
# -------------------
dataset = tv.datasets.MNIST(".", download=True, transform=tv.transforms.ToTensor())
train, val = data.random_split(dataset, [55000, 5000])

# -------------------
# Step 3: Train
# -------------------
autoencoder = LitAutoEncoder()
trainer = L.Trainer()
trainer.fit(autoencoder, data.DataLoader(train), data.DataLoader(val))
```

Run the model on your terminal:

```bash
pip install torchvision
python main.py
```
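
One caveat: the toy example passes a validation DataLoader to `trainer.fit`, but `LitAutoEncoder` defines no `validation_step`, so Lightning will skip the validation loop. A minimal sketch of the missing hook, mirroring `training_step`:

```python
# Add to LitAutoEncoder so the val DataLoader is actually used.
def validation_step(self, batch, batch_idx):
    x, _ = batch
    x = x.view(x.size(0), -1)
    z = self.encoder(x)
    x_hat = self.decoder(z)
    val_loss = F.mse_loss(x_hat, x)
    self.log("val_loss", val_loss)
```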

 

Why PyTorch Lightning?

PyTorch Lightning is just organized PyTorch - Lightning disentangles PyTorch code to decouple the science from the engineering.

(Figure: the same model code shown as plain PyTorch and reorganized as PyTorch Lightning.)

 


Examples

Explore various types of training possible with PyTorch Lightning. Pretrain and finetune ANY kind of model to perform ANY task like classification, segmentation, summarization and more:

| Task | Description | Run |
| :--- | :--- | :--- |
| Hello world | Pretrain - Hello world example | Open In Studio |
| Image classification | Finetune - ResNet-34 model to classify images of cars | Open In Studio |
| Image segmentation | Finetune - ResNet-50 model to segment images | Open In Studio |
| Object detection | Finetune - Faster R-CNN model to detect objects | Open In Studio |
| Text classification | Finetune - text classifier (BERT model) | Open In Studio |
| Text summarization | Finetune - text summarization (Hugging Face transformer model) | Open In Studio |
| Audio generation | Finetune - audio generator (transformer model) | Open In Studio |
| LLM finetuning | Finetune - LLM (Meta Llama 3.1 8B) | Open In Studio |
| Image generation | Pretrain - Image generator (diffusion model) | Open In Studio |
| Recommendation system | Train - recommendation system (factorization and embedding) | Open In Studio |
| Time-series forecasting | Train - Time-series forecasting with LSTM | Open In Studio |


Advanced features

Lightning has more than 40 advanced features designed for professional AI research at scale.

Here are some examples:

Train on 1000s of GPUs without code changes

```python
from lightning import Trainer

# 8 GPUs
# no code changes needed
trainer = Trainer(accelerator="gpu", devices=8)

# 256 GPUs
trainer = Trainer(accelerator="gpu", devices=8, num_nodes=32)
```

Train on other accelerators like TPUs without code changes

```python
# no code changes needed
trainer = Trainer(accelerator="tpu", devices=8)
```

16-bit precision

```python
# no code changes needed
trainer = Trainer(precision=16)
```

Experiment managers

```python
from lightning import loggers

# tensorboard
trainer = Trainer(logger=loggers.TensorBoardLogger("logs/"))

# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())

# comet
trainer = Trainer(logger=loggers.CometLogger())

# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())

# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())

# ... and dozens more
```

Early Stopping

```python
from lightning.pytorch.callbacks import EarlyStopping

es = EarlyStopping(monitor="val_loss")
trainer = Trainer(callbacks=[es])
```

Checkpointing

```python
from lightning.pytorch.callbacks import ModelCheckpoint

checkpointing = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpointing])
```

Export to torchscript (JIT) (production use)

```python
# torchscript
autoencoder = LitAutoEncoder()
torch.jit.save(autoencoder.to_torchscript(), "model.pt")
```

Export to ONNX (production use)

```python
import os
import tempfile

# onnx
with tempfile.NamedTemporaryFile(suffix=".onnx", delete=False) as tmpfile:
    autoencoder = LitAutoEncoder()
    input_sample = torch.randn((1, 28 * 28))  # match the encoder's input size
    autoencoder.to_onnx(tmpfile.name, input_sample, export_params=True)
    os.path.isfile(tmpfile.name)
```
______________________________________________________________________

## Advantages over unstructured PyTorch

- Models become hardware agnostic
- Code is clear to read because engineering code is abstracted away
- Easier to reproduce
- Make fewer mistakes because Lightning handles the tricky engineering
- Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
- Lightning has dozens of integrations with popular machine learning tools
- [Tested rigorously with every new PR](https://github.com/Lightning-AI/lightning/tree/master/tests). We test every combination of PyTorch and Python supported versions, every OS, multi GPUs and even TPUs
- Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch)

______________________________________________________________________
Read the PyTorch Lightning docs

   

Lightning Fabric: Expert control

Run on any device at any scale with expert-level control over the PyTorch training loop and scaling strategy. You can even write your own Trainer.

Fabric is designed for the most complex models: foundation model scaling, LLMs, diffusion, transformers, reinforcement learning, and active learning, at any size.

What to change:

```diff
+ import lightning as L
  import torch; import torchvision as tv

  dataset = tv.datasets.CIFAR10("data", download=True, train=True, transform=tv.transforms.ToTensor())

+ fabric = L.Fabric()
+ fabric.launch()

  model = tv.models.resnet18()
  optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
- device = "cuda" if torch.cuda.is_available() else "cpu"
- model.to(device)
+ model, optimizer = fabric.setup(model, optimizer)

  dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
+ dataloader = fabric.setup_dataloaders(dataloader)

  model.train()
  num_epochs = 10
  for epoch in range(num_epochs):
      for batch in dataloader:
          inputs, labels = batch
-         inputs, labels = inputs.to(device), labels.to(device)
          optimizer.zero_grad()
          outputs = model(inputs)
          loss = torch.nn.functional.cross_entropy(outputs, labels)
-         loss.backward()
+         fabric.backward(loss)
          optimizer.step()
          print(loss.data)
```

Resulting Fabric code (copy me!):

```python
import lightning as L
import torch
import torchvision as tv

dataset = tv.datasets.CIFAR10("data", download=True, train=True, transform=tv.transforms.ToTensor())

fabric = L.Fabric()
fabric.launch()

model = tv.models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model, optimizer = fabric.setup(model, optimizer)

dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
dataloader = fabric.setup_dataloaders(dataloader)

model.train()
num_epochs = 10
for epoch in range(num_epochs):
    for batch in dataloader:
        inputs, labels = batch
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = torch.nn.functional.cross_entropy(outputs, labels)
        fabric.backward(loss)
        optimizer.step()
        print(loss.data)
```

Key features

Easily switch from running on CPU to GPU (Apple Silicon, CUDA, …), TPU, multi-GPU or even multi-node training

```python
from lightning import Fabric

# Use your available hardware
# no code changes needed
fabric = Fabric()

# Run on GPUs (CUDA or MPS)
fabric = Fabric(accelerator="gpu")

# 8 GPUs
fabric = Fabric(accelerator="gpu", devices=8)

# 256 GPUs, multi-node
fabric = Fabric(accelerator="gpu", devices=8, num_nodes=32)

# Run on TPUs
fabric = Fabric(accelerator="tpu")
```

Use state-of-the-art distributed training strategies (DDP, FSDP, DeepSpeed) and mixed precision out of the box

```python
# Use state-of-the-art distributed training techniques
fabric = Fabric(strategy="ddp")
fabric = Fabric(strategy="deepspeed")
fabric = Fabric(strategy="fsdp")

# Switch the precision
fabric = Fabric(precision="16-mixed")
fabric = Fabric(precision="64")
```

All the device logic boilerplate is handled for you

```diff
# no more of this!
- model.to(device)
- batch.to(device)
```

Build your own custom Trainer using Fabric primitives for training, checkpointing, logging, and more

```python
import lightning as L
import torch


class MyCustomTrainer:
    def __init__(self, accelerator="auto", strategy="auto", devices="auto", precision="32-true"):
        self.fabric = L.Fabric(accelerator=accelerator, strategy=strategy, devices=devices, precision=precision)

    def fit(self, model, optimizer, dataloader, max_epochs, loss_fn=torch.nn.functional.cross_entropy):
        self.fabric.launch()
        model, optimizer = self.fabric.setup(model, optimizer)
        dataloader = self.fabric.setup_dataloaders(dataloader)
        model.train()

        for epoch in range(max_epochs):
            for batch in dataloader:
                input, target = batch
                optimizer.zero_grad()
                output = model(input)
                loss = loss_fn(output, target)
                self.fabric.backward(loss)
                optimizer.step()
```

You can find a more extensive example in our [examples](examples/fabric/build_your_own_trainer)
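
A quick usage sketch for the custom trainer, reusing the CIFAR10/resnet18 setup from the Fabric example above (model, data, and hyperparameters are illustrative):

```python
import torch
import torchvision as tv

# Illustrative usage of the MyCustomTrainer sketch above.
dataset = tv.datasets.CIFAR10("data", download=True, train=True, transform=tv.transforms.ToTensor())
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

model = tv.models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

trainer = MyCustomTrainer(accelerator="auto", devices="auto")
trainer.fit(model, optimizer, dataloader, max_epochs=2)
```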

Read the Lightning Fabric docs

   

Examples

Self-supervised Learning
Convolutional Architectures
Reinforcement Learning
GANs
Classic ML

   

Continuous Integration

Lightning is rigorously tested across multiple CPUs, GPUs and TPUs and against major Python and PyTorch versions.

*Codecov coverage is above 90%, but build delays may show a lower number.
Current build statuses
| System / PyTorch ver. | 1.13 | 2.0 | 2.1 |
| :---: | :---: | :---: | :---: |
| Linux py3.9 \[GPUs\] | | | [![Build Status](https://dev.azure.com/Lightning-AI/lightning/_apis/build/status%2Fpytorch-lightning%20%28GPUs%29?branchName=master)](https://dev.azure.com/Lightning-AI/lightning/_build/latest?definitionId=24&branchName=master) |
| Linux (multiple Python versions) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) |
| OSX (multiple Python versions) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) |
| Windows (multiple Python versions) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) | [![Test PyTorch](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml/badge.svg)](https://github.com/Lightning-AI/lightning/actions/workflows/ci-tests-pytorch.yml) |

   

Community

The lightning community is maintained by

  • 10+ core contributors, a mix of professional engineers, research scientists, and Ph.D. students from top AI labs.
  • 800+ community contributors.

Want to help us build Lightning and reduce boilerplate for thousands of researchers? Learn how to make your first contribution here.

Lightning is also part of the PyTorch ecosystem, which requires projects to have solid testing, documentation, and support.

Asking for help

If you have any questions please:

  1. Read the docs.
  2. Search through existing Discussions, or add a new question.
  3. Join our Discord.

Owner

  • Name: ⚡️ Lightning AI
  • Login: Lightning-AI
  • Kind: organization
  • Location: United States of America

Turn ideas into AI, Lightning fast. Creators of PyTorch Lightning, Lightning AI Studio, TorchMetrics, Fabric, Lit-GPT, Lit-LLaMA

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you want to cite the framework, feel free to use this (but only if you loved it 😊)"
title: "PyTorch Lightning"
abstract: "The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate."
date-released: 2019-03-30
authors:
  - family-names: "Falcon"
    given-names: "William"
  - name: "The PyTorch Lightning team"
version: 1.4
doi: 10.5281/zenodo.3828935
license: "Apache-2.0"
url: "https://www.pytorchlightning.ai"
repository-code: "https://github.com/Lightning-AI/lightning"
keywords:
  - machine learning
  - deep learning
  - artificial intelligence

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 10,514
  • Total Committers: 977
  • Avg Commits per committer: 10.762
  • Development Distribution Score (DDS): 0.787 (checked in the sketch after these statistics)
Past Year
  • Commits: 315
  • Committers: 85
  • Avg Commits per committer: 3.706
  • Development Distribution Score (DDS): 0.816
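
These numbers are consistent with defining DDS as one minus the top committer's share of commits; the all-time figures reproduce it exactly (a quick check against the committer table below):

```python
# DDS = 1 - (top committer's commits / total commits)
total_commits = 10_514
top_committer_commits = 2_238  # William Falcon, from the table below

dds = 1 - top_committer_commits / total_commits
print(round(dds, 3))  # 0.787, matching the reported all-time score
```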
Top Committers
Name Email Commits
William Falcon w****7@c****u 2,238
Jirka Borovec B****a 1,339
Adrian Wälchli a****i@g****m 1,218
Carlos Mocholí c****i@g****m 1,014
thomas chaton t****s@g****i 412
dependabot[bot] 4****] 340
Rohit Gupta r****8@g****m 306
Kaushik B 4****1 218
Sean Naren s****n@g****i 151
ananthsub a****m@g****m 149
Akihiro Nitta n****a@a****m 144
Ethan Harris e****s@g****m 113
Justus Schock 1****k 101
Danielle Pintz 3****z 83
Nicki Skafte s****i@g****m 76
edenlightning 6****g 65
Sean Naren s****n@g****m 61
Mauricio Villegas m****e@y****m 58
Luca Antiga l****a@g****m 45
otaj 6****j 44
Jeff Yang y****f@o****m 43
four4fish 8****h 42
Adrian Wälchli a****i@s****h 33
PL Ghost 7****t 31
Victor Prins v****s@o****m 24
jjenniferdai 8****i 24
Nic Eggert n****c@e****o 24
Sherin Thomas s****n@l****i 24
Kushashwa Ravi Shrimali k****i@g****m 23
Krishna Kalyan k****3@g****m 22
and 947 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1,082
  • Total pull requests: 1,427
  • Average time to close issues: 5 months
  • Average time to close pull requests: 24 days
  • Total issue authors: 832
  • Total pull request authors: 237
  • Average comments per issue: 3.34
  • Average comments per pull request: 1.7
  • Merged pull requests: 974
  • Bot issues: 0
  • Bot pull requests: 207
Past Year
  • Issues: 348
  • Pull requests: 692
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 8 days
  • Issue authors: 299
  • Pull request authors: 118
  • Average comments per issue: 0.62
  • Average comments per pull request: 1.36
  • Merged pull requests: 464
  • Bot issues: 0
  • Bot pull requests: 175
Top Authors
Issue Authors
  • awaelchli (23)
  • williamFalcon (14)
  • carmocca (11)
  • adosar (8)
  • clumsy (8)
  • heth27 (6)
  • Borda (6)
  • loretoparisi (5)
  • svnv-svsv-jm (5)
  • JohnHerry (5)
  • tchaton (5)
  • profPlum (5)
  • YuyaWake (5)
  • Peiffap (4)
  • Yann-CV (4)
Pull Request Authors
  • awaelchli (238)
  • Borda (206)
  • dependabot[bot] (200)
  • tchaton (61)
  • pl-ghost (56)
  • lantiga (45)
  • williamFalcon (27)
  • fnhirwa (21)
  • carmocca (18)
  • mauvilsa (16)
  • 01AbhiSingh (15)
  • SkafteNicki (15)
  • KAVYANSHTYAGI (14)
  • clumsy (11)
  • matsumotosan (10)
Top Labels
Issue Labels
needs triage (597) bug (585) feature (250) help wanted (120) docs (95) question (81) ver: 2.5.x (53) ver: 2.1.x (51) ver: 2.2.x (50) ver: 2.0.x (47) ver: 2.4.x (38) won't fix (35) refactor (25) repro needed (25) good first issue (25) app (23) 3rd party (21) priority: 1 (17) lightningcli (17) data handling (16) strategy: ddp (15) fabric (14) waiting on author (13) strategy: deepspeed (13) logging (13) checkpointing (13) pl (12) working as intended (11) priority: 0 (11) tuner (9)
Pull Request Labels
pl (716) ci (432) fabric (401) ready (321) docs (307) dependencies (272) package (105) community (102) bug (68) data (65) app (61) tests (61) fun (60) feature (54) has conflicts (54) dockers (53) checkpointing (38) release (34) won't fix (32) waiting on author (26) refactor (24) examples (23) code quality (19) example (18) logger (16) strategy: fsdp (14) run TPU (13) 3rd party (13) priority: 0 (9) breaking change (9)

Packages

  • Total packages: 6
  • Total downloads:
    • pypi 14,090,710 last-month
  • Total docker downloads: 27,942,303
  • Total dependent packages: 834
    (may contain duplicates)
  • Total dependent repositories: 10,072
    (may contain duplicates)
  • Total versions: 468
  • Total maintainers: 4
  • Total advisories: 6
pypi.org: pytorch-lightning

PyTorch Lightning is the lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.

  • Versions: 215
  • Dependent Packages: 574
  • Dependent Repositories: 9,165
  • Downloads: 9,679,535 Last month
  • Docker Downloads: 26,469,608
Rankings
Dependent packages count: 0.0%
Stargazers count: 0.1%
Dependent repos count: 0.1%
Downloads: 0.1%
Forks count: 0.2%
Average: 0.2%
Docker downloads count: 0.6%
Maintainers (2)
Last synced: 6 months ago
pypi.org: lightning

The Deep Learning framework to train, deploy, and ship AI products Lightning fast.

  • Versions: 157
  • Dependent Packages: 251
  • Dependent Repositories: 903
  • Downloads: 4,275,802 Last month
  • Docker Downloads: 1,472,624
Rankings
Stargazers count: 0.1%
Forks count: 0.2%
Average: 0.4%
Dependent packages count: 0.4%
Dependent repos count: 0.4%
Downloads: 0.4%
Docker downloads count: 0.8%
Maintainers (3)
Last synced: 6 months ago
pypi.org: lightning-fabric
  • Versions: 51
  • Dependent Packages: 7
  • Dependent Repositories: 4
  • Downloads: 135,373 Last month
  • Docker Downloads: 71
Rankings
Stargazers count: 0.1%
Forks count: 0.2%
Downloads: 0.8%
Dependent packages count: 1.4%
Average: 2.3%
Docker downloads count: 4.1%
Dependent repos count: 7.5%
Maintainers (2)
Last synced: 6 months ago
spack.io: py-lightning

The deep learning framework to pretrain, finetune and deploy AI models.

  • Versions: 41
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Average: 28.6%
Dependent packages count: 57.3%
Maintainers (1)
Last synced: 6 months ago
anaconda.org: lightning-cloud

Lightning AI Command Line Interface

  • Versions: 1
  • Dependent Packages: 1
  • Dependent Repositories: 0
Rankings
Dependent packages count: 51.0%
Average: 55.4%
Dependent repos count: 59.9%
Last synced: 6 months ago
anaconda.org: lightning

Use Lightning Apps to build everything from production-ready, multi-cloud ML systems to simple research demos.

  • Versions: 3
  • Dependent Packages: 1
  • Dependent Repositories: 0
Rankings
Dependent packages count: 51.0%
Average: 55.4%
Dependent repos count: 59.9%
Last synced: 6 months ago