fasterai
FasterAI: Prune and Distill your models with FastAI and PyTorch
Science Score: 57.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 3 DOI reference(s) in README
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.1%) to scientific vocabulary
Repository
FasterAI: Prune and Distill your models with FastAI and PyTorch
Basic Info
- Host: GitHub
- Owner: FasterAI-Labs
- License: apache-2.0
- Language: Jupyter Notebook
- Default Branch: master
- Homepage: https://fasterai-labs.com
- Size: 35 MB
Statistics
- Stars: 248
- Watchers: 5
- Forks: 19
- Open Issues: 3
- Releases: 5
Metadata Files
README.md
Features • Installation • Tutorials • Community • Citing • License
Overview
FasterAI is a PyTorch-based library that makes neural networks smaller, faster, and more efficient through state-of-the-art compression techniques. The library provides simple but powerful implementations of pruning, knowledge distillation, quantization, and other network optimization methods that can be applied with just a few lines of code.
Why compress your models with fasterai?
- Reduce model size by up to 90% with minimal accuracy loss
- Speed up inference for deployment on edge devices
- Lower energy consumption for more sustainable AI
- Simplify architectures while maintaining performance
Features
1. Sparsification
Make your model sparse by replacing selected weights with zeros using Sparsifier or SparsifyCallback.
|Parameter|Description|Options|
|---|---|---|
|sparsity|Percentage of weights to zero out|0-100%|
|granularity|Level at which to apply sparsity|'weight', 'vector', 'kernel', 'filter'|
|context|Scope of sparsification|'local' (per layer), 'global' (whole model)|
|criteria|Method to select weights|'magnitude', 'movement', 'gradient', etc.|
|schedule|How sparsity evolves during training|'one_shot', 'iterative', 'gradual', etc.|
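The callback form of sparsification is shown in the Quick Start below. For one-shot use outside a training loop, the Sparsifier class can be applied directly. Here is a minimal sketch under assumptions: the `sparsify_model` method name and the string-valued arguments are guesses based on the table above, not verified signatures, so check the documentation before relying on them.

```python
from torchvision.models import resnet18
from fasterai.sparse.all import *

model = resnet18(weights="DEFAULT")

# One-shot, post-training sparsification: zero out 50% of each layer's
# weights, selecting the smallest-magnitude ones layer by layer.
# NOTE: `sparsify_model` is an assumed method name; check the docs.
sparsifier = Sparsifier(model, granularity='weight', context='local', criteria='magnitude')
sparsifier.sparsify_model(sparsity=50)
```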
2. Pruning
Remove zero-weight nodes from your network structure using Pruner or PruneCallback.
|Parameter|Description|Options|
|---|---|---|
|pruning_ratio|Percentage of weights to remove|0-100%|
|context|Scope of pruning|'local' (per layer), 'global' (whole model)|
|criteria|Method to select weights|'magnitude', 'movement', 'gradient', etc.|
|schedule|How pruning evolves during training|'one_shot', 'iterative', 'gradual', etc.|
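For illustration, a hedged sketch of the callback form, mirroring the sparsification Quick Start below. The `fasterai.prune.all` module path is an assumption, and the string-valued options come from the table above; the actual API may expect imported criteria/schedule objects instead of strings.

```python
from fastai.vision.all import *
from fasterai.prune.all import *  # assumed module path, mirroring fasterai.sparse.all

# Standard fastai data setup (pets example), just to make the sketch runnable.
path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path/"images"),
    label_func=lambda f: f.name[0].isupper(), item_tfms=Resize(224))

learn = vision_learner(dls, resnet18, metrics=accuracy)
# Remove 50% of the network structure, scored globally by magnitude,
# increasing the ratio gradually over training.
learn.fit_one_cycle(5, cbs=PruneCallback(pruning_ratio=50,
                                         context='global',
                                         criteria='magnitude',
                                         schedule='gradual'))
```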
3. Knowledge Distillation
Transfer knowledge from a large teacher to a smaller student using KnowledgeDistillationCallback.
|Parameter|Description|Options|
|---|---|---|
|teacher|Teacher model|Any PyTorch model|
|loss|Distillation loss function|'SoftTarget', 'Logits', 'Attention', etc.|
|activations_student|Student layers to match|Layer names as strings|
|activations_teacher|Teacher layers to match|Layer names as strings|
|weight|Balancing weight for distillation|0.0-1.0|
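A hedged sketch of distillation during training: the `fasterai.distill.all` module path is an assumption, and `loss='SoftTarget'` may need to be an imported loss object rather than the string listed in the table above.

```python
from fastai.vision.all import *
from fasterai.distill.all import *  # assumed module path

path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path/"images"),
    label_func=lambda f: f.name[0].isupper(), item_tfms=Resize(224))

# A larger pretrained teacher guiding a smaller student.
teacher = vision_learner(dls, resnet34, metrics=accuracy)
teacher.fit_one_cycle(5)

student = vision_learner(dls, resnet18, metrics=accuracy)
# Blend the task loss with a soft-target loss against the teacher's outputs;
# `weight` balances the two terms (0.0 = task loss only, 1.0 = distillation only).
student.fit_one_cycle(5, cbs=KnowledgeDistillationCallback(teacher=teacher.model,
                                                           loss='SoftTarget',
                                                           weight=0.5))
```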
4. Regularization
Push weights toward zero during training using RegularizeCallback.
|Parameter|Description|Options|
|---|---|---|
|criteria|Regularization criteria|Same as sparsification criteria|
|granularity|Level of regularization|Same as sparsification granularity|
|weight|Regularization strength|Floating point value|
|schedule|How regularization evolves during training|'one_shot', 'iterative', 'gradual', etc.|
|layertypes|Layer types to regularize|'nn.Conv2d', 'nn.Linear', etc.|
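A hedged sketch of regularized training: the `fasterai.regularize.all` module path is an assumption, and whether `layertypes` takes class objects (as shown) or strings like 'nn.Conv2d' is a guess based on the table above.

```python
import torch.nn as nn
from fastai.vision.all import *
from fasterai.regularize.all import *  # assumed module path

path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path/"images"),
    label_func=lambda f: f.name[0].isupper(), item_tfms=Resize(224))

learn = vision_learner(dls, resnet18, metrics=accuracy)
# Penalize whole convolutional filters by magnitude so they decay toward zero
# during training, making the network easier to sparsify or prune afterwards.
learn.fit_one_cycle(5, cbs=RegularizeCallback(criteria='magnitude',
                                              granularity='filter',
                                              weight=1e-4,
                                              layertypes=[nn.Conv2d]))
```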
5. Quantization
Reduce the precision of weights and activations using Quantizer or QuantizeCallback.
| Parameter | Description | Options |
| ---------------- | ----------------------------- | -------------------------- |
| backend | Target backend | 'x86', 'qnnpack' |
| method | Quantization method | 'static', 'dynamic', 'qat' |
| use_per_tensor | Force per-tensor quantization | True/False |
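A minimal post-training quantization sketch, assuming the parameter names from the table above: the `fasterai.quantize.all` module path and the `quantize(...)` method name are placeholders, so consult the docs for the exact call and any calibration arguments static quantization requires.

```python
from torchvision.models import resnet18
from fasterai.quantize.all import *  # assumed module path

model = resnet18(weights="DEFAULT")

# Post-training static quantization targeting x86 CPUs: weights and
# activations are converted to 8-bit integers.
# NOTE: `quantize` is an assumed method name; check the docs.
quantizer = Quantizer(backend='x86', method='static', use_per_tensor=False)
q_model = quantizer.quantize(model)
```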
Quick Start
This is how easy it is to induce sparsification in your PyTorch model:
```python
from fasterai.sparse.all import *

learn = vision_learner(dls, model)
learn.fit_one_cycle(n_epochs, cbs=SparsifyCallback(sparsity, granularity, context, criteria, schedule))
```
Installation
```
pip install git+https://github.com/FasterAI-Labs/fasterai.git
```
or
```
pip install fasterai
```
Tutorials
- Get Started with FasterAI
- Create your own pruning schedule
- Find winning tickets using the Lottery Ticket Hypothesis
- Use Knowledge Distillation to help a student model to reach higher performance
- Sparsify Transformers
- Many more!
Join the community
Join our Discord server to meet other FasterAI users and share your projects!
Citing
```
@software{Hubens,
  author    = {Nathan Hubens},
  title     = {fasterai},
  year      = 2022,
  publisher = {Zenodo},
  version   = {v0.1.6},
  doi       = {10.5281/zenodo.6469868},
  url       = {https://doi.org/10.5281/zenodo.6469868}
}
```
License
Apache-2.0 License.
Owner
- Name: FasterAI-Labs
- Login: FasterAI-Labs
- Kind: organization
- Repositories: 1
- Profile: https://github.com/FasterAI-Labs
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Hubens"
    given-names: "Nathan"
title: "fasterai"
version: 0.1.8
doi: 10.5281/zenodo.6469868
date-released: 2022-06-06
license: "Apache-2.0"
url: "https://nathanhubens.github.io/fasterai/"
repository-code: "https://github.com/nathanhubens/fasterai"
keywords:
  - machine learning
  - deep learning
  - artificial intelligence
  - pruning
  - knowledge distillation
  - compression
```
GitHub Events
Total
- Create event: 3
- Issues event: 1
- Release event: 1
- Watch event: 4
- Issue comment event: 2
- Push event: 33
- Pull request event: 5
- Fork event: 1
Last Year
- Create event: 3
- Issues event: 1
- Release event: 1
- Watch event: 4
- Issue comment event: 2
- Push event: 33
- Pull request event: 5
- Fork event: 1
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 1
- Total pull requests: 3
- Average time to close issues: N/A
- Average time to close pull requests: about 9 hours
- Total issue authors: 1
- Total pull request authors: 2
- Average comments per issue: 0.0
- Average comments per pull request: 0.33
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 3
- Average time to close issues: N/A
- Average time to close pull requests: about 9 hours
- Issue authors: 1
- Pull request authors: 2
- Average comments per issue: 0.0
- Average comments per pull request: 0.33
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- RensDimmendaal (1)
Pull Request Authors
- nathanhubens (2)
- deven367 (1)
Packages
- Total packages: 1
- Total downloads: 50 last-month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 1
- Total versions: 21
- Total maintainers: 1
pypi.org: fasterai
A library to make neural networks lighter and faster with fastai
- Homepage: https://github.com/FasterAI-Labs/fasterai/tree/master/
- Documentation: https://fasterai.readthedocs.io/
- License: Apache Software License 2.0
- Latest release: 0.2.7 (published 11 months ago)
Dependencies
- fastai/workflows/quarto-ghp master composite
- fastai/workflows/nbdev-ci master composite