fasterai

FasterAI: Prune and Distill your models with FastAI and PyTorch

https://github.com/fasterai-labs/fasterai

Science Score: 57.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.1%) to scientific vocabulary

Keywords

compression fastai knowledge-distillation pruning pytorch
Last synced: 6 months ago

Repository

FasterAI: Prune and Distill your models with FastAI and PyTorch

Basic Info
  • Host: GitHub
  • Owner: FasterAI-Labs
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: master
  • Homepage: https://fasterai-labs.com
  • Size: 35 MB
Statistics
  • Stars: 248
  • Watchers: 5
  • Forks: 19
  • Open Issues: 3
  • Releases: 5
Topics
compression fastai knowledge-distillation pruning pytorch
Created about 5 years ago · Last pushed 8 months ago
Metadata Files
Readme · Contributing · License · Citation

README.md


Features · Installation · Tutorials · Community · Citing · License

Overview

fasterai is a PyTorch-based library that makes neural networks smaller, faster, and more efficient through state-of-the-art compression techniques. The library provides simple but powerful implementations of pruning, knowledge distillation, quantization, and other network optimization methods that can be applied with just a few lines of code.

Why compress your models with fasterai?

  • Reduce model size by up to 90% with minimal accuracy loss
  • Speed up inference for deployment on edge devices
  • Lower energy consumption for more sustainable AI
  • Simplify architectures while maintaining performance
Performance Improvements

Features

1. Sparsification


Make your model sparse by replacing selected weights with zeros using Sparsifier or SparsifyCallback.

| Parameter | Description | Options |
|---|---|---|
| sparsity | Percentage of weights to zero out | 0-100% |
| granularity | Level at which to apply sparsity | 'weight', 'vector', 'kernel', 'filter' |
| context | Scope of sparsification | 'local' (per layer), 'global' (whole model) |
| criteria | Method to select weights | 'magnitude', 'movement', 'gradient', etc. |
| schedule | How sparsity evolves during training | 'one_shot', 'iterative', 'gradual', etc. |
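
For concreteness, here is a minimal sketch of how these parameters could be combined, assuming the string options listed above can be passed directly to SparsifyCallback and that `dls` and `model` already exist as in the Quick Start further down:

```python
from fastai.vision.all import *    # fastai Learner and training loop
from fasterai.sparse.all import *  # provides SparsifyCallback

# Sketch: zero out 50% of filters, ranked globally across the model by magnitude,
# ramping the sparsity up iteratively during training.
# `dls` (fastai DataLoaders) and `model` (architecture) are assumed to exist.
sp_cb = SparsifyCallback(sparsity=50,
                         granularity='filter',
                         context='global',
                         criteria='magnitude',
                         schedule='iterative')

learn = vision_learner(dls, model)
learn.fit_one_cycle(5, cbs=sp_cb)
```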

2. Pruning


Remove zero-weight nodes from your network structure using Pruner or PruneCallback.

| Parameter | Description | Options |
|---|---|---|
| pruning_ratio | Percentage of weights to remove | 0-100% |
| context | Scope of pruning | 'local' (per layer), 'global' (whole model) |
| criteria | Method to select weights | 'magnitude', 'movement', 'gradient', etc. |
| schedule | How pruning evolves during training | 'one_shot', 'iterative', 'gradual', etc. |
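
As a rough sketch, structural pruning can follow the same callback pattern; the import path and the string options are assumptions based on the table above, and `dls` and `model` are assumed to exist as in the Quick Start:

```python
from fastai.vision.all import *
from fasterai.prune.all import *   # assumed import path for PruneCallback

# Sketch: physically remove 30% of the weights, selected per layer by magnitude,
# in a single pass at the start of training ('one_shot' schedule).
prune_cb = PruneCallback(pruning_ratio=30,
                         context='local',
                         criteria='magnitude',
                         schedule='one_shot')

learn = vision_learner(dls, model)
learn.fit_one_cycle(5, cbs=prune_cb)
```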

3. Knowledge Distillation


Transfer knowledge from a large teacher to a smaller student using KnowledgeDistillationCallback.

| Parameter | Description | Options |
|---|---|---|
| teacher | Teacher model | Any PyTorch model |
| loss | Distillation loss function | 'SoftTarget', 'Logits', 'Attention', etc. |
| activations_student | Student layers to match | Layer names as strings |
| activations_teacher | Teacher layers to match | Layer names as strings |
| weight | Balancing weight for distillation | 0.0-1.0 |
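
A hedged sketch of the callback in use, assuming a fasterai.distill module, that the loss can be selected by the name listed above, and that `teacher` is an already-trained PyTorch model:

```python
from fastai.vision.all import *
from fasterai.distill.all import *   # assumed import path for KnowledgeDistillationCallback

# Sketch: train a small student while matching the soft targets of a larger,
# already-trained teacher; the distillation term gets half of the total loss weight.
# `dls`, `student_model`, and `teacher` are assumed to exist.
kd_cb = KnowledgeDistillationCallback(teacher=teacher,
                                      loss='SoftTarget',
                                      weight=0.5)

learn = vision_learner(dls, student_model)
learn.fit_one_cycle(5, cbs=kd_cb)
```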

4. Regularization


Push weights toward zero during training using RegularizeCallback.

| Parameter | Description | Options |
|---|---|---|
| criteria | Regularization criteria | Same as sparsification criteria |
| granularity | Level of regularization | Same as sparsification granularity |
| weight | Regularization strength | Floating point value |
| schedule | How regularization evolves during training | 'one_shot', 'iterative', 'gradual', etc. |
| layer_type | Layer types to regularize | 'nn.Conv2d', 'nn.Linear', etc. |
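
A sketch of how this might look, assuming a fasterai.regularize module and that layer_type takes a PyTorch layer class; `dls` and `model` are assumed to exist as in the Quick Start:

```python
from torch import nn
from fastai.vision.all import *
from fasterai.regularize.all import *   # assumed import path for RegularizeCallback

# Sketch: penalize the magnitude of whole convolution filters during training,
# nudging them toward zero so that later pruning removes them at little cost.
reg_cb = RegularizeCallback(criteria='magnitude',
                            granularity='filter',
                            weight=0.01,           # regularization strength
                            layer_type=nn.Conv2d)  # parameter name taken from the table above

learn = vision_learner(dls, model)
learn.fit_one_cycle(5, cbs=reg_cb)
```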

5. Quantization


Reduce the precision of weights and activations using Quantizer or QuantizeCallback.

| Parameter | Description | Options |
|---|---|---|
| backend | Target backend | 'x86', 'qnnpack' |
| method | Quantization method | 'static', 'dynamic', 'qat' |
| use_per_tensor | Force per-tensor quantization | True/False |
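
A sketch, assuming a fasterai.quantize module and that QuantizeCallback accepts exactly the three parameters above; `dls` and `model` are assumed to exist as in the Quick Start:

```python
from fastai.vision.all import *
from fasterai.quantize.all import *   # assumed import path for QuantizeCallback

# Sketch: quantization-aware training (QAT) targeting the x86 backend,
# forcing per-tensor quantization for broader operator support.
q_cb = QuantizeCallback(backend='x86',
                        method='qat',
                        use_per_tensor=True)

learn = vision_learner(dls, model)
learn.fit_one_cycle(5, cbs=q_cb)
```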


Quick Start

This is how easy it is to induce Sparsification in your PyTorch model:

```python
from fasterai.sparse.all import *

learn = vision_learner(dls, model)
learn.fit_one_cycle(n_epochs, cbs=SparsifyCallback(sparsity, granularity, context, criteria, schedule))
```
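
For readers new to fastai, here is a sketch of where `dls` and `model` could come from, using the standard fastai PETS example; everything outside SparsifyCallback is plain fastai, and the parameter values are illustrative assumptions rather than library defaults:

```python
from fastai.vision.all import *
from fasterai.sparse.all import *

# Standard fastai setup: download the Oxford-IIIT Pets dataset and build DataLoaders.
path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path/"images"),
    pat=r'(.+)_\d+.jpg$', item_tfms=Resize(224), bs=32)

# Train a ResNet-18 while gradually zeroing out 50% of its weights by magnitude.
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(5, cbs=SparsifyCallback(sparsity=50, granularity='weight',
                                            context='local', criteria='magnitude',
                                            schedule='gradual'))
```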


Installation

pip install git+https://github.com/FasterAI-Labs/fasterai.git

or

pip install fasterai


Tutorials


Join the community

Join our Discord server to meet other fasterai users and share your projects!


Citing

@software{Hubens,
  author    = {Nathan Hubens},
  title     = {fasterai},
  year      = 2022,
  publisher = {Zenodo},
  version   = {v0.1.6},
  doi       = {10.5281/zenodo.6469868},
  url       = {https://doi.org/10.5281/zenodo.6469868}
}


License

Apache-2.0 License.


Owner

  • Name: FasterAI-Labs
  • Login: FasterAI-Labs
  • Kind: organization

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Hubens"
  given-names: "Nathan"
title: "fasterai"
version: 0.1.8
doi: 10.5281/zenodo.6469868
date-released: 2022-06-06
license: "Apache-2.0"
url: "https://nathanhubens.github.io/fasterai/"
repository-code: "https://github.com/nathanhubens/fasterai"
keywords:
  - machine learning
  - deep learning
  - artificial intelligence
  - pruning
  - knowledge distillation
  - compression

GitHub Events

Total
  • Create event: 3
  • Issues event: 1
  • Release event: 1
  • Watch event: 4
  • Issue comment event: 2
  • Push event: 33
  • Pull request event: 5
  • Fork event: 1
Last Year
  • Create event: 3
  • Issues event: 1
  • Release event: 1
  • Watch event: 4
  • Issue comment event: 2
  • Push event: 33
  • Pull request event: 5
  • Fork event: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 3
  • Average time to close issues: N/A
  • Average time to close pull requests: about 9 hours
  • Total issue authors: 1
  • Total pull request authors: 2
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.33
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 3
  • Average time to close issues: N/A
  • Average time to close pull requests: about 9 hours
  • Issue authors: 1
  • Pull request authors: 2
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.33
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • RensDimmendaal (1)
Pull Request Authors
  • nathanhubens (2)
  • deven367 (1)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
  • pypi: 50 last month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 21
  • Total maintainers: 1
pypi.org: fasterai

A library to make neural networks lighter and faster with fastai

  • Versions: 21
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 50 Last month
Rankings
Stargazers count: 4.5%
Forks count: 8.9%
Dependent packages count: 10.0%
Average: 12.5%
Downloads: 17.2%
Dependent repos count: 21.8%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/deploy.yaml actions
  • fastai/workflows/quarto-ghp master composite
.github/workflows/test.yaml actions
  • fastai/workflows/nbdev-ci master composite
docker-compose.yml docker
setup.py pypi