finetuning-scheduler
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
Science Score: 59.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 1 DOI reference(s) in README
- ✓ Academic publication links: links to zenodo.org
- ✓ Committers with academic emails: 1 of 4 committers (25.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.3%) to scientific vocabulary
Keywords
Repository
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
Basic Info
- Host: GitHub
- Owner: speediedan
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://finetuning-scheduler.readthedocs.io
- Size: 2.66 MB
Statistics
- Stars: 66
- Watchers: 4
- Forks: 6
- Open Issues: 0
- Releases: 43
Topics
Metadata Files
README.md
**A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.**
______________________________________________________________________
[PyPI](https://pypi.org/project/finetuning-scheduler/)
[PyPI version](https://badge.fury.io/py/finetuning-scheduler)
[codecov](https://codecov.io/gh/speediedan/finetuning-scheduler)
[Documentation](https://finetuning-scheduler.readthedocs.io/en/stable/)
[DOI](https://zenodo.org/badge/latestdoi/455666112)
[License](https://github.com/speediedan/finetuning-scheduler/blob/master/LICENSE)

FinetuningScheduler is simple to use yet powerful, offering a number of features that facilitate model research and exploration:
- easy specification of flexible fine-tuning schedules with explicit or regex-based parameter selection (an explicit-schedule sketch follows this list)
- implicit schedules for initial/naive model exploration
- explicit schedules for performance tuning, fine-grained behavioral experimentation and computational efficiency
- automatic restoration of best per-phase checkpoints driven by iterative application of early-stopping criteria to each fine-tuning phase
- composition of early-stopping and manually-set epoch-driven fine-tuning phase transitions
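For a concrete sense of what an explicit schedule looks like, the sketch below writes a tiny two-phase schedule to a YAML file. The phase keys, `params` patterns, and `max_transition_epoch` field follow the format described in the project documentation, but the parameter names are placeholders and the schema details should be verified against the docs.

```python
# Minimal sketch of an explicit fine-tuning schedule file (schema assumed from the
# FTS docs; the parameter-name patterns are placeholders for your own model).
from pathlib import Path

EXPLICIT_SCHEDULE = """\
0:
  params:                   # phase 0: thaw only the task head
    - model.classifier.*
1:
  params:                   # phase 1: additionally thaw the top encoder layer
    - model.encoder.layer.11.*
  max_transition_epoch: 9   # optionally force the transition if early stopping hasn't triggered
"""

Path("explicit_schedule.yaml").write_text(EXPLICIT_SCHEDULE)
```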
Setup
Step 0: Install from PyPI
```bash
pip install finetuning-scheduler
```
Additional installation options
#### *Install Optional Packages*

To install additional packages required for examples:

```bash
pip install finetuning-scheduler['examples']
```

or to include packages for examples, development and testing:

```bash
pip install finetuning-scheduler['all']
```

#### *Source Installation Examples*

To install from (editable) source (includes docs as well):

```bash
# FTS pins Lightning to a specific commit for CI and development
# This is similar to PyTorch's approach with Triton.
export USE_CI_COMMIT_PIN="1"
git clone https://github.com/speediedan/finetuning-scheduler.git
cd finetuning-scheduler
python -m pip install -e ".[all]" -r requirements/docs.txt
```

#### Install a specific FTS version from source using the standalone `pytorch-lightning` package:

```bash
export FTS_VERSION=2.6.0
export PACKAGE_NAME=pytorch
git clone -b v${FTS_VERSION} https://github.com/speediedan/finetuning-scheduler
cd finetuning-scheduler
python -m pip install -e ".[all]" -r requirements/docs.txt
```

#### *Latest Docker Image*

Note, publishing of new `finetuning-scheduler` version-specific docker images was paused after the `2.0.2` patch release. If new version-specific images are required, please raise an issue.

Step 1: Import the FinetuningScheduler callback and start fine-tuning!
```python
import lightning as L
from finetuning_scheduler import FinetuningScheduler

trainer = L.Trainer(callbacks=[FinetuningScheduler()])
```
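A slightly fuller sketch is shown below. It assumes the `FTSEarlyStopping` and `FTSCheckpoint` callbacks (the FTS extensions of Lightning's `EarlyStopping`/`ModelCheckpoint`) and the `ft_schedule` argument behave as described in the project docs, and the schedule path is the hypothetical file written in the earlier sketch.

```python
import lightning as L
from finetuning_scheduler import FinetuningScheduler, FTSCheckpoint, FTSEarlyStopping

callbacks = [
    # ft_schedule is optional; omitting it falls back to the implicitly generated schedule.
    FinetuningScheduler(ft_schedule="explicit_schedule.yaml"),  # hypothetical schedule path
    # FTS extensions of Lightning's EarlyStopping/ModelCheckpoint drive and checkpoint each phase.
    FTSEarlyStopping(monitor="val_loss", min_delta=0.001, patience=2),
    FTSCheckpoint(monitor="val_loss", save_top_k=1),
]

trainer = L.Trainer(callbacks=callbacks, max_epochs=100)
# trainer.fit(my_lightning_module, datamodule=my_datamodule)  # supply your own LightningModule/data
```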
Get started by following the Fine-Tuning Scheduler introduction which includes a CLI-based example or by following the notebook-based Fine-Tuning Scheduler tutorial.
Installation Using the Standalone pytorch-lightning Package
applicable to versions >= 2.0.0
Now that the core Lightning package is lightning rather than pytorch-lightning, Fine-Tuning Scheduler (FTS) by default depends upon the lightning package rather than the standalone pytorch-lightning. If you would like to continue to use FTS with the standalone pytorch-lightning package instead, you can still do so as follows:
Install a given FTS release (for example v2.0.0) using standalone pytorch-lightning:
```bash
export FTS_VERSION=2.0.0
export PACKAGE_NAME=pytorch
wget https://github.com/speediedan/finetuning-scheduler/releases/download/v${FTS_VERSION}/finetuning-scheduler-${FTS_VERSION}.tar.gz
pip install finetuning-scheduler-${FTS_VERSION}.tar.gz
```
Dynamic Versioning
FTS (as of version 2.6.0) now enables dynamic versioning both at installation time and via CLI post-installation. Initially, the dynamic versioning system allows toggling between Lightning unified and standalone imports. The two conversion operations are individually idempotent and mutually reversible.
Toggling Between Unified and Standalone Lightning Imports
FTS provides a simple CLI tool to easily toggle between unified and standalone import installation versions post-installation:
```bash
# Toggle from unified to standalone Lightning imports
toggle-lightning-mode --mode standalone

# Toggle from standalone to unified Lightning imports (default)
toggle-lightning-mode --mode unified
```
Note: If you have the standalone package (`pytorch-lightning`) installed but not the unified package (`lightning`), toggling to unified mode will be prevented. You must install the `lightning` package first before toggling (a quick environment check is sketched after the list below).
This can be useful when:
- You need to adapt existing code to work with a different Lightning package
- You're switching between projects using different Lightning import styles
- You want to test compatibility with both import styles
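Before toggling to unified mode, it can help to confirm which Lightning distribution is importable in the active environment. The following is a minimal sketch (not part of the FTS CLI) using only the standard library:

```python
# Minimal sketch: confirm which Lightning distribution is importable before
# running `toggle-lightning-mode`.
from importlib.util import find_spec

unified = find_spec("lightning") is not None
standalone = find_spec("pytorch_lightning") is not None

if standalone and not unified:
    print("Only pytorch-lightning is installed; install `lightning` before toggling to unified mode.")
else:
    print(f"lightning installed: {unified}; pytorch-lightning installed: {standalone}")
```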
Examples
Scheduled Fine-Tuning For SuperGLUE
- Notebook-based Tutorial
- CLI-based Tutorial
- FSDP Scheduled Fine-Tuning
- LR Scheduler Reinitialization (advanced)
- Optimizer Reinitialization (advanced)
Continuous Integration
Fine-Tuning Scheduler is rigorously tested across multiple CPUs, GPUs and against major Python and PyTorch versions. Each Fine-Tuning Scheduler minor release (major.minor.patch) is paired with a Lightning minor release (e.g. Fine-Tuning Scheduler 2.0 depends upon Lightning 2.0).
To ensure maximum stability, the latest Lightning patch release fully tested with Fine-Tuning Scheduler is set as a maximum dependency in Fine-Tuning Scheduler's requirements.txt (e.g. <= 1.7.1). If you'd like to test a specific Lightning patch version greater than that currently in Fine-Tuning Scheduler's requirements.txt, it will likely work but you should install Fine-Tuning Scheduler from source and update the requirements.txt as desired.
Current build statuses for Fine-Tuning Scheduler
| System / (PyTorch/Python ver) | 2.3.1/3.9 | 2.9.0/3.9, 2.9.0/3.12 |
| :---------------------------: | :-------: | :-------------------: |
| Linux \[GPUs\*\*\] | - | [build status](https://dev.azure.com/speediedan/finetuning-scheduler/_build/latest?definitionId=1&branchName=main) |
| Linux (Ubuntu 22.04) | [build status](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) | [build status](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) |
| OSX (14) | [build status](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) | [build status](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) |
| Windows (2022) | [build status](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) | [build status](https://github.com/speediedan/finetuning-scheduler/actions/workflows/ci_test-full.yml) |

\*\* tests run on one RTX 4090 and one RTX 2070

Community
Fine-Tuning Scheduler is developed and maintained by the community in close communication with the Lightning team. Thanks to everyone in the community for their tireless effort building and improving the immensely useful core Lightning project.
PRs are welcome! Please see the contributing guidelines (which are essentially the same as Lightning's).
Citing Fine-Tuning Scheduler
Please cite:
```tex
@misc{Dan_Dale_2022_6463952,
    author = {Dan Dale},
    title = {{Fine-Tuning Scheduler}},
    month = feb,
    year = 2022,
    doi = {10.5281/zenodo.6463952},
    publisher = {Zenodo},
    url = {https://zenodo.org/record/6463952}
}
```
Feel free to star the repo as well if you find it useful or interesting. Thanks 😊!
Owner
- Name: Dan Dale
- Login: speediedan
- Kind: user
- Repositories: 5
- Profile: https://github.com/speediedan
GitHub Events
Total
- Create event: 6
- Issues event: 4
- Release event: 5
- Watch event: 6
- Delete event: 2
- Issue comment event: 4
- Push event: 38
- Pull request event: 3
- Fork event: 2
Last Year
- Create event: 6
- Issues event: 4
- Release event: 5
- Watch event: 6
- Delete event: 2
- Issue comment event: 4
- Push event: 38
- Pull request event: 3
- Fork event: 2
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Daniel Dale | d****e@g****m | 481 |
| Olaf Lipinski | o****i@s****k | 6 |
| Levente Szabados | s****i@g****m | 3 |
| Ihar Hrachyshka | i****a@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 13
- Total pull requests: 4
- Average time to close issues: 2 months
- Average time to close pull requests: 7 days
- Total issue authors: 11
- Total pull request authors: 4
- Average comments per issue: 2.38
- Average comments per pull request: 1.5
- Merged pull requests: 4
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 2
- Pull requests: 2
- Average time to close issues: 12 days
- Average time to close pull requests: 12 days
- Issue authors: 2
- Pull request authors: 2
- Average comments per issue: 1.5
- Average comments per pull request: 2.5
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- quancs (2)
- JohannesK14 (2)
- Davidham3 (1)
- CyprienRicque (1)
- josedvq (1)
- olipinski (1)
- funnym0nk3y (1)
- jnyjxn (1)
- ZeguanXiao (1)
- GaetanLepage (1)
- samgelman (1)
Pull Request Authors
- booxter (2)
- olipinski (2)
- solalatus (1)
- speediedan (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 3
- Total downloads: pypi 3,894 last-month
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 95
- Total maintainers: 1
proxy.golang.org: github.com/speediedan/finetuning-scheduler
- Documentation: https://pkg.go.dev/github.com/speediedan/finetuning-scheduler#section-documentation
- License: apache-2.0
- Latest release: v2.5.3+incompatible (published 6 months ago)
Rankings
pypi.org: finetuning-scheduler
A PyTorch Lightning extension that enhances model experimentation with flexible fine-tuning schedules.
- Homepage: https://github.com/speediedan/finetuning-scheduler
- Documentation: https://finetuning-scheduler.readthedocs.io/en/latest/
- License: Apache-2.0
- Latest release: 2.5.3 (published 6 months ago)
Rankings
Maintainers (1)
conda-forge.org: finetuning-scheduler
The FinetuningScheduler callback accelerates and enhances foundational model experimentation with flexible fine-tuning schedules. Training with the FinetuningScheduler callback is simple and confers a host of benefits:
- it dramatically increases fine-tuning flexibility
- expedites and facilitates exploration of model tuning dynamics
- enables marginal performance improvements of finetuned models

Fundamentally, the FinetuningScheduler callback enables multi-phase, scheduled fine-tuning of foundational models. Gradual unfreezing (i.e. thawing) can help maximize foundational model knowledge retention while allowing (typically upper layers of) the model to optimally adapt to new tasks during transfer learning. FinetuningScheduler orchestrates the gradual unfreezing of models via a fine-tuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Fine-tuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping), user-specified epoch transitions or a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met.

Documentation:
- https://finetuning-scheduler.readthedocs.io/en/stable/
- https://finetuning-scheduler.readthedocs.io/en/latest/
- Homepage: https://github.com/speediedan/finetuning-scheduler
- License: Apache-2.0
- Latest release: 0.2.3 (published over 3 years ago)
Rankings
Dependencies
- docutils >=0.16
- jinja2 >=3.0.0,<3.1.0
- myst-parser >=0.15,<0.17
- nbsphinx >=0.8.5
- pandoc >=1.0
- pt_lightning_sphinx_theme 057f4c3e669948bc618eec1688b016f07140cc0d
- sphinx >=4.0,<5.0
- sphinx-autodoc-typehints >=1.11,<1.15
- sphinx-copybutton >=0.3
- sphinx-paramlinks >=0.5.1
- sphinx-togglebutton >=0.2
- sphinxcontrib-fulltoc >=1.0
- sphinxcontrib-mockautodoc *
- typing-extensions *
- lightning c3299d2c595d764707d31da1061611a73d4301f7
- torch >=1.9.
- hydra-core >=1.1.0
- jsonargparse >=4.9.0
- omegaconf >=2.1.0
- datasets *
- scikit-learn *
- sentencepiece *
- transformers >=4.18.0
- fairscale >=0.4.5
- rich >=10.2.2
- ipython *
- jupytext >=1.10
- nbval >=0.9.6
- codecov >=2.1 test
- coverage >=6.4 test
- flake8 >=3.9.2 test
- mypy >=0.920 test
- pre-commit >=1.0 test
- pytest >=6.0 test
- pytest-rerunfailures >=10.2 test
- twine ==3.2 test
- actions/checkout v3 composite
- actions/cache v2 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/upload-artifact v3 composite
- codecov/codecov-action v3 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v3 composite
- docker/build-push-action v2 composite
- docker/login-action v1 composite
- AButler/upload-release-assets v2.0 composite
- actions/checkout v3 composite
- actions/download-artifact v3 composite
- actions/setup-python v4 composite
- actions/upload-artifact v2 composite
- juliangruber/sleep-action v1 composite
- pypa/gh-action-pypi-publish v1.5.0 composite
- nvidia/cuda ${CUDA_VERSION}-devel-${OS_VER} build
- speediedan/finetuning-scheduler base-${CUST_BASE}py${PYTHON_VERSION}-pt${PYTORCH_VERSION}-pl${LIGHTNING_VERSION} build
- speediedan/finetuning-scheduler base-${CUST_BASE}py${PYTHON_VERSION}-pt${PYTORCH_VERSION}-pl${LIGHTNING_VERSION} build
- nvidia/cuda ${CUDA_VERSION}-devel-ubuntu20.04 build