pfl
Simulation framework for accelerating research in Private Federated Learning
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (16.9%) to scientific vocabulary
Repository
Simulation framework for accelerating research in Private Federated Learning
Basic Info
- Host: GitHub
- Owner: apple
- License: apache-2.0
- Language: Jupyter Notebook
- Default Branch: develop
- Homepage: http://apple.github.io/pfl-research/
- Size: 3.8 MB
Statistics
- Stars: 334
- Watchers: 21
- Forks: 39
- Open Issues: 11
- Releases: 5
Metadata Files
README.md
pfl: Python framework for Private Federated Learning simulations
Documentation website: https://apple.github.io/pfl-research
pfl is a Python framework developed at Apple to empower researchers to run efficient simulations with privacy-preserving federated learning (FL) and disseminate the results of their research in FL. We are a team with both engineering and research expertise, and we encourage researchers to publish their papers, together with this code, with confidence.
The framework is not intended for third-party FL deployments, but the results of its simulations can be tremendously useful in actual FL deployments.
We hope that pfl will promote open research in FL and its effective dissemination.
pfl provides several useful features, including the following:
- Get started quickly trying out PFL for your use case with your existing model and data.
- Iterate quickly with fast simulations utilizing multiple levels of distributed training (multiple processes, GPUs and machines).
- Flexibility and expressiveness: when a researcher has a PFL idea to try, pfl has flexible APIs to express it.
- Scalable simulations for large experiments with state-of-the-art algorithms and models.
- Support both PyTorch and TensorFlow.
- Unified benchmarks for datasets that have been vetted for both PyTorch and TensorFlow.
- Support other models in addition to neural networks, e.g. GBDTs. Switching between types of models is seamless.
- Tight integration with privacy features, including common mechanisms for local and central differential privacy.
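To make the last point concrete, the central-DP mechanism most commonly paired with federated learning is the Gaussian mechanism: clip each client's update to a fixed L2 norm bound, sum, and add noise calibrated to that bound. The sketch below is generic NumPy, not pfl's actual API; the function and parameter names (`clip_update`, `gaussian_mechanism`, `noise_multiplier`) are illustrative.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Scale the client update so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def gaussian_mechanism(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Central-DP aggregation: clip each update, sum, then add Gaussian
    # noise with std = noise_multiplier * clip_norm (the L2 sensitivity
    # of the sum), and finally average over the number of clients.
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

updates = [np.array([3.0, 4.0]), np.array([0.5, -0.5])]
avg = gaussian_mechanism(updates, clip_norm=1.0, noise_multiplier=1.1)
```

The privacy guarantee then depends on `noise_multiplier`, the sampling rate, and the number of rounds, which is what DP accountants (such as the `dp-accounting` dependency listed below) compute.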
Results from benchmarks are maintained in this Weights & Biases report.
Installation
Installation instructions can be found here.
pfl is available on PyPI, and a full installation can be done with pip:
pip install 'pfl[tf,pytorch,trees]'
Getting started - tutorial notebooks
To try out pfl immediately without installation, we provide several Colab notebooks for learning the different components of pfl hands-on.
Introduction to Federated Learning with CIFAR10 and TensorFlow.
Introduction to PFL research with FLAIR and PyTorch.
Introduction to Differential Privacy (DP) with Federated Learning.
Creating Federated Dataset for PFL Experiment.
We also support MLX:
- (Jupyter notebook) Introduction to Federated Learning with CIFAR10 and MLX.
This notebook must be run locally on Apple silicon; all available Jupyter notebooks are listed here.
Getting started - benchmarks
pfl aims to streamline the benchmarking process of testing hypotheses in the Federated Learning paradigm. The official benchmarks are available in the benchmarks directory, using a variety of realistic dataset-model combinations with and without differential privacy (yes, we do also have CIFAR10).
Copying these examples is a great starting point for doing your own research. See the quickstart on how to start converging a model on the simplest benchmark (CIFAR10) in just a few minutes.
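The core loop these benchmarks exercise is federated averaging: each round, sampled clients train locally on their own data and the server averages the resulting weights. The sketch below is a generic NumPy illustration, not pfl's API; `local_sgd` is a hypothetical stand-in for real client training (here, gradient steps on a least-squares objective).

```python
import numpy as np

def local_sgd(weights, data, lr=0.1, epochs=1):
    # Hypothetical local training: full-batch gradient steps on a
    # least-squares objective, standing in for real client training.
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_weights, client_datasets, lr=0.1):
    # One round of FedAvg: every client trains locally from the same
    # global weights, and the server averages the results.
    local_weights = [local_sgd(global_weights, d, lr) for d in client_datasets]
    return np.mean(local_weights, axis=0)

# Synthetic federation: four clients sharing the same true model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
```

After enough rounds `w` approaches `w_true`; a real pfl simulation replaces `local_sgd` with framework-managed PyTorch/TensorFlow training and adds cohort sampling, metrics, and optional DP.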
Contributing
Researchers are invited to contribute to the framework. Please see here for more details.
Citing pfl-research
@article{granqvist2024pfl,
title={pfl-research: simulation framework for accelerating research in Private Federated Learning},
author={Granqvist, Filip and Song, Congzheng and Cahill, {\'A}ine and van Dalen, Rogier and Pelikan, Martin and Chan, Yi Sheng and Feng, Xiaojun and Krishnaswami, Natarajan and Jina, Vojta and Chitnis, Mona},
journal={arXiv preprint arXiv:2404.06430},
year={2024},
}
Owner
- Name: Apple
- Login: apple
- Kind: organization
- Location: Cupertino, CA
- Website: https://apple.com
- Repositories: 305
- Profile: https://github.com/apple
Citation (CITATION.cff)
cff-version: 1.2.0
title: pfl
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- given-names: Filip
family-names: Granqvist
affiliation: Apple
- given-names: Congzheng
family-names: Song
affiliation: Apple
- given-names: Áine
family-names: Cahill
affiliation: Apple
- given-names: Rogier
family-names: van Dalen
orcid: "https://orcid.org/0000-0002-9603-5771"
- given-names: Martin
family-names: Pelikan
affiliation: Apple
- given-names: Yi Sheng
family-names: Chan
affiliation: Apple
- given-names: Xiaojun
family-names: Feng
affiliation: Apple
- given-names: Natarajan
family-names: Krishnaswami
affiliation: Apple
- given-names: Vojta
family-names: Jina
affiliation: Apple
- given-names: Mona
family-names: Chitnis
affiliation: Apple
repository-code: 'https://github.com/apple/pfl-research'
abstract: >-
pfl: simulation framework for accelerating research in Private Federated Learning
license: MIT
preferred-citation:
type: article
authors:
- given-names: Filip
family-names: Granqvist
affiliation: Apple
- given-names: Congzheng
family-names: Song
affiliation: Apple
- given-names: Áine
family-names: Cahill
affiliation: Apple
- given-names: Rogier
family-names: van Dalen
orcid: "https://orcid.org/0000-0002-9603-5771"
- given-names: Martin
family-names: Pelikan
affiliation: Apple
- given-names: Yi Sheng
family-names: Chan
affiliation: Apple
- given-names: Xiaojun
family-names: Feng
affiliation: Apple
- given-names: Natarajan
family-names: Krishnaswami
affiliation: Apple
- given-names: Vojta
family-names: Jina
affiliation: Apple
- given-names: Mona
family-names: Chitnis
affiliation: Apple
doi: "10.48550/arXiv.2404.06430"
journal: "arXiv preprint arXiv:2404.06430"
month: 4
title: "pfl-research: simulation framework for accelerating research in Private Federated Learning"
year: 2024
GitHub Events
Total
- Create event: 3
- Release event: 1
- Issues event: 3
- Watch event: 46
- Issue comment event: 12
- Push event: 14
- Pull request review event: 35
- Pull request review comment event: 27
- Pull request event: 37
- Fork event: 9
Last Year
- Create event: 3
- Release event: 1
- Issues event: 3
- Watch event: 46
- Issue comment event: 12
- Push event: 14
- Pull request review event: 35
- Pull request review comment event: 27
- Pull request event: 37
- Fork event: 9
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| fgranqvist | f****t@h****m | 55 |
| congzheng-song | 9****g | 9 |
| ac554 | 4****4 | 9 |
| Martin Pelikan | 1****e | 8 |
| Shauvik RC | s****k@g****m | 2 |
| Rogier van Dalen | r****d | 2 |
| Mona Chitnis | m****s@g****m | 2 |
| jonnyascott | 6****t | 1 |
| Luke Carlson | j****n@g****m | 1 |
| Gabriel Ayres | 9****s | 1 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 15
- Total pull requests: 151
- Average time to close issues: about 1 month
- Average time to close pull requests: 5 days
- Total issue authors: 5
- Total pull request authors: 13
- Average comments per issue: 0.33
- Average comments per pull request: 0.24
- Merged pull requests: 128
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 2
- Pull requests: 37
- Average time to close issues: 26 days
- Average time to close pull requests: 17 days
- Issue authors: 2
- Pull request authors: 10
- Average comments per issue: 0.5
- Average comments per pull request: 0.49
- Merged pull requests: 27
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- grananqvist (11)
- sonnguyenasu (1)
- sohaib-idalin (1)
- warisgill (1)
- jonnyascott (1)
Pull Request Authors
- grananqvist (89)
- martin-pelikan-apple (14)
- congzheng-song (13)
- ac554 (13)
- gabrielfnayres (4)
- rogiervd (3)
- madrob (3)
- jonnyascott (3)
- RobRomijnders (2)
- nkrishnaswami (2)
- jlukecarlson (2)
- shauvik (2)
- monachitnis (1)
Packages
- Total packages: 1
- Total downloads: 14,410 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 1
- Total versions: 9
- Total maintainers: 1
pypi.org: pfl
Simulation framework for Private Federated Learning
- Homepage: https://github.com/apple/pfl-research
- Documentation: https://pfl.readthedocs.io/
- License: apache-2.0
- Latest release: 0.4.0 (published 6 months ago)
Maintainers (1)
Dependencies
- base latest build
- nvidia/cuda ${CUDA_VERSION}-cudnn${CUDNN_VERSION}-${RUNTIME_TYPE}-ubuntu22.04 build
- 102 dependencies
- mock ^5.1.0 develop
- mypy 1.5 develop
- pre-commit ^2.20.0 develop
- pytest ^7.2.0 develop
- pytest-lazy-fixture ^0.6.3 develop
- pytest-xdist ^3.3.1 develop
- ruff 0.0.290 develop
- scikit-learn ^1.0.2 develop
- yapf ^0.40.1 develop
- awscli ^1.32.29
- h5py ^3.8.0
- multiprocess ^0.70.12
- pfl (local path "../"; extras: tf [marker: extra == 'tf'], pytorch [marker: extra == 'pytorch']; develop)
- pillow >=10.2.0
- python >=3.10,<3.11
- tensorflow ^2.14.0
- tensorflow_addons >=0.20.0,<1
- tensorflow_probability ^0.22
- torch 2.0.1+cu118 (source: torch_cu118; sys_platform == 'linux'; optional), 2.0.1 (PyPI; sys_platform == 'darwin'; optional)
- torchvision 0.15.2+cu118 (source: torch_cu118; sys_platform == 'linux'; optional), 0.15.2 (PyPI; sys_platform == 'darwin'; optional)
- tqdm ^4.63.1
- 123 dependencies
- bump-my-version ^0.9.3 develop
- deptry ^0.6.4 develop
- mypy 1.5 develop
- pre-commit ^2.20.0 develop
- pytest ^7.2.0 develop
- pytest-cov ^4.0.0 develop
- pytest-env ^1.0.1 develop
- pytest-lazy-fixture ^0.6.3 develop
- pytest-xdist ^3.3.1 develop
- ruff 0.0.290 develop
- tox ^3.25.1 develop
- yapf ^0.40.1 develop
- furo ^2023.8.19 docs
- sphinx ^7.1.2 docs
- sphinx-autodoc-typehints ^1.24.0 docs
- sphinx-last-updated-by-git ^0.3.6 docs
- cmake ^3.27.5
- dp-accounting ^0.4
- multiprocess ^0.70.15
- numpy ^1.21
- prv-accountant ^0.2.0
- python >=3.10,<3.12
- scikit-learn ^1.0.2
- scipy ^1.7.3
- tensorflow ^2.14 (sys_platform == 'linux'; optional), ^2.14 (sys_platform == 'darwin' and platform_machine == 'x86_64'; optional)
- tensorflow-macos ^2.14
- tensorflow-probability ^0.22
- torch 2.0.1+cu118 (source: torch_cu118; sys_platform == 'linux'; optional), 2.0.1 (PyPI; sys_platform == 'darwin'; optional)
- wheel ^0.41.2
- xgboost ^1.4.2