fl-bench
Benchmark of federated learning. Dedicated to the community. 🤗
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- CITATION.cff file: found
- codemeta.json file: found
- .zenodo.json file: found
- DOI references: none found
- Academic publication links: links to arxiv.org, researchgate.net, ieee.org
- Academic email domains: none found
- Institutional organization owner: none found
- JOSS paper metadata: none found
- Scientific vocabulary similarity: low (9.3%)
Keywords
Repository
Benchmark of federated learning. Dedicated to the community. 🤗
Basic Info
Statistics
- Stars: 626
- Watchers: 6
- Forks: 105
- Open Issues: 3
- Releases: 0
Topics
Metadata Files
README.md
Benchmarking Federated Learning Methods. Realizing Your Brilliant Ideas. Having Fun with Federated Learning. FL-bench welcomes PRs for anything that can make this project better.
Methods 🧬
- FedAvg -- Communication-Efficient Learning of Deep Networks from Decentralized Data (AISTATS'17)
- FedAvgM -- Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification (ArXiv'19)
- FedProx -- Federated Optimization in Heterogeneous Networks (MLSys'20)
- SCAFFOLD -- SCAFFOLD: Stochastic Controlled Averaging for Federated Learning (ICML'20)
- MOON -- Model-Contrastive Federated Learning (CVPR'21)
- FedDyn -- Federated Learning Based on Dynamic Regularization (ICLR'21)
- FedLC -- Federated Learning with Label Distribution Skew via Logits Calibration (ICML'22)
- FedGen -- Data-Free Knowledge Distillation for Heterogeneous Federated Learning (ICML'21)
- CCVR -- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data (NeurIPS'21)
- FedOpt -- Adaptive Federated Optimization (ICLR'21)
- FedADMM -- FedADMM: A robust federated deep learning framework with adaptivity to system heterogeneity (ICDE'22)
- Elastic Aggregation -- Elastic Aggregation for Federated Optimization (CVPR'23)
- FedFed -- FedFed: Feature Distillation against Data Heterogeneity in Federated Learning (NeurIPS'23)
- pFedSim (My Work ⭐) -- pFedSim: Similarity-Aware Model Aggregation Towards Personalized Federated Learning (ArXiv'23)
- Local-Only -- Local training only (without communication).
- FedMD -- FedMD: Heterogenous Federated Learning via Model Distillation (NeurIPS'19)
- APFL -- Adaptive Personalized Federated Learning (ArXiv'20)
- LG-FedAvg -- Think Locally, Act Globally: Federated Learning with Local and Global Representations (ArXiv'20)
- FedBN -- FedBN: Federated Learning On Non-IID Features Via Local Batch Normalization (ICLR'21)
- FedPer -- Federated Learning with Personalization Layers (AISTATS'20)
- FedRep -- Exploiting Shared Representations for Personalized Federated Learning (ICML'21)
- Per-FedAvg -- Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach (NeurIPS'20)
- pFedMe -- Personalized Federated Learning with Moreau Envelopes (NeurIPS'20)
- FedEM -- Federated Multi-Task Learning under a Mixture of Distributions (NeurIPS'21)
- Ditto -- Ditto: Fair and Robust Federated Learning Through Personalization (ICML'21)
- pFedHN -- Personalized Federated Learning using Hypernetworks (ICML'21)
- pFedLA -- Layer-Wised Model Aggregation for Personalized Federated Learning (CVPR'22)
- CFL -- Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints (ArXiv'19)
- FedFomo -- Personalized Federated Learning with First Order Model Optimization (ICLR'21)
- FedBabu -- FedBabu: Towards Enhanced Representation for Federated Image Classification (ICLR'22)
- FedAP -- Personalized Federated Learning with Adaptive Batchnorm for Healthcare (IEEE'22)
- kNN-Per -- Personalized Federated Learning through Local Memorization (ICML'22)
- MetaFed -- MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare (IJCAI'22)
- FedRoD -- On Bridging Generic and Personalized Federated Learning for Image Classification (ICLR'22)
- FedProto -- FedProto: Federated prototype learning across heterogeneous clients (AAAI'22)
- FedPAC -- Personalized Federated Learning with Feature Alignment and Classifier Collaboration (ICLR'23)
- FedALA -- FedALA: Adaptive Local Aggregation for Personalized Federated Learning (AAAI'23)
- PeFLL -- PeFLL: Personalized Federated Learning by Learning to Learn (ICLR'24)
- FLUTE -- Federated Representation Learning in the Under-Parameterized Regime (ICML'24)
- FedAS -- FedAS: Bridging Inconsistency in Personalized Federated Learning (CVPR'24)
- pFedFDA -- pFedFDA: Personalized Federated Learning via Feature Distribution Adaptation (NeurIPS 2024)
- Floco -- Federated Learning over Connected Modes (NeurIPS'24)
- FedAH -- FedAH: Aggregated Head for Personalized Federated Learning (ArXiv'24)
- FedSR -- FedSR: A Simple and Effective Domain Generalization Method for Federated Learning (NeurIPS'22)
- ADCOL -- Adversarial Collaborative Learning on Non-IID Features (ICML'23)
- FedIIR -- Out-of-Distribution Generalization of Federated Learning via Implicit Invariant Relationships (ICML'23)
Environment Preparation 🧩
PyPI 🐍
```sh
pip install -r .env/requirements.txt
```
Poetry 🎶
For users in mainland China:
```sh
poetry install --no-root -C .env
```
For others:
```sh
cd .env && sed -i "10,14d" pyproject.toml && poetry lock --no-update && poetry install --no-root
```
Docker 🐳
```sh
docker pull ghcr.io/karhoutam/fl-bench:master
```
An example of running a container:
```sh
docker run -it --name fl-bench -v path/to/FL-bench:/root/FL-bench --privileged --gpus all ghcr.io/karhoutam/fl-bench:master
```
Easy Run 🏃‍♂️
All method classes inherit from FedAvgServer and FedAvgClient. To understand the entire workflow and the details of variable settings, go check src/server/fedavg.py and src/client/fedavg.py.
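For orientation, the heart of FedAvg's server step is a data-size-weighted average of the clients' parameters. Below is a minimal sketch of that idea in plain Python; it is an illustration of the algorithm, not FL-bench's actual FedAvgServer code.

```python
def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client parameter vectors (lists of floats).

    Each client's weight is its share of the total number of samples,
    matching the aggregation rule from the FedAvg paper.
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    aggregated = [0.0] * dim
    for params, size in zip(client_params, client_sizes):
        weight = size / total
        for j in range(dim):
            aggregated[j] += weight * params[j]
    return aggregated

# Two clients; the second holds 3x the data, so its parameters dominate.
assert fedavg_aggregate([[0.0, 0.0], [4.0, 8.0]], [1, 3]) == [3.0, 6.0]
```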
Step 1. Generate FL Dataset
Partition MNIST according to Dir(0.1) for 100 clients:
```shell
python generate_data.py -d mnist -a 0.1 -cn 100
```
For details on the methods of generating federated datasets, go check data/README.md.
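For intuition, Dirichlet partitioning draws each class's per-client proportions from Dir(α); smaller α (e.g. the 0.1 above) yields more skewed, non-IID splits. The sketch below illustrates the idea using only the standard library (a Dirichlet draw built from gamma samples); it is not FL-bench's actual partitioning code.

```python
import random

def dirichlet_proportions(alpha, n, rng):
    """Sample from Dir(alpha, ..., alpha) via normalized gamma draws."""
    draws = [rng.gammavariate(alpha, 1.0) for _ in range(n)]
    total = sum(draws)
    return [d / total for d in draws]

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients with a per-class Dirichlet split."""
    rng = random.Random(seed)
    num_classes = max(labels) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        props = dirichlet_proportions(alpha, num_clients, rng)
        start, cum = 0, 0.0
        for client in range(num_clients):
            cum += props[client]
            # Last client takes the remainder so every index is assigned.
            end = len(idx) if client == num_clients - 1 else int(round(cum * len(idx)))
            client_indices[client].extend(idx[start:end])
            start = end
    return client_indices

parts = dirichlet_partition([0] * 50 + [1] * 50, num_clients=5, alpha=0.1)
assert sorted(i for p in parts for i in p) == list(range(100))
```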
Step 2. Run Experiment
```sh
python main.py [--config-path, --config-name] [method=<METHOD_NAME> args...]
```
- `method`: The algorithm's name, e.g., `method=fedavg`.
- `--config-path`: Relative path to the directory of the config file. Defaults to `config`.
- `--config-name`: Name of the `.yaml` config file (w/o the `.yaml` extension). Defaults to `defaults`, which points to `config/defaults.yaml`.

> [!NOTE]
> `method` should be identical to the `.py` file name in `src/server`.
For example, run FedAvg with all defaults:
```sh
python main.py method=fedavg
```
Defaults are set in both config/defaults.yaml and src/utils/constants.py.
How To Customize FL Method Arguments 🤖
- By modifying the config file.
- By explicitly setting in CLI, e.g., `python main.py --config-name my_cfg.yaml method=fedprox fedprox.mu=0.01`.
- By modifying the default value in `config/defaults.yaml` or `get_hyperparams()` in `src/server/<method>.py`.

> [!NOTE]
> For the same FL method argument, the priority of argument setting is CLI > config file > default value.

For example, the default value of `fedprox.mu` is `1`:

```python
# src/server/fedprox.py
class FedProxServer(FedAvgServer):
    @staticmethod
    def get_hyperparams(args_list=None) -> Namespace:
        parser = ArgumentParser()
        parser.add_argument("--mu", type=float, default=1.0)
        return parser.parse_args(args_list)
```

and your `.yaml` config file has:

```yaml
# config/your_config.yaml
...
fedprox:
  mu: 0.01
```

```shell
python main.py method=fedprox                                             # fedprox.mu = 1
python main.py --config-name your_config method=fedprox                   # fedprox.mu = 0.01
python main.py --config-name your_config method=fedprox fedprox.mu=0.001  # fedprox.mu = 0.001
```
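The default-value end of this priority chain is plain argparse behavior: parse_args accepts an explicit argument list, and any flag absent from that list falls back to its declared default. A standalone sketch mirroring the FedProx example above:

```python
from argparse import ArgumentParser, Namespace

def get_hyperparams(args_list=None) -> Namespace:
    """Same shape as the FedProx example: mu defaults to 1.0."""
    parser = ArgumentParser()
    parser.add_argument("--mu", type=float, default=1.0)
    return parser.parse_args(args_list)

assert get_hyperparams([]).mu == 1.0                  # nothing passed: the default wins
assert get_hyperparams(["--mu", "0.01"]).mu == 0.01   # an explicit value overrides it
```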
Monitor 📈
FL-bench supports visdom and tensorboard.
Activate
```yaml
# your_config.yaml
common:
  ...
  monitor: tensorboard  # options: [null, visdom, tensorboard]
```
> [!NOTE]
> You need to launch the visdom/tensorboard server by yourself.

Launch visdom / tensorboard Server
visdom
- Run `python -m visdom.server` on terminal.
- Go check `localhost:8097` on your browser.
tensorboard
- Run `tensorboard --logdir=<your_log_dir>` on terminal.
- Go check `localhost:6006` on your browser.
Parallel Training via Ray 🚀
This feature can vastly improve your training efficiency, and it is easy to use.
Activate (What You ONLY Need To Do)
```yaml
# your_config.yaml
mode: parallel
parallel:
  num_workers: 2  # any positive integer larger than 1
  ...
...
```
Manually Create Ray Cluster (Optional)
A Ray cluster is created implicitly every time you run an experiment in parallel mode.
> [!TIP]
> You can create it manually with the command below to avoid creating and destroying the cluster every time you run an experiment.

```shell
ray start --head [OPTIONS]
```

> [!NOTE]
> You need to keep `num_cpus: null` and `num_gpus: null` in your config file for connecting to an existing Ray cluster.

```yaml
# your_config_file.yaml
# Connect to an existing Ray cluster in localhost.
mode: parallel
parallel:
  ...
  num_gpus: null
  num_cpus: null
...
```
Arguments 🔧
FL-bench highly recommends using the config file to customize your FL method and experiment settings.
FL-bench offers a default config file config/defaults.yaml that contains all required arguments and corresponding comments.
All common arguments have their default value. Go check config/defaults.yaml or DEFAULTS in src/utils/constants.py for all argument defaults.
> [!NOTE]
> If your custom config file does not contain all required arguments, FL-bench will fill the missing arguments with defaults loaded from `DEFAULTS`.
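This filling behavior can be pictured as a recursive dictionary merge, where user-supplied keys win and missing keys come from the defaults. The sketch below is a hypothetical illustration (`fill_defaults` is an illustrative name, not FL-bench's actual function):

```python
def fill_defaults(config, defaults):
    """Return defaults overlaid with config, recursing into nested dicts."""
    merged = dict(defaults)
    for key, value in config.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = fill_defaults(value, merged[key])  # merge sub-sections
        else:
            merged[key] = value  # user value overrides the default
    return merged

defaults = {"common": {"batch_size": 32, "monitor": None}}
user_cfg = {"common": {"batch_size": 128}}
# batch_size comes from the user config; monitor is filled from defaults.
assert fill_defaults(user_cfg, defaults) == {"common": {"batch_size": 128, "monitor": None}}
```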
For the default values of specific FL method arguments, go check the corresponding src/server/<method>.py for the full details.
> [!TIP]
> FL-bench also supports CLI arguments for quick changes. Here are some examples:

```shell
# Use config/defaults.yaml but change the method to FedProx and set its mu to 0.1.
python main.py method=fedprox fedprox.mu=0.1

# Change learning rate to 0.1.
python main.py optimizer.lr=0.1

# Change batch size to 128.
python main.py common.batch_size=128
```
Models 🤖
This benchmark supports a bunch of common models integrated in Torchvision (see Torchvision's model list for all of them):
- ResNet family
- EfficientNet family
- DenseNet family
- MobileNet family
- LeNet5
- ...
> [!TIP]
> You can define your own custom model by filling the `CustomModel` class in `src/utils/models.py` and use it by setting `model: custom` in your `.yaml` config file.
Datasets and Partition Strategies 🎨
Regular Image Datasets
MNIST (1 x 28 x 28, 10 classes)
CIFAR-10/100 (3 x 32 x 32, 10/100 classes)
EMNIST (1 x 28 x 28, 62 classes)
FashionMNIST (1 x 28 x 28, 10 classes)
FEMNIST (1 x 28 x 28, 62 classes)
CelebA (3 x 218 x 178, 2 classes)
SVHN (3 x 32 x 32, 10 classes)
USPS (1 x 16 x 16, 10 classes)
Tiny-ImageNet-200 (3 x 64 x 64, 200 classes)
CINIC-10 (3 x 32 x 32, 10 classes)
Domain Generalization Image Datasets
- DomainNet (3 x ? x ?, 345 classes)
  - Go check `data/README.md` for the full process guideline 🧾.
Medical Image Datasets
COVID-19 (3 x 244 x 224, 4 classes)
Organ-S/A/CMNIST (1 x 28 x 28, 11 classes)
Customization Tips 💡
Implementing FL Method
The `package()` method of the server-side class assembles all the parameters the server needs to send to clients. Similarly, `package()` of the client-side class assembles the parameters clients need to send back to the server. Your override implementation should always call `super().package()`.
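The `super().package()` pattern can be sketched as follows; the class and payload names below are hypothetical stand-ins, not FL-bench's real payload keys.

```python
class BaseServer:
    """Stand-in for a base server class with a default payload."""
    def package(self):
        # The base payload every method sends to clients.
        return {"regular_model_params": {"w": 1.0}}

class MyMethodServer(BaseServer):
    """A method-specific server that extends the base payload."""
    def package(self):
        # Always start from the parent's payload, then add method-specific items.
        payload = super().package()
        payload["my_extra_stuff"] = 42
        return payload

pkg = MyMethodServer().package()
assert "regular_model_params" in pkg and pkg["my_extra_stuff"] == 42
```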
Consider inheriting your method classes from `FedAvgServer` and `FedAvgClient` to make the most of FL-bench's workflow. You can also inherit from more advanced methods, e.g., FedBN, FedProx, etc., which inherits all of their functions, variables, and hyperparameter settings. If you do so, design your method carefully to avoid potential hyperparameter and workflow conflicts.

```python
class YourServer(FedBNServer):
    ...

class YourClient(FedBNClient):
    ...
```

- For customizing your server-side process, consider overriding `package()` and `aggregate_client_updates()`.
- For customizing your client-side training, consider overriding `fit()`, `set_parameters()` and `package()`.
You can find all details in FedAvgClient and FedAvgServer, which are the bases of all implementations in FL-bench.
Integrating Dataset
- Inherit your own dataset class from `BaseDataset` in `data/utils/datasets.py` and add your class to the dict `DATASETS`. Referring to the existing dataset classes for guidance is highly recommended.
Customizing Model
- The `CustomModel` class in `src/utils/models.py` is provided; you just need to define your model architecture.
- If you want to use your customized model within FL-bench's workflow, `base` and `classifier` must be defined. (Tip: you can define either of them as `torch.nn.Identity()` to bypass it.)
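The base/classifier split (and the Identity bypass) boils down to composing two callables: features first, then the head. A torch-free sketch with illustrative names, where `Identity` mimics `torch.nn.Identity()`:

```python
class Identity:
    """Stands in for torch.nn.Identity(): returns its input unchanged."""
    def __call__(self, x):
        return x

class TwoStageModel:
    """Illustrative stand-in for a base + classifier model split."""
    def __init__(self, base, classifier):
        self.base = base              # feature extractor
        self.classifier = classifier  # classification head

    def __call__(self, x):
        # Forward pass: extract features, then apply the head.
        return self.classifier(self.base(x))

# Bypass the classifier by making it an Identity; only `base` transforms x.
model = TwoStageModel(base=lambda x: [2 * v for v in x], classifier=Identity())
assert model([1, 2, 3]) == [2, 4, 6]
```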
Citation 🧐
```bibtex
@software{Tan_FL-bench,
  author = {Tan, Jiahao and Wang, Xinpeng},
  license = {GPL-3.0},
  title = {{FL-bench: A federated learning benchmark for solving image classification tasks}},
  url = {https://github.com/KarhouTam/FL-bench}
}
```

```bibtex
@misc{tan2023pfedsim,
  title={pFedSim: Similarity-Aware Model Aggregation Towards Personalized Federated Learning},
  author={Jiahao Tan and Yipeng Zhou and Gang Liu and Jessie Hui Wang and Shui Yu},
  year={2023},
  eprint={2305.15706},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
Owner
- Name: Jiahao Tan
- Login: KarhouTam
- Kind: user
- Location: Shenzhen, China
- Company: Shenzhen University
- Repositories: 3
- Profile: https://github.com/KarhouTam
Pursuing a Master's degree...
Citation (CITATION.cff)
cff-version: 1.2.0
title: 'FL-bench: A federated learning benchmark for solving image classification tasks'
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- given-names: Jiahao
family-names: Tan
email: karhoutam@qq.com
affiliation: Shenzhen University
- given-names: Xinpeng
family-names: Wang
affiliation: 'The Chinese University of Hong Kong, Shenzhen'
email: 223015056@link.cuhk.edu.cn
repository-code: 'https://github.com/KarhouTam/FL-bench'
abstract: >-
Benchmark of federated learning that aim solving image
classification tasks.
keywords:
- federated learning
license: GNU General Public License v3.0
GitHub Events
Total
- Issues event: 66
- Watch event: 139
- Delete event: 24
- Issue comment event: 206
- Push event: 127
- Pull request event: 78
- Pull request review event: 32
- Pull request review comment event: 22
- Fork event: 37
- Create event: 21
Last Year
- Issues event: 66
- Watch event: 139
- Delete event: 24
- Issue comment event: 206
- Push event: 127
- Pull request event: 78
- Pull request review event: 32
- Pull request review comment event: 22
- Fork event: 37
- Create event: 21
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 22
- Total pull requests: 19
- Average time to close issues: 11 days
- Average time to close pull requests: 9 days
- Total issue authors: 15
- Total pull request authors: 4
- Average comments per issue: 2.45
- Average comments per pull request: 0.32
- Merged pull requests: 13
- Bot issues: 0
- Bot pull requests: 11
Past Year
- Issues: 22
- Pull requests: 19
- Average time to close issues: 11 days
- Average time to close pull requests: 9 days
- Issue authors: 15
- Pull request authors: 4
- Average comments per issue: 2.45
- Average comments per pull request: 0.32
- Merged pull requests: 13
- Bot issues: 0
- Bot pull requests: 11
Top Authors
Issue Authors
- Halfway-Li (4)
- hmaster6 (4)
- yizhixiaofeiyang (4)
- shuguang99 (3)
- qfkk (3)
- wittenator (2)
- Arnou1 (2)
- SongCherish (2)
- tayyyab555 (2)
- aasaid0168 (2)
- minmincute912 (2)
- uglyghost (2)
- llsteven (2)
- YXTong17 (2)
- Masterchef2000 (2)
Pull Request Authors
- dependabot[bot] (30)
- KarhouTam (20)
- wittenator (4)
- birnbaum (2)
- KevinHaoo (2)
- zsl503 (1)
- yizhixiaofeiyang (1)
- dennis-grinwald (1)
- neko941 (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- Pillow *
- numpy *
- pandas *
- path *
- rich *
- scipy *
- torch *
- torchvision *
- tqdm *
- visdom *
- ${IMAGE_SOURCE} latest build
- certifi 2023.5.7
- charset-normalizer 3.1.0
- contourpy 1.0.7
- cycler 0.11.0
- faiss-cpu 1.7.4
- fonttools 4.40.0
- idna 3.4
- joblib 1.2.0
- jsonpatch 1.32
- jsonpointer 2.3
- kiwisolver 1.4.4
- markdown-it-py 2.2.0
- matplotlib 3.7.1
- mdurl 0.1.2
- networkx 3.1
- numpy 1.24.3
- nvidia-cublas-cu11 11.10.3.66
- nvidia-cuda-nvrtc-cu11 11.7.99
- nvidia-cuda-runtime-cu11 11.7.99
- nvidia-cudnn-cu11 8.5.0.96
- packaging 23.1
- pandas 2.0.1
- pillow 9.4.0
- pygments 2.15.1
- pynvml 11.5.0
- pyparsing 3.0.9
- python-dateutil 2.8.2
- pytz 2023.3
- requests 2.31.0
- rich 13.3.5
- scikit-learn 1.2.2
- scipy 1.10.1
- setuptools 67.8.0
- six 1.16.0
- threadpoolctl 3.1.0
- torch 1.13.1
- torchaudio 0.13.1
- torchvision 0.14.1
- tornado 6.3.2
- typing-extensions 4.6.3
- tzdata 2023.3
- urllib3 2.0.3
- visdom 0.2.4
- websocket-client 1.5.3
- wheel 0.40.0
- Pillow 9.4.0
- faiss-cpu 1.7.4
- matplotlib 3.7.1
- numpy 1.24.3
- pandas 2.0.1
- pynvml 11.5.0
- python >=3.10, <3.12
- rich 13.3.5
- scikit-learn 1.2.2
- scipy 1.10.1
- torch 1.13.1
- torchaudio 0.13.1
- torchvision 0.14.1
- visdom 0.2.4
- actions/checkout v3 composite
- docker/build-push-action 0565240e2d4ab88bba5387d719585280857ece09 composite
- docker/login-action 343f7c4344506bcbf9b4de18042ae17996df046d composite
- docker/metadata-action 96383f45573cb7f253c731d3b3ab81c87ef81934 composite
- docker/setup-buildx-action f95db51fddba0c2d1ec667646a06c2ce06100226 composite
- build ==1.0.3
- cachecontrol ==0.13.1
- certifi ==2023.11.17
- cffi ==1.16.0
- charset-normalizer ==3.3.2
- cleo ==2.1.0
- contourpy ==1.2.0
- crashtest ==0.4.1
- cryptography ==41.0.7
- cycler ==0.12.1
- distlib ==0.3.8
- dulwich ==0.21.7
- faiss-cpu ==1.7.4
- fastjsonschema ==2.19.1
- filelock ==3.13.1
- fonttools ==4.47.2
- fsspec ==2023.12.2
- idna ==3.6
- importlib-metadata ==7.0.1
- installer ==0.7.0
- jaraco-classes ==3.3.0
- jeepney ==0.8.0
- jinja2 ==3.1.3
- joblib ==1.3.2
- jsonpatch ==1.33
- jsonpointer ==2.4
- keyring ==24.3.0
- kiwisolver ==1.4.5
- markdown-it-py ==3.0.0
- markupsafe ==2.1.3
- matplotlib ==3.8.2
- mdurl ==0.1.2
- more-itertools ==10.2.0
- mpmath ==1.3.0
- msgpack ==1.0.7
- networkx ==3.2.1
- numpy ==1.26.3
- nvidia-cublas-cu12 ==12.1.3.1
- nvidia-cuda-cupti-cu12 ==12.1.105
- nvidia-cuda-nvrtc-cu12 ==12.1.105
- nvidia-cuda-runtime-cu12 ==12.1.105
- nvidia-cudnn-cu12 ==8.9.2.26
- nvidia-cufft-cu12 ==11.0.2.54
- nvidia-curand-cu12 ==10.3.2.106
- nvidia-cusolver-cu12 ==11.4.5.107
- nvidia-cusparse-cu12 ==12.1.0.106
- nvidia-nccl-cu12 ==2.18.1
- nvidia-nvjitlink-cu12 ==12.3.101
- nvidia-nvtx-cu12 ==12.1.105
- packaging ==23.2
- pandas ==2.1.4
- pexpect ==4.9.0
- pillow ==10.2.0
- pkginfo ==1.9.6
- platformdirs ==3.11.0
- poetry ==1.7.1
- poetry-core ==1.8.1
- poetry-plugin-export ==1.6.0
- ptyprocess ==0.7.0
- pycparser ==2.21
- pygments ==2.17.2
- pynvml ==11.5.0
- pyparsing ==3.1.1
- pyproject-hooks ==1.0.0
- python-dateutil ==2.8.2
- pytz ==2023.3.post1
- pyyaml ==6.0.1
- rapidfuzz ==3.6.1
- requests ==2.31.0
- requests-toolbelt ==1.0.0
- rich ==13.7.0
- scikit-learn ==1.4.0
- scipy ==1.11.4
- secretstorage ==3.3.3
- shellingham ==1.5.4
- six ==1.16.0
- sympy ==1.12
- threadpoolctl ==3.2.0
- tomlkit ==0.12.3
- torch ==2.1.2
- torchaudio ==2.1.2
- torchvision ==0.16.2
- tornado ==6.4
- triton ==2.1.0
- trove-classifiers ==2024.1.8
- typing-extensions ==4.9.0
- tzdata ==2023.4
- urllib3 ==2.1.0
- virtualenv ==20.25.0
- visdom ==0.2.4
- websocket-client ==1.7.0
- zipp ==3.17.0