feddistill

Code to reproduce the experiments of the ICLR25 paper "On the Byzantine-Resilience of Distillation-Based Federated Learning"

https://github.com/zib-iol/feddistill

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.6%) to scientific vocabulary

Keywords

byzantine deep-learning federated-learning knowledge-distillation neural-networks pytorch
Last synced: 6 months ago

Repository

Basic Info
Statistics
  • Stars: 3
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
byzantine deep-learning federated-learning knowledge-distillation neural-networks pytorch
Created about 2 years ago · Last pushed 11 months ago
Metadata Files
  • Readme (README.md)
  • Citation (citation.bib)

README.md

[ICLR25] On the Byzantine-Resilience of Distillation-Based Federated Learning

Authors: Christophe Roux, Max Zimmer, Sebastian Pokutta

This repository contains the code to reproduce the experiments from the ICLR25 paper "On the Byzantine-Resilience of Distillation-Based Federated Learning". The code is based on PyTorch 1.9 and the experiment-tracking platform Weights & Biases.

Structure and Usage

Structure

Experiments are started from the following file:

  • main.py: Starts experiments using the dictionary format of Weights & Biases.

The rest of the project is structured as follows:

  • byzantine: Contains the attacks and defenses used in the paper.
  • runners: Contains classes to control the training and collection of metrics.
  • models: Contains all model architectures used.
  • utilities.py: Contains useful auxiliary functions and classes.
  • config.py: Contains the configuration for the datasets used in the experiments.
  • public_config.py: Contains the configuration for the public datasets.
  • metrics.py: Contains the metrics used in the experiments.
  • strategies.py: Contains the different strategies used, such as FedAVG and FedDistill (see the sketch after this list).
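
The attacks, defenses, and strategies interact through the predictions that clients report on a public dataset: in distillation-based strategies such as FedDistill, clients exchange logits rather than model weights, which changes what a Byzantine client can manipulate. The following is a minimal sketch of that idea in PyTorch, with a simple sign-flipping attack and a coordinate-wise median as a robust defense; all names here are illustrative and do not reflect the repository's actual API.

```python
# Hypothetical sketch of distillation-based aggregation with a Byzantine
# attack and a robust defense. Function and variable names are illustrative.
import torch
import torch.nn.functional as F


def sign_flip_attack(honest_logits: torch.Tensor) -> torch.Tensor:
    """A simple Byzantine attack at the logit level: report negated
    honest logits to pull the aggregate away from the consensus."""
    return -honest_logits


def aggregate_logits(client_logits: torch.Tensor, robust: bool = True) -> torch.Tensor:
    """Combine per-client logits on the public dataset.

    client_logits has shape (num_clients, num_samples, num_classes).
    Plain averaging is vulnerable to even a single Byzantine client;
    the coordinate-wise median is a standard robust alternative.
    """
    if robust:
        return client_logits.median(dim=0).values
    return client_logits.mean(dim=0)


def distill_step(server_model, public_batch, target_logits, optimizer, T=2.0):
    """One knowledge-distillation step: fit the server model's softened
    predictions to the aggregated client logits on a public batch."""
    optimizer.zero_grad()
    student_logits = server_model(public_batch)
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(target_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    loss.backward()
    optimizer.step()
    return loss.item()
```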

Usage

Define the parameters in the defaults dictionary in main.py and run it with the --debug flag. Alternatively, configure a sweep in Weights & Biases and launch the runs from there (without the flag).
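
As an illustration, a hypothetical entry point following this pattern might look as below; the configuration keys and values are assumptions for the sake of the example, not the repository's actual schema.

```python
# Hypothetical sketch of a main.py entry point; configuration keys are
# illustrative placeholders.
import argparse

import wandb

defaults = dict(
    dataset="cifar10",          # private client data (placeholder)
    public_dataset="cifar100",  # public distillation data (placeholder)
    strategy="FedDistill",      # or "FedAVG"
    attack="none",              # an attack from byzantine/
    defense="median",           # a defense from byzantine/
    n_clients=10,
    rounds=100,
)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--debug", action="store_true",
                        help="run locally with the defaults above instead of via a sweep")
    args = parser.parse_args()

    # When launched by a W&B sweep agent, wandb.init overrides the defaults
    # with the sweep's parameters; with --debug, the defaults are used as-is.
    run = wandb.init(config=defaults, mode="offline" if args.debug else "online")
    config = wandb.config
```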

Citation

If you find the paper or the implementation useful for your own research, please consider citing:

@inproceedings{roux2025on,
  title={On the Byzantine-Resilience of Distillation-Based Federated Learning},
  author={Christophe Roux and Max Zimmer and Sebastian Pokutta},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=of6EuHT7de}
}

Owner

  • Name: IOL Lab
  • Login: ZIB-IOL
  • Kind: organization
  • Location: Germany

Working on optimization and learning at the intersection of mathematics and computer science

Citation (citation.bib)

@inproceedings{roux2025on,
title={On the Byzantine-Resilience of Distillation-Based Federated Learning},
author={Christophe Roux and Max Zimmer and Sebastian Pokutta},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=of6EuHT7de}
}

GitHub Events

Total
  • Watch event: 3
  • Push event: 2
  • Fork event: 1
Last Year
  • Watch event: 3
  • Push event: 2
  • Fork event: 1

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0