diffusion-dfl-implementation

My Bachelor's Degree Final Project

https://github.com/shiroi-max/diffusion-dfl-implementation

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.2%) to scientific vocabulary
Last synced: 6 months ago

Repository

My Bachelor's Degree Final Project

Basic Info
  • Host: GitHub
  • Owner: Shiroi-Max
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Size: 4.43 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 1 year ago · Last pushed 9 months ago
Metadata Files
Readme License Citation

README.md


Download TFG

Diffusion Models in Decentralized Federated Learning

This repository contains a modular simulation environment for training and evaluating Denoising Diffusion Probabilistic Models (DDPMs) in Decentralized Federated Learning (DFL) scenarios. The system was developed as part of a Bachelor's Thesis project focused on generative AI and distributed training.

🧠 Project Overview

Title: Implementation of Generative Diffusion Models in Decentralized Federated Learning
Author: Maxim Utica Babyak
Degree: Bachelor's in Computer Engineering
University: Universidad de Murcia – Facultad de Informática
Date: January 2025
Language: Spanish

This project explores how diffusion models can enhance the performance of decentralized federated learning systems, improving convergence, privacy, and robustness under non-IID data distributions.
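For reference, the forward (noising) process that a DDPM learns to invert is x_t = √ᾱ_t·x₀ + √(1−ᾱ_t)·ε, with ᾱ_t the cumulative product of (1−β_s). A dependency-free sketch of this step (hypothetical helper, not the repository's code, which works on image tensors):

```python
import math
import random

def forward_diffuse(x0, t, betas, rng=random.Random(0)):
    """Sample x_t ~ q(x_t | x_0) for a DDPM with variance schedule `betas`.

    `x0` is a flat list of floats; returns (x_t, eps), where eps is the
    Gaussian noise a DDPM would be trained to predict.
    """
    # alpha_bar_t = product of (1 - beta_s) for s = 1..t
    alpha_bar = 1.0
    for s in range(t):
        alpha_bar *= 1.0 - betas[s]
    a, b = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    eps = [rng.gauss(0.0, 1.0) for _ in x0]
    xt = [a * x + b * e for x, e in zip(x0, eps)]
    return xt, eps

# Linear schedule from 1e-4 to 2e-2 over 1000 steps, as is common for DDPMs.
betas = [0.0001 + i * (0.02 - 0.0001) / 999 for i in range(1000)]
xt, eps = forward_diffuse([0.5, -0.3, 1.0], t=500, betas=betas)
```

At t = 0 the sample equals x₀ exactly; as t grows, x_t approaches pure Gaussian noise, which is what makes the reverse (denoising) model generative.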

You can read the full thesis here:
📘 TFGUticaMaxim.pdf

📌 Key Features

  • ✅ Simulation of decentralized topologies (e.g., ring, custom)
  • 🧩 Modular codebase with YAML-configurable experiments
  • ⟳ Decentralized training loop with model aggregation
  • 🧠 Conditional DDPM generation with U-Net backbone
  • 🔒 Label-sharing strategy for non-IID mitigation
  • 📊 Built-in evaluation pipeline using auxiliary classifiers
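The decentralized training loop with model aggregation can be pictured as each node averaging its parameters with its ring neighbours after every round. A minimal pure-Python sketch; the function names and the plain-average rule are assumptions for illustration, not the repository's exact implementation (which averages PyTorch tensors):

```python
def ring_neighbours(i, n):
    """Indices of node i's neighbours on a ring of n nodes."""
    return [(i - 1) % n, (i + 1) % n]

def aggregate(models, i, n):
    """Average node i's parameters with its ring neighbours'.

    `models` maps node index -> {param_name: value}; real code would
    average weight tensors, this sketch averages scalars.
    """
    group = [i] + ring_neighbours(i, n)
    return {
        name: sum(models[j][name] for j in group) / len(group)
        for name in models[i]
    }

models = {0: {"w": 1.0}, 1: {"w": 2.0}, 2: {"w": 6.0}, 3: {"w": 3.0}}
new0 = aggregate(models, 0, 4)  # averages nodes 0, 3, and 1
```

Because each node only talks to its neighbours, no central server is needed; repeated rounds of local training plus neighbour averaging diffuse information around the ring.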

📂 Project Structure

```
diffusion-dfl-implementation/
├── configs/
│   ├── ring/                    # Training & testing configs for ring topology
│   └── custom/                  # Training & testing configs for custom topology
│
├── src/
│   ├── core/
│   │   ├── cli.py               # CLI argument parsing
│   │   ├── config.py            # Config dataclasses and YAML loaders
│   │   ├── training.py          # Training logic
│   │   ├── testing.py           # Evaluation logic
│   │   ├── pipeline.py          # DDPM sampling pipeline
│   │   ├── launch.py            # Node orchestration and data loading
│   │   └── filesystem.py        # Utility functions for I/O, timers, etc.
│   └── data/
│       ├── classifier.py        # CNN classifier for evaluation
│       └── filtered_dataset.py  # Dataset wrapper with label/threshold filtering
│
├── laboratory/                  # Workspace for runtime data
│   ├── datasets/                # Raw data downloaded via torchvision
│   ├── classifiers/             # Saved classifier weights
│   ├── topologies/              # neighbours.yaml and labels-*.yaml per topology
│   ├── scenarios/               # Training outputs (per run)
│   └── evaluations/             # Evaluation outputs (per model)
│
├── run.py                       # Script to run training or testing
├── pyproject.toml               # 📦 Dependency and build configuration
├── LICENSE                      # License (MIT)
├── README.md                    # This file
└── .gitignore                   # Git ignore rules
```
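As an illustration of the per-topology configuration, a `neighbours.yaml` for a three-node ring might look like the following. This is a hypothetical sketch; the actual schema used under `laboratory/topologies/` may differ:

```yaml
# laboratory/topologies/ring/neighbours.yaml — hypothetical layout
# Each key is a node index; the list names the nodes it exchanges models with.
0: [2, 1]
1: [0, 2]
2: [1, 0]
```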

📦 Requirements

  • Python 3.12
  • A CUDA 12.1-compatible NVIDIA GPU (e.g., RTX 30xx or 40xx series)
  • PyTorch with GPU support (cu121)

📥 Installation

This project declares its dependencies and build configuration in a PEP 621 pyproject.toml.

  1. Install PyTorch with CUDA 12.1 support:

```bash
pip install torch torchvision torchmetrics -i https://download.pytorch.org/whl/cu121
```

  2. Install the project dependencies:

```bash
pip install .
```

▶️ Running Experiments

All experiments are launched from the run.py script, specifying a mode (train or test), a dataset, and a topology:

🔹 Training

```bash
python run.py train mnist ring
```

🔹 Evaluation

```bash
python run.py test emnist ring --split letters
python -m src.entrypoint test --config configs/custom/test_emnist_letters.yaml
```

📈 Results

Empirical results demonstrate strong convergence and generalization under decentralized training:

| Dataset        | Accuracy (Best) |
| -------------- | --------------- |
| MNIST          | 98.6%           |
| FashionMNIST   | 91.79%          |
| EMNIST Letters | 90.49%          |

These results were achieved on 10-node topologies with label-sharing enabled.

📚 Academic Reference

If you use this project in academic work, please cite this repository. A BibTeX entry will be available upon request.

🧪 Future Work

  • Secure aggregation protocols
  • Asynchronous and hierarchical topologies
  • Integration of differential privacy
  • Real-time federated learning agents

🧾 License

This project is licensed under the MIT License.

Owner

  • Login: Shiroi-Max
  • Kind: user

Citation (CITATION.cff)

```yaml
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: Diffusion DFL Implementation
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Maxim Utica Babyak
    email: maxim.ub.work@gmail.com
repository-code: 'https://github.com/Shiroi-Max/diffusion-dfl-implementation'
abstract: Implementation for a diffuser in a DFL scenario.
keywords:
  - Diffuser
  - AI
  - Artificial Intelligence
  - GenAI
  - Generative AI
  - DFL
  - Decentralized Federated Learning
license: MIT
commit: a9bfa35856664f4fed61d2745fe325a21f9e75b0
version: 1.0.0
date-released: '2025-01-23'
```

GitHub Events

Total
  • Push event: 11
  • Create event: 2
Last Year
  • Push event: 11
  • Create event: 2