Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.4%) to scientific vocabulary

Repository

Basic Info
  • Host: GitHub
  • Owner: MaromSv
  • Language: Python
  • Default Branch: main
  • Size: 1.53 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme Citation

README.md

Poison Playground

Table of contents

  1. About the project
  2. Getting started
  3. Poison Playground's pipeline
  4. Attack & Defense sources
  5. Adding new attacks and defenses
  6. Contributing
  7. Contact

About the project

Poison Playground is a benchmarking tool for comparing how vertical and horizontal federated learning models handle different attacks and defenses. The base tool ships with several poisoning attacks and defenses.
Poison Playground uses the MNIST digits dataset, loaded via TensorFlow.

Poison Playground was created by researchers, for researchers. It is meant to help make comparing the effects of various attacks and defenses on horizontal and vertical federated learning much easier. It is intended to speed up research by providing a useful metric collection and comparison tool.

Getting started

To ensure you have all of the necessary packages, first install the dependencies listed in requirements.txt:

```sh
pip install -r requirements.txt
```

Once the packages are installed, simply run main.py. This opens the GUI, which you can use to orchestrate your experiments.

Poison Playground's pipeline

  1. Run main.py

  2. In the GUI, fill in the number of scenarios you want to run, then choose the specific attack, defense, and corresponding parameters for each. Then press the "Run Simulation" button.

  3. Once the "Run Simulation" button has been pressed, the described experiments will run. The first step in running an instance is to partition the data, either horizontally or vertically. This is done in the dataPartitioning.py file.

  4. Then, the instances are run one by one. To run an instance, the parameters entered for it are passed to simulationHorizontal.py or simulationVertical.py, depending on which federated learning model was chosen.

  5. Finally, once all of the instances are done, the GUI displays a new window with a confusion matrix for every scenario, as well as a bar chart comparing all scenarios.
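The partitioning step above (handled by dataPartitioning.py in the repository) can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function names `partition_horizontal` and `partition_vertical` are hypothetical. Horizontal partitioning gives each client a subset of the samples with all features; vertical partitioning gives each client all samples but only a subset of the features.

```python
import numpy as np

def partition_horizontal(X, y, num_clients):
    """Split samples (rows) evenly across clients; each client
    holds all features for its share of the data."""
    idx = np.array_split(np.arange(len(X)), num_clients)
    return [(X[i], y[i]) for i in idx]

def partition_vertical(X, y, num_clients):
    """Split features (columns) across clients; each client holds
    every sample but only a subset of the features. In vertical FL
    the labels typically stay with one coordinating party."""
    cols = np.array_split(np.arange(X.shape[1]), num_clients)
    return [(X[:, c], y) for c in cols]
```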

Attack & Defense sources

| Attack/Defense | Name | Paper source |
|----------------|------|--------------|
| Attack | Label flipping | Data Poisoning Attacks Against Federated Learning Systems |
| Attack | Model poisoning | MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients |
| Defense | Fools gold | Mitigating Sybils in Federated Learning Poisoning |
| Defense | Two norm | Data Poisoning Attacks Against Federated Learning Systems |
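To give a sense of the first attack in the table, here is a minimal sketch of label flipping: a malicious client relabels a fraction of one class as another before local training. The function name and parameters are illustrative assumptions, not Poison Playground's actual implementation.

```python
import numpy as np

def flip_labels(y, source, target, flip_fraction, rng=None):
    """Label-flipping poisoning sketch: relabel `flip_fraction`
    of the samples whose label is `source` as `target`."""
    rng = rng or np.random.default_rng(0)
    y = y.copy()
    src_idx = np.where(y == source)[0]
    n_flip = int(len(src_idx) * flip_fraction)
    chosen = rng.choice(src_idx, size=n_flip, replace=False)
    y[chosen] = target
    return y
```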

Adding new attacks & defenses

The below three steps describe how you can add your own attack or defense to Poison Playground:

1) Create the attack's/defense's file: The first step is to create your new file in either the attacks or defenses folder, then write your attack/defense code. It is recommended to make your attack/defense executable via a single function call, and to have it obtain all needed parameters from the simulation file.

2) Add the attack/defense to the vertical and horizontal simulation code: To execute your attack/defense, you need to add it to the simulation files. Depending on how your code works, the location of execution will vary; some examples can be seen with our implemented attacks and defenses. The name of the attack/defense is passed as a parameter from the GUI, so ensure that you add an if statement to check for it and execute the attack/defense.

3) Add the attack/defense to the GUI: Finally, add the name of the attack/defense to createscenarioform() as an option in the attacks/defenses dialogue. Also add it to updateattackconfig() or updatedefenceconfig().
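The dispatch described in step 2 can be sketched as below. All names here (`apply_attack`, `my_new_attack`, the `"none"` sentinel) are illustrative assumptions, not the repository's actual identifiers; the point is the if-statement pattern that matches the attack name passed from the GUI and calls the single entry-point function from your new file.

```python
def my_new_attack(labels, scale=1):
    # Hypothetical placeholder for the single entry-point function
    # you would define in your new file under the attacks folder.
    return [label * scale for label in labels]

def apply_attack(attack_name, labels, params):
    """Sketch of the simulation-file dispatch: the attack name chosen
    in the GUI is passed in, and an if statement selects the attack."""
    if attack_name == "none":
        return labels
    if attack_name == "my_new_attack":
        # The attack obtains everything it needs from the simulation
        # file, as step 1 above recommends.
        return my_new_attack(labels, **params)
    raise ValueError(f"Unknown attack: {attack_name}")
```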

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Contact

Yusef Ahmed - LinkedIn - yusefahmed0403@gmail.com

Marom Sverdlov - LinkedIn - maroms.private@gmail.com

Owner

  • Login: MaromSv
  • Kind: user

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: Poison Playground
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Yusef
    family-names: Ahmed
    email: y.w.ahmed@student.tue.nl
  - given-names: Marom
    family-names: Sverdlov
    email: m.sverdlov@student.tue.nl
repository-code: 'https://github.com/MaromSv/Poison-Playground'
