marich
Marich is a model-agnostic model extraction algorithm. It uses public data to query a private model, aggregates the predicted labels, and constructs a distributionally equivalent, max-information-leaking extracted model.
Science Score: 28.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (8.9%) to scientific vocabulary
Basic Info
Statistics
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data
Marich aims to extract models using public data with three motives:
1. Distributional equivalence of the extracted prediction distribution
2. Max-information extraction of the target model
3. Query efficiency
To achieve these goals, Marich uses an active learning algorithm to query and extract the target models $f_T$. We assume that only the labels (not the probabilities) are available from the target models. The extracted models $f_E$ are trained on the selected $x$'s and the corresponding $\hat{y}$'s obtained from the target models.
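For intuition, the sketch below shows the label-only training loop this setup implies. The function name `extract` and the plain pass over the public data are hypothetical, and the active query-selection strategy that is the core of Marich is deliberately omitted; see the `*_marich` folders for the actual implementation.

```python
# Minimal sketch of label-only model extraction (hypothetical names).
# Assumes both models take a batch of inputs and return logits.
import torch
import torch.nn.functional as F

def extract(f_T, f_E, public_loader, optimizer, rounds=10):
    """Train the extracted model f_E on labels predicted by the target f_T."""
    for _ in range(rounds):
        for x, _ in public_loader:            # public data; true labels unused
            with torch.no_grad():
                y_hat = f_T(x).argmax(dim=1)  # only labels, not probabilities
            optimizer.zero_grad()
            loss = F.cross_entropy(f_E(x), y_hat)
            loss.backward()
            optimizer.step()
    return f_E
```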
The attack framework is as given below:

[Figure: overview of the Marich attack framework]
Resources
Paper: https://arxiv.org/abs/2302.08466
Talk at PPAI Workshop at AAAI, 2023: Slides
Summary of Results
Accuracy of Extracted Model
The accuracies of competing active learning methods are shown alongside Marich's for comparison (see the accuracy curve figures in the repository). The curves correspond, in order, to:
1. A logistic regression model trained on MNIST, extracted using another logistic regression model with EMNIST queries.
2. A logistic regression model trained on MNIST, extracted using another logistic regression model with CIFAR10 queries.
3. A BERT model trained on the BBC News dataset, extracted using another BERT with AG News queries.
4. A ResNet trained on CIFAR10, extracted using a CNN with ImageNet queries.
Distributional Equivalence of Prediction Distributions
Next, we present the KL divergence between the outputs of the extracted models and the target models, to compare the distributional equivalence of the models extracted by different algorithms. This is computed on a separate subset of the training-domain data.
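As a hedged sketch, such a comparison can be estimated as follows; the function name and the use of PyTorch's `F.kl_div` are illustrative assumptions, not the repository's exact evaluation code.

```python
# Sketch: estimating the average KL divergence between the target and
# extracted prediction distributions on a held-out subset.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_prediction_kl(f_T, f_E, loader):
    """Average KL(p_T || p_E) over a held-out subset of the training domain."""
    total, n = 0.0, 0
    for x, _ in loader:
        p_T = F.softmax(f_T(x), dim=1)
        log_p_E = F.log_softmax(f_E(x), dim=1)
        # F.kl_div expects log-probabilities as input and probabilities as target.
        total += F.kl_div(log_p_E, p_T, reduction="batchmean").item() * x.size(0)
        n += x.size(0)
    return total / n
```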
Membership Inference with Extracted Models
The extraction setups are in the same order as for the accuracy results above. The table below shows a portion of the results obtained during our experiments:

[Table: membership inference results with the extracted models]
How to Run Marich?
There are four folders:
- `bert_al`: K-Center, Least Confidence, Margin Sampling, Entropy Sampling, and Random Sampling code for the BERT experiments
- `lrcnnres_al`: K-Center, Least Confidence, Margin Sampling, Entropy Sampling, and Random Sampling code for the Logistic Regression, CNN, and ResNet experiments
- `bert_marich`: Marich code for the BERT experiments
- `lrcnnres_marich`: Marich code for the Logistic Regression, CNN, and ResNet experiments
The Jupyter notebooks provided in these folders serve as demos for users.
To experiment with new data, one needs to:
1. In the `data.py` file, add a compatible `getDATA` function, following the structure of the existing `getDATA` functions.
2. In the `handlers.py` file, add a compatible `Handler` class, following the structure of the existing `Handler` classes.
3. For Marich, the new data input is to be given following the Jupyter notebooks.
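Below is a minimal sketch of steps 1 and 2, assuming the DeepAL-style conventions this repository builds on; the names `get_MyData` and `MyDataHandler` are hypothetical, so follow the signatures of the existing functions and classes in `data.py` and `handlers.py`.

```python
# Hedged sketch of adding a new dataset (exact signatures may differ;
# check the existing getDATA functions and Handler classes first).
import torch
from torch.utils.data import Dataset

def get_MyData():
    # Load and prepare your tensors here; shapes below are placeholders.
    X_train = torch.randn(1000, 1, 28, 28)
    Y_train = torch.randint(0, 10, (1000,))
    X_test = torch.randn(200, 1, 28, 28)
    Y_test = torch.randint(0, 10, (200,))
    return X_train, Y_train, X_test, Y_test

class MyDataHandler(Dataset):
    def __init__(self, X, Y, transform=None):
        self.X, self.Y, self.transform = X, Y, transform

    def __getitem__(self, index):
        x, y = self.X[index], self.Y[index]
        if self.transform is not None:
            x = self.transform(x)
        return x, y, index  # DeepAL-style handlers also return the index

    def __len__(self):
        return len(self.X)
```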
To experiment with new models, one needs to:
1. Add the corresponding model to the `nets.py` file.
2. For the active learning algorithms other than Marich, modify the model so that its `forward` method returns both the output and a preferred embedding, and add a method that returns the embedding dimension.
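A minimal sketch of such a model is given below; the class name `MyNet` and its layer sizes are hypothetical, and only the `forward` contract (returning both output and embedding) and `get_embedding_dim` matter for the baselines.

```python
# Hedged sketch of a model compatible with the non-Marich active
# learning baselines (hypothetical class name and layer sizes).
import torch.nn as nn
import torch.nn.functional as F

class MyNet(nn.Module):
    def __init__(self, dim=784, embed_dim=50, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(dim, embed_dim)
        self.fc2 = nn.Linear(embed_dim, n_classes)
        self.embed_dim = embed_dim

    def forward(self, x):
        e = F.relu(self.fc1(x.flatten(1)))  # preferred embedding
        out = self.fc2(e)
        return out, e                       # output and embedding

    def get_embedding_dim(self):
        return self.embed_dim
```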
For the K-Center, Least Confidence, Margin Sampling, Entropy Sampling, and Random Sampling experiments, we have modified and used code from Huang, Kuan-Hao. "DeepAL: Deep Active Learning in Python." arXiv preprint arXiv:2111.15258, 2021. (Link: https://arxiv.org/pdf/2111.15258.pdf)
Reference
If you use or study any part of this repository, please cite it as:
@article{karmakar2023marich,
  title={Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data},
  author={Karmakar, Pratik and Basu, Debabrota},
  journal={arXiv preprint arXiv:2302.08466},
  year={2023}
}
Owner
- Name: Debabrota Basu
- Login: Debabrota-Basu
- Kind: user
- Location: Paris
- Company: Inria
- Website: https://debabrota-basu.github.io/
- Repositories: 5
- Profile: https://github.com/Debabrota-Basu
Faculty, Inria. Statistics, machine learning, optimization & differential privacy.
Citation (citation.bib)
@article{karmakar23marich,
  title = {Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data},
  author = {Karmakar, Pratik and Basu, Debabrota},
  publisher = {arXiv},
  year = {2023},
  url = {https://arxiv.org/abs/2302.08466}
}
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1