https://github.com/cbg-ethz/bmi
Mutual information estimators and benchmark
Science Score: 33.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org, ieee.org
- ✓ Committers with academic emails: 1 of 2 committers (50.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.3%) to scientific vocabulary
Keywords
Repository
Mutual information estimators and benchmark
Basic Info
- Host: GitHub
- Owner: cbg-ethz
- License: MIT
- Language: Python
- Default Branch: main
- Homepage: https://cbg-ethz.github.io/bmi/
- Size: 1.09 MB
Statistics
- Stars: 50
- Watchers: 4
- Forks: 6
- Open Issues: 10
- Releases: 0
Topics
Metadata Files
README.md
Benchmarking Mutual Information
BMI is a Python package for estimating mutual information between continuous random variables and for testing new estimators.
- Documentation: https://cbg-ethz.github.io/bmi/
- Source code: https://github.com/cbg-ethz/bmi
- Bug reports: https://github.com/cbg-ethz/bmi/issues
- PyPI package: https://pypi.org/project/benchmark-mi
Getting started
While we recommend reading the documentation to learn about the full package capabilities, below we present its main features. (Note that BMI can also be used to test non-Python mutual information estimators.)
You can install the package using:
```bash
$ pip install benchmark-mi
```
Alternatively, you can install the development version from source:
```bash
$ pip install "bmi @ git+https://github.com/cbg-ethz/bmi"
```
Note: BMI uses JAX and by default installs the CPU version of it. If you have a device supporting CUDA, you can install the CUDA version of JAX.
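Following the JAX installation guide, the CUDA-enabled wheels can be installed on Linux roughly as below (the `cuda12` extra is an assumption; pick the extra matching your CUDA version):

```shell
# Install JAX with NVIDIA CUDA 12 support (Linux only); see the JAX
# installation guide for the extra matching your local CUDA version.
$ pip install -U "jax[cuda12]"
```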
Now let's take one of the predefined distributions included in the benchmark (named "tasks") and sample 1,000 data points. Then, we will run two estimators on this task.
```python
import bmi

task = bmi.benchmark.BENCHMARK_TASKS['1v1-normal-0.75']
print(f"Task {task.name} with dimensions {task.dim_x} and {task.dim_y}")
print(f"Ground truth mutual information: {task.mutual_information:.2f}")

X, Y = task.sample(1000, seed=42)

cca = bmi.estimators.CCAMutualInformationEstimator()
print(f"Estimate by CCA: {cca.estimate(X, Y):.2f}")

ksg = bmi.estimators.KSGEnsembleFirstEstimator(neighborhoods=(5,))
print(f"Estimate by KSG: {ksg.estimate(X, Y):.2f}")
```
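As a sanity check on the printed ground truth: assuming the `0.75` in the task name denotes the correlation coefficient ρ of a bivariate normal (an assumption based on the naming convention), the mutual information has the closed form I(X; Y) = -½ log(1 - ρ²):

```python
import math

# Mutual information of a bivariate normal with correlation rho,
# in nats: I(X; Y) = -1/2 * log(1 - rho^2).
def gaussian_mi(rho: float) -> float:
    return -0.5 * math.log(1.0 - rho**2)

print(f"{gaussian_mi(0.75):.2f}")  # roughly 0.41 nats
```

Both estimates above should land close to this value on 1,000 samples.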
Evaluating a new estimator
The above code snippet is convenient for estimating mutual information on a given data set or for developing a new mutual information estimator.
For extensive benchmarking, however, it is easier to use one of the benchmark suites available in the workflows/benchmark/ subdirectory.
For example, you can install Snakemake and run a small benchmark suite on several estimators using:
```bash
$ snakemake -c4 -s workflows/benchmark/demo/run.smk
```
In about a minute it should generate minibenchmark results in the generated/benchmark/demo directory. Note that the configuration file, workflows/benchmark/demo/config.py, explicitly defines the estimators and tasks used, as well as the number of samples.
Hence, it is easy to benchmark a custom estimator by importing it and including it in the configuration dictionary. More information is available here, where we cover evaluating new Python as well as non-Python estimators.
Similarly, it is easy to change the number of samples or adjust the tasks included in the benchmark. We defined several benchmark suites with shared structure.
List of implemented estimators
(Your estimator could be here too! Please reach out to us if you would like to contribute.)
- The neighborhood-based KSG estimator proposed in Estimating Mutual Information by Kraskov et al. (2003).
- Donsker-Varadhan and MINE estimators proposed in MINE: Mutual Information Neural Estimation by Belghazi et al. (2018).
- InfoNCE estimator proposed in Representation Learning with Contrastive Predictive Coding by Oord et al. (2018).
- NWJ estimator proposed in Estimating divergence functionals and the likelihood ratio by convex risk minimization by Nguyen et al. (2008).
- Estimator based on canonical correlation analysis described in Feature discovery under contextual supervision using mutual information by Kay (1992) and in Some data analyses using mutual information by Brillinger (2004).
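To illustrate the idea behind the CCA-based estimator: for jointly Gaussian variables the mutual information depends only on the canonical correlations ρᵢ, via I(X; Y) = -½ Σᵢ log(1 - ρᵢ²). A minimal sketch for the one-dimensional case (not the package's implementation), where the single canonical correlation is just the Pearson correlation:

```python
import numpy as np

def cca_mi_1d(x, y):
    """Gaussian/CCA mutual information estimate for 1D variables.

    In 1D the only canonical correlation is the Pearson correlation rho,
    and the Gaussian mutual information is -1/2 * log(1 - rho^2).
    """
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log1p(-rho**2)

# Sample from a bivariate normal with correlation 0.75.
rng = np.random.default_rng(0)
cov = [[1.0, 0.75], [0.75, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
print(f"CCA estimate: {cca_mi_1d(x, y):.2f}")  # close to the true ~0.41 nats
```

The estimator is exact for Gaussian data but only a lower-bound-style approximation otherwise, which is precisely what makes it a useful baseline in the benchmark.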
References
✨ New! ✨ On the properties and estimation of pointwise mutual information profiles
In this manuscript we discuss the pointwise mutual information profile, an invariant which can be used to diagnose limitations of the previous mutual information benchmark, and a flexible distribution family of Bend and Mix Models. These distributions can be used to create more expressive benchmark tasks and provide model-based Bayesian estimates of mutual information.
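To give a flavour of the profile: the pointwise mutual information is PMI(x, y) = log p(x, y) - log p(x) - log p(y), its distribution under the joint p(x, y) is the profile, and its mean is the mutual information. A sketch for a bivariate normal (not the Bend and Mix Models machinery from the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.75
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

# Sample from the joint and evaluate the pointwise mutual information
# PMI(x, y) = log p(x, y) - log p(x) - log p(y) at every sample.
rng = np.random.default_rng(0)
xy = joint.rvs(size=100_000, random_state=rng)
pmi = joint.logpdf(xy) - norm.logpdf(xy[:, 0]) - norm.logpdf(xy[:, 1])

# The mean of the profile is the mutual information,
# here -1/2 * log(1 - rho^2), about 0.41 nats.
print(f"Mean of PMI profile: {pmi.mean():.2f}")
```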
Workflows:
- To run the updated version of the benchmark, using Bend and Mix Models, see workflows/benchmark/v2.
- To reproduce the experimental results from the manuscript, see workflows/projects/Mixtures.
```bibtex
@article{pmi-profiles-2025,
  title={On the Properties and Estimation of Pointwise Mutual Information Profiles},
  author={Czy{\.z}, Pawe{\l} and Grabowski, Frederic and Vogt, Julia and Beerenwinkel, Niko and Marx, Alexander},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=LdflD41Gn8},
}
```
Beyond normal: On the evaluation of the mutual information estimators
In this manuscript we discuss a benchmark for mutual information estimators.
Workflows:
- To run the benchmark, see workflows/benchmark/v1.
- To reproduce the experimental results from the manuscript, see workflows/projects/Beyond_Normal.
```bibtex
@inproceedings{beyond-normal-2023,
  title = {Beyond Normal: On the Evaluation of Mutual Information Estimators},
  author = {Czy\.{z}, Pawe{\l} and Grabowski, Frederic and Vogt, Julia and Beerenwinkel, Niko and Marx, Alexander},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {A. Oh and T. Neumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
  pages = {16957--16990},
  publisher = {Curran Associates, Inc.},
  url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/36b80eae70ff629d667f210e13497edf-Paper-Conference.pdf},
  volume = {36},
  year = {2023}
}
```
Owner
- Name: Computational Biology Group (CBG)
- Login: cbg-ethz
- Kind: organization
- Location: Basel, Switzerland
- Website: https://www.bsse.ethz.ch/cbg
- Twitter: cbg_ethz
- Repositories: 91
- Profile: https://github.com/cbg-ethz
Beerenwinkel Lab at ETH Zurich
GitHub Events
Total
- Issues event: 2
- Watch event: 18
- Delete event: 6
- Issue comment event: 1
- Push event: 13
- Pull request event: 14
- Fork event: 2
- Create event: 8
Last Year
- Issues event: 2
- Watch event: 18
- Delete event: 6
- Issue comment event: 1
- Push event: 13
- Pull request event: 14
- Fork event: 2
- Create event: 8
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Paweł Czyż | p****z@a****h | 105 |
| Frederic Grabowski | g****c@g****m | 25 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 23
- Total pull requests: 91
- Average time to close issues: 8 months
- Average time to close pull requests: 3 days
- Total issue authors: 3
- Total pull request authors: 2
- Average comments per issue: 0.78
- Average comments per pull request: 0.57
- Merged pull requests: 88
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 12
- Average time to close issues: about 2 months
- Average time to close pull requests: about 1 hour
- Issue authors: 1
- Pull request authors: 1
- Average comments per issue: 1.0
- Average comments per pull request: 0.08
- Merged pull requests: 11
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- pawel-czyz (25)
- grfrederic (8)
- matthewdmanning (1)
- fengsxy (1)
Pull Request Authors
- pawel-czyz (91)
- grfrederic (23)
- fengsxy (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- black *
- flake8 *
- isort *
- pre-commit *
- pydata-sphinx-theme *
- pytest *
- pytest-cov *
- pytest-xdist *
- pytype *
- sphinx *
- actions/checkout v2 composite
- actions/setup-python v4 composite
- isort/isort-action master composite
- psf/black stable composite
- actions/cache v2 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- snok/install-poetry v1 composite
- equinox ^0.10.2
- jax ^0.4.8
- jaxlib ^0.4.7
- numpy ^1.24.2
- optax ^0.1.4
- pandas <2.0.0
- pydantic ^1.10.7
- python >=3.9,<3.11
- pyyaml ^6.0
- scikit-learn ^1.2.2
- scipy ^1.10.1
- tensorflow-probability ^0.20.1
- tqdm ^4.64.1