https://github.com/alfa-group/bayesopt-nash-eq

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.0%) to scientific vocabulary

Keywords

algorithm bayesian-optimization black-box-optimization game-theoretic-algorithms game-theory gp nash-equilibrium python robust-optimization
Last synced: 5 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: ALFA-group
  • Language: Jupyter Notebook
  • Default Branch: master
  • Size: 5.02 MB
Statistics
  • Stars: 8
  • Watchers: 5
  • Forks: 3
  • Open Issues: 0
  • Releases: 0
Topics
algorithm bayesian-optimization black-box-optimization game-theoretic-algorithms game-theory gp nash-equilibrium python robust-optimization
Created almost 8 years ago · Last pushed over 7 years ago

https://github.com/ALFA-group/bayesopt-nash-eq/blob/master/

# bayesopt-nash-eq

Code repository for [Approximating Nash Equilibria for Black-Box
Games: A Bayesian Optimization Approach](https://arxiv.org/pdf/1804.10586.pdf)


### Setup / Python environment

```
conda install nb_conda
conda env create --file environment.yml
```

This will create a conda environment called `ne`.

To activate this environment, execute:

```
source activate ne
```
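For orientation, the pip dependencies recorded for this repo are `cma`, `pydoe`, and `sklearn`, so the environment file has roughly the following shape. The channels, pins, and conda package list here are illustrative, not copied from the actual `environment.yml`:

```yaml
name: ne
channels:
  - defaults
dependencies:
  - python   # unpinned here; the anaconda2 path in the GPGame section suggests Python 2
  - numpy
  - jupyter
  - pip
  - pip:
      - cma
      - pydoe
      - sklearn
```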

### Running the demo

The `demo.ipynb` notebook demonstrates the algorithm alongside the other algorithms considered in the paper. It also demonstrates other utilities and plots available in the repo. Launch it with:

```
jupyter notebook
```

### Running Experiments

The script `toy_experiments.py` performs experiments on `SADDLE` and `MOP` problems. Experiments are configured via configuration files. `cd` to the repo directory and run:

```
export PYTHONPATH=.
python ne/experiments/toy_experiments.py -f ne/experiments/configs/saddle_config.yml
```

As the experiment runs, results of the different runs/algorithms/problems are stored in `ne/experiments/res` as `{experiment_name}_{alg_name}_{dimension}_{run_number}.json`. These files are useful for backup and monitoring. At the end of the experiment, a single file `{experiment_name}.json` is generated that concatenates all the `{experiment_name}_*.json` files.
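If an experiment is interrupted before the final file is written, the concatenation step can be reproduced by hand. A minimal sketch (the helper name and the assumption that each per-run file holds one JSON object are illustrative, not the repo's exact layout):

```python
import glob
import json
import os

def concat_results(res_dir, experiment_name):
    """Merge per-run result files into a single JSON file.

    Assumes each {experiment_name}_{alg}_{dim}_{run}.json file holds one
    JSON object; the merged file is a list of those objects. The exact
    layout toy_experiments.py uses may differ -- this is a sketch.
    """
    pattern = os.path.join(res_dir, '%s_*.json' % experiment_name)
    merged = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            merged.append(json.load(f))
    out_path = os.path.join(res_dir, '%s.json' % experiment_name)
    with open(out_path, 'w') as f:
        json.dump(merged, f)
    return out_path
```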

### Plotting Results

The `demo.ipynb` notebook demonstrates how the results can be plotted. Alternatively, a `json` file whose format matches the one produced by `toy_experiments.py` can be passed to the `plot_regret_trace` function in `ne/utils/plots.py`, as demonstrated in the `main` block of that script. The results of the `saddle_config.yml` experiment are stored in `ne/experiments/res/saddle_res.json`.

For the experiment defined above, `plots.py` is already set up to display its results:

```
python ne/utils/plots.py
```
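If you prefer not to go through `plots.py`, a regret trace of the same general shape can be drawn directly with matplotlib. The input layout below (algorithm name mapped to a list of per-iteration regret values) is an assumption for illustration, not the exact format the repo's result files use:

```python
import json
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt

def plot_regret(results, out_file='regret.png'):
    """Plot one regret-vs-iteration curve per algorithm.

    `results` maps algorithm name -> list of regret values; this layout
    is assumed for illustration and may differ from the repo's format.
    """
    fig, ax = plt.subplots()
    for alg, trace in sorted(results.items()):
        ax.plot(range(len(trace)), trace, label=alg)
    ax.set_xlabel('iteration')
    ax.set_ylabel('regret')
    ax.set_yscale('log')  # regret traces are usually easier to read on a log scale
    ax.legend()
    fig.savefig(out_file)
    return out_file
```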


### GPGame Interface

To install the `GPGame` R package from Python:

```
from rpy2.robjects.packages import importr
utils = importr('utils')
utils.install_packages('GPGame')
```

There might be some difficulty in installing the `GPGame` package and interfacing it with Python. Make sure you install the package (and all of its required dependencies) with `sudo`, then copy it into the environment's `R/library`:

```
sudo R
> install.packages("GPGame")
> quit()
sudo cp -a ~/R/x86_64-pc-linux-gnu-library/3.2/. ~/anaconda2/envs/ne/lib/R/library/
```

### Running Multithreaded Experiments

NumPy can introduce unintended multithreading through its BLAS backend (see https://stackoverflow.com/questions/19257070/unintented-multithreading-in-python-scikit-learn ).

Check which BLAS/LAPACK library your numpy is linked against and set its thread count accordingly, e.g.:

```
export OPENBLAS_NUM_THREADS=1
```
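The same pinning can be done from inside Python, provided the variables are set before numpy is imported, since the BLAS backend reads them at load time. The three variable names cover the common backends; which one actually applies depends on how your numpy was built:

```python
import os

# Must run before `import numpy`: the BLAS library reads these on load.
os.environ['OPENBLAS_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['OMP_NUM_THREADS'] = '1'

import numpy as np  # linear algebra calls now stay single-threaded
```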

### Citation

If you make use of this code and would like to cite it, please use the following:

```
@article{al2018approximating,
  title={Approximating Nash Equilibria for Black-Box Games: A Bayesian Optimization Approach},
  author={Al-Dujaili, Abdullah and Hemberg, Erik and O'Reilly, Una-May},
  journal={arXiv preprint arXiv:1804.10586},
  year={2018}
}
```

Owner

  • Name: Anyscale Learning For All (ALFA)
  • Login: ALFA-group
  • Kind: organization
  • Email: alfa-apply@csail.mit.edu
  • Location: Cambridge, MA, USA

Scalable machine learning technology, Adversarial AI, Evolutionary algorithms, and data science frameworks.


Dependencies

environment.yml pypi
  • cma *
  • pydoe *
  • sklearn *