https://github.com/acerbilab/amortized-conditioning-engine
Amortized Probabilistic Conditioning for Optimization, Simulation and Inference (Chang et al., AISTATS 2025)
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (16.5%) to scientific vocabulary
Keywords
Repository
Amortized Probabilistic Conditioning for Optimization, Simulation and Inference (Chang et al., AISTATS 2025)
Basic Info
- Host: GitHub
- Owner: acerbilab
- License: apache-2.0
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://acerbilab.github.io/amortized-conditioning-engine/
- Size: 285 MB
Statistics
- Stars: 19
- Watchers: 3
- Forks: 1
- Open Issues: 1
- Releases: 0
Topics
Metadata Files
README.md
Amortized Probabilistic Conditioning for Optimization, Simulation and Inference
This repository provides the implementation and code used for the AISTATS 2025 article Amortized Probabilistic Conditioning for Optimization, Simulation and Inference (Chang et al., 2025).
- See the paper web page for more information.
- The full paper is available on arXiv and as Markdown files.
Installation with Anaconda
To install the required dependencies, run:
```bash
conda install python=3.9.19 pytorch=2.2.0 torchvision=0.17.0 torchaudio=2.2.0 -c pytorch
pip install -e .
```
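If the environment resolves correctly, a quick optional check such as the sketch below (plain PyTorch calls, not part of the repository) confirms the install and whether CUDA is visible:

```python
import torch
import torchvision

# Optional sanity check of the environment created above (the expected version
# numbers come from the conda command; adjust if you installed different ones).
print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```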
Demos
We provide three demo notebooks for a quick tour with examples of our method, the Amortized Conditioning Engine (ACE):
1. MNIST_demo.ipynb: Image completion demo with MNIST.
2. BO_demo.ipynb: Bayesian optimization demo.
3. SBI_demo.ipynb: Simulation-based inference demo.
Each notebook demonstrates a specific application of ACE. Simply open the notebooks in Jupyter or directly on GitHub to view the demos.
Citation
If you find this work valuable for your research, please consider citing our paper:
```bibtex
@article{chang2025amortized,
  title={Amortized Probabilistic Conditioning for Optimization, Simulation and Inference},
  author={Chang, Paul E and Loka, Nasrulloh and Huang, Daolang and Remes, Ulpu and Kaski, Samuel and Acerbi, Luigi},
  journal={28th Int. Conf. on Artificial Intelligence \& Statistics (AISTATS 2025)},
  year={2025}
}
```
License
This code is released under the Apache 2.0 License.
Training and Running experiments
Regression
Training GP:
```bash
python train.py -m dataset=gp_sampler_kernel
```
Training MNIST:
```bash
python train.py -m dataset=image_sampler
```
Training CelebA:
```bash
python train.py -m dataset=image_sampler_celeb embedder=embedder_marker_skipcon_celeb
```
The CelebA dataset must be downloaded and stored in data/celeba.
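One possible way to obtain it (a sketch, not part of the repository) is torchvision's built-in downloader, which writes the files under `<root>/celeba/`; note that the Google Drive source it uses can hit quota limits, in which case the files must be fetched manually from the official CelebA page and placed in data/celeba:

```python
from torchvision.datasets import CelebA

# Downloads and extracts CelebA into data/celeba (i.e., <root>/celeba/).
# May fail with a quota error; if so, download manually and place the files there.
CelebA(root="data", split="train", download=True)
```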
Bayesian Optimization
Training BO
Before training the model, we first need to generate offline datasets using the following command:
```bash
python -m src.dataset.optimization.offline_bo_data_generator_prior -m dataset=offline_bo_prior_1d
```
This will generate offline datasets for both the prior and non-prior cases. In the non-prior case, the prior information is omitted. The offline data will be saved in offline_data/bonprior. Once the offline data is generated, we can proceed with training the models.
Training can be performed using the following commands:
```bash
python train.py -m dataset=offline_bo_1d        # for the non-prior case
```
or
```bash
python train.py -m dataset=offline_bo_prior_1d  # for the prior case
```
The resulting model checkpoint (.ckpt) and the Hydra configuration will be saved in the multirun/ folder.
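As a small convenience sketch (not part of the repo), the most recent checkpoint written by a Hydra multirun can be located like this:

```python
from pathlib import Path

# Find the newest .ckpt file produced under multirun/ by the training commands above.
ckpts = sorted(Path("multirun").rglob("*.ckpt"), key=lambda p: p.stat().st_mtime)
print(ckpts[-1] if ckpts else "no checkpoint found yet")
```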
Below is the full script to reproduce the models used in the paper:
```bash
# Data generation for the 1-6 dimensional cases
python -m src.dataset.optimization.offline_bo_data_generator_prior -m dataset=offline_bo_prior_1d,offline_bo_prior_2d,offline_bo_prior_3d,offline_bo_prior_4d,offline_bo_prior_5d,offline_bo_prior_6d

# Training for 1-3 dimensions
python train.py -m dataset=offline_bo_1d,offline_bo_2d,offline_bo_3d encoder=tnpd_dm256_df128_l6_h16 num_steps=500000 batch_size=64

# Training for 1-3 dimensions with prior
python train.py -m dataset=offline_bo_prior_1d,offline_bo_prior_2d,offline_bo_prior_3d encoder=tnpd_dm256_df128_l6_h16 num_steps=500000 batch_size=64

# Training for 4-6 dimensions
python train.py -m dataset=offline_bo_4d,offline_bo_5d,offline_bo_6d encoder=tnpd_dm128_df512_l6_h8 num_steps=350000 batch_size=128
```
Running BO Experiments
After training the models, the next step is to run the BO experiments on the benchmark functions. This can be done as follows:
- Navigate to the experiments/bo folder.
- Run the following script, specifying the benchmark function:
```bash
sh run_bo.sh 1d_ackley 10 results/bo_run/
```
This runs the Ackley function with 10 repetitions and saves the results in the specified folder.
- Once the experiment is complete, plot the results using:
```bash
python bo_plot.py result_path=results/bo_run/ plot_path=results/bo_plot/
```
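If you want to queue several benchmarks in one go, a minimal sketch (not part of the repo) is to call run_bo.sh with its documented arguments, benchmark, repetitions, and result folder, from the experiments/bo directory; the benchmark names here are examples taken from the lists below:

```python
import subprocess

# Batch a few benchmark runs; assumes this is executed from experiments/bo.
for benchmark in ["1d_ackley", "2d_michalewicz"]:
    subprocess.run(["sh", "run_bo.sh", benchmark, "10", "results/bo_run/"], check=True)
```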
Reproducing the Experiments
To fully reproduce the experiments, first ensure that the trained models are saved in the appropriate location before running the scripts.
By default, the trained models are expected in the models_ckpt/ folder; see the individual .yml files in the cfgs/benchmark folder for the full model paths.
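A quick way to check which checkpoint a benchmark config points to is to load it with OmegaConf (which ships with the hydra-core dependency); the file name below is only illustrative, so use any .yml actually present in cfgs/benchmark:

```python
from omegaconf import OmegaConf

# Inspect a benchmark config to see the model path it references.
# "1d_ackley.yml" is a hypothetical name; substitute a real file from cfgs/benchmark.
cfg = OmegaConf.load("cfgs/benchmark/1d_ackley.yml")
print(OmegaConf.to_yaml(cfg))
```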
Trained models for BO are available for download through this link.
The full script to reproduce the experiments is as follows:
```bash
# no-prior experiments

# main paper experiments
sh run_bo.sh 1d_gramacy_lee 10 results/bo_run/
sh run_bo.sh 2d_branin_scaled 10 results/bo_run/
sh run_bo.sh 3d_hartmann 10 results/bo_run/
sh run_bo.sh 4d_rosenbrock 10 results/bo_run/
sh run_bo.sh 5d_rosenbrock 10 results/bo_run/
sh run_bo.sh 6d_hartmann 10 results/bo_run/
sh run_bo.sh 6d_levy 10 results/bo_run/

# extended experiments in appendix
sh run_bo.sh 1d_ackley 10 results/bo_run/
sh run_bo.sh 1d_neg_easom 10 results/bo_run/
sh run_bo.sh 2d_michalewicz 10 results/bo_run/
sh run_bo.sh 2d_ackley 10 results/bo_run/
sh run_bo.sh 3d_levy 10 results/bo_run/
sh run_bo.sh 4d_hartmann 10 results/bo_run/
sh run_bo.sh 5d_griewank 10 results/bo_run/
sh run_bo.sh 6d_griewank 10 results/bo_run/

# plotting results
python bo_plot.py result_path=results/bo_run/ plot_path=results/bo_plot/
```

For with-prior experiments we also need to specify the standard deviation of the Gaussian prior (see the manuscript for more detail).
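As a rough standalone illustration (not part of the run scripts), and assuming, as the commands below suggest, that the second positional argument of run_bo_prior.sh is that standard deviation, 0.5 corresponds to a broad "weak" prior and 0.2 to a narrow "strong" one:

```python
import math

# Density of a Gaussian prior at its mean for the two prior widths used below:
# a smaller sigma concentrates more mass near the believed optimum location.
def gaussian_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

for sigma in (0.5, 0.2):
    print(f"sigma={sigma}: density at the prior mean = {gaussian_pdf(0.0, sigma=sigma):.3f}")
```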
The full script to reproduce the prior experiments is as follows:
```bash
# with-prior experiments

# main paper with-prior experiments
sh run_bo_prior.sh 2d_michalewicz_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 2d_michalewicz_prior 0.2 10 results/bo_run_strong_prior/
sh run_bo_prior.sh 3d_levy_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 3d_levy_prior 0.2 10 results/bo_run_strong_prior/

# extended experiments with prior in appendix
sh run_bo_prior.sh 1d_ackley_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 1d_ackley_prior 0.2 10 results/bo_run_strong_prior/
sh run_bo_prior.sh 1d_gramacy_lee_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 1d_gramacy_lee_prior 0.2 10 results/bo_run_strong_prior/
sh run_bo_prior.sh 1d_neg_easom_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 1d_neg_easom_prior 0.2 10 results/bo_run_strong_prior/
sh run_bo_prior.sh 2d_branin_scaled_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 2d_branin_scaled_prior 0.2 10 results/bo_run_strong_prior/
sh run_bo_prior.sh 2d_ackley_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 2d_ackley_prior 0.2 10 results/bo_run_strong_prior/
sh run_bo_prior.sh 3d_hartmann_prior 0.5 10 results/bo_run_weak_prior/
sh run_bo_prior.sh 3d_hartmann_prior 0.2 10 results/bo_run_strong_prior/

# plotting results
python bo_plot.py result_path=results/bo_run_strong_prior/ plot_path=results/bo_plot/ prefix_filename="strong_prior"
python bo_plot.py result_path=results/bo_run_weak_prior/ plot_path=results/bo_plot/ prefix_filename="weak_prior"
```
Simulation-based Inference
Training SBI tasks
Before training the model, we first need to generate offline datasets using the following command:
```bash
python -m src.dataset.sbi.oup
python -m src.dataset.sbi.sir
python -m src.dataset.sbi.turin
```
This will generate offline datasets for both the prior and non-prior cases. In the non-prior case, the prior information is omitted.
The offline data will be saved in data/. Once the offline data is generated, we can proceed with training the models.
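For intuition only, the oup task is based on an Ornstein-Uhlenbeck-type time series; the sketch below is a generic Euler-Maruyama simulator of such a process, not the repository's implementation, whose actual parameterization and priors are defined in src/dataset/sbi/oup.py:

```python
import numpy as np

def simulate_ou(theta1, theta2, sigma=0.5, x0=10.0, dt=0.2, n_steps=125, seed=0):
    """Euler-Maruyama simulation of dX = theta1 * (theta2 - X) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        drift = theta1 * (theta2 - x[t - 1])  # mean reversion toward theta2
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

print(simulate_ou(theta1=0.5, theta2=1.0)[:5])
```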
Training can be performed using the following commands:
```bash
# non-prior case
python train_sbi.py dataset=oup embedder=embedder_marker_skipcon
python train_sbi.py dataset=sir embedder=embedder_marker_skipcon
python train_sbi.py dataset=turin embedder=embedder_marker_skipcon

# prior-injection case
python train_sbi.py dataset=oup_prior embedder=embedder_marker_prior_sbi
python train_sbi.py dataset=sir_prior embedder=embedder_marker_prior_sbi
python train_sbi.py dataset=turin_prior embedder=embedder_marker_prior_sbi
```
Evaluating SBI tasks
We include the trained models for all SBI tasks under results/SIMULATOR. For example, for the standard OUP task, the trained ACE models and corresponding configs are under results/oup for ACE and results/oup_pi for ACEP.
You can then use the notebooks in experiments/sbi to evaluate the models on the SBI tasks. Each notebook contains the NPE and NRE baselines (their trained models are saved as well), the evaluation code used to create Table 1 in our paper, and all the visualization code. We use LaTeX to create the plots; if you don't have LaTeX on your machine, set "text.usetex": False in the update_plot_style() function in sbi_demo_utils.py.
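For reference, the equivalent global setting looks like this, a minimal sketch using Matplotlib's standard rcParams API:

```python
import matplotlib as mpl

# Disable LaTeX text rendering before running the notebooks; this mirrors
# setting "text.usetex": False inside update_plot_style().
mpl.rcParams.update({"text.usetex": False})
```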
Owner
- Name: acerbilab
- Login: acerbilab
- Kind: organization
- Location: Finland
- Website: www.helsinki.fi/machine-and-human-intelligence
- Repositories: 6
- Profile: https://github.com/acerbilab
Machine and Human Intelligence Research Group - University of Helsinki
GitHub Events
Total
- Issues event: 1
- Watch event: 20
- Delete event: 2
- Member event: 2
- Push event: 81
- Pull request event: 16
- Fork event: 1
- Create event: 12
Last Year
- Issues event: 1
- Watch event: 20
- Delete event: 2
- Member event: 2
- Push event: 81
- Pull request event: 16
- Fork event: 1
- Create event: 12
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 1
- Total pull requests: 17
- Average time to close issues: N/A
- Average time to close pull requests: about 13 hours
- Total issue authors: 1
- Total pull request authors: 3
- Average comments per issue: 0.0
- Average comments per pull request: 0.0
- Merged pull requests: 17
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 17
- Average time to close issues: N/A
- Average time to close pull requests: about 13 hours
- Issue authors: 1
- Pull request authors: 3
- Average comments per issue: 0.0
- Average comments per pull request: 0.0
- Merged pull requests: 17
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- edchangy11 (1)
Pull Request Authors
- satrialoka (9)
- huangdaolang (4)
- edchangy11 (4)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- attrdict ==2.0.1
- hydra-core ==1.3.2
- hydra-submitit-launcher ==1.2.0
- matplotlib ==3.8.0
- sbi ==0.22.0
- scikit-learn ==1.5.0
- seaborn ==0.13.2
- torch ==2.4.1
- torchaudio ==2.4.1
- torchvision ==0.19.1
- wandb ==0.16.3