https://github.com/google-deepmind/active_ops
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 10.7%, to scientific vocabulary)
Repository
Basic Info
- Host: GitHub
- Owner: google-deepmind
- License: apache-2.0
- Language: Python
- Default Branch: main
- Size: 1.48 MB
Statistics
- Stars: 32
- Watchers: 4
- Forks: 3
- Open Issues: 2
- Releases: 0
Metadata Files
README.md
Active Offline Policy Selection
This is supporting example code for the NeurIPS 2021 paper Active Offline Policy Selection by Ksenia Konyushkova, Yutian Chen, Tom Le Paine, Caglar Gulcehre, Cosmin Paduraru, Daniel J Mankowitz, Misha Denil, Nando de Freitas.
To simulate active offline policy selection for a set of policies, one needs
to provide a number of files. We provide the files for 76 policies on the
cartpole_swingup environment:
- Sampled episodic returns for all policies on a number of evaluation episodes (`full-reward-samples-dict.pkl`), or a way of sampling a new evaluation episode upon request for any policy. The file `full-reward-samples-dict.pkl` contains a dictionary that maps a policy, by its string representation, to a numpy.ndarray of shape (5000,) (the number of reward samples).
- Off-policy evaluation (OPE) scores, such as fitted Q-evaluation (FQE), for all policies (`ope_values.pkl`). The file `ope_values.pkl` contains a dictionary that maps policy info to OPE estimates. We provide FQE scores for the policies.
- Actions that policies take on 1000 randomly sampled states from the offline dataset (`actions.pkl`). The file `actions.pkl` contains a dictionary with keys `actions` and `policy_keys`. `actions` is a list of 1000 elements (the number of states used to compute the kernel), each a numpy.ndarray of shape 76x1 (number of policies by the dimensionality of the actions). `policy_keys` is a dictionary mapping the string representation of a policy to the index of that policy in `actions`.
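As a minimal sketch of how data with the structures described above could be written and read back, here is a standalone example. The policy names, array contents, and small policy count are invented for illustration; only the dictionary layouts and array shapes follow the description.

```python
import os
import pickle
import tempfile

import numpy as np

# Mock versions of the documented structures. The real data uses 76 policies;
# we use 3 here so the example stays small.
policies = [f"policy_{i}" for i in range(3)]
n_episodes, n_states, action_dim = 5000, 1000, 1

# full-reward-samples-dict.pkl: policy string -> (5000,) array of returns.
reward_samples = {p: np.zeros(n_episodes) for p in policies}

# actions.pkl: "actions" is a list of per-state arrays of shape
# (num_policies, action_dim); "policy_keys" maps policy string -> row index.
actions = {
    "actions": [np.zeros((len(policies), action_dim)) for _ in range(n_states)],
    "policy_keys": {p: i for i, p in enumerate(policies)},
}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "full-reward-samples-dict.pkl")
    with open(path, "wb") as f:
        pickle.dump(reward_samples, f)
    with open(path, "rb") as f:
        loaded = pickle.load(f)

# Each policy key maps to a (5000,) array of sampled episodic returns.
returns = loaded[policies[0]]

# Look up one policy's action on the first sampled state via policy_keys.
idx = actions["policy_keys"][policies[1]]
per_state_action = actions["actions"][0][idx]

print(returns.shape, per_state_action.shape)
```

The same round-trip pattern applies to `ope_values.pkl`, which is a flat dictionary of policy info to scalar OPE estimates.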
Installation
To set up the virtual environment, run the following commands from within the
active_ops directory:
```
python3 -m venv activeopsenv
source activeopsenv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```
To run the demo with Colab, enable the jupyter_http_over_ws extension:
```
jupyter serverextension enable --py jupyter_http_over_ws
```
Finally, start a server:
```
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0
```
Usage
To run the code, refer to the Active_ops_experiment.ipynb Colab notebook.
Execute the blocks of code one by one to reproduce the final plot. You can
modify various parameters marked by @param to test various baselines in
modified settings. The code loads the example data for the cartpole_swingup
environment provided in the data folder. Using this data, we reproduce the
results of Figure 14 of the paper.
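The paper's selection procedure is Bayesian and uses the policy kernel built from the files above; as a rough, hypothetical illustration of the general active-evaluation idea only (not the paper's algorithm), here is a plain UCB bandit loop that spends an evaluation budget on mock policies and then picks the one with the best estimated return. All policy names, true values, and the noise model are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden true values of three hypothetical policies; each evaluation episode
# returns the true value plus Gaussian noise.
true_values = {"policy_a": 1.0, "policy_b": 1.5, "policy_c": 0.8}

def sample_return(policy):
    return true_values[policy] + rng.normal(scale=1.0)

# Running per-policy statistics updated online.
counts = {p: 0 for p in true_values}
means = {p: 0.0 for p in true_values}

budget = 300
for t in range(1, budget + 1):
    # UCB acquisition: favour policies with a high mean or few samples.
    def ucb(p):
        if counts[p] == 0:
            return float("inf")
        return means[p] + np.sqrt(2.0 * np.log(t) / counts[p])

    chosen = max(counts, key=ucb)
    r = sample_return(chosen)
    counts[chosen] += 1
    means[chosen] += (r - means[chosen]) / counts[chosen]  # incremental mean

best = max(means, key=means.get)
print(best, counts)
```

With a fixed budget, the loop concentrates evaluation episodes on promising policies instead of spreading them uniformly, which is the core motivation for active offline policy selection.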
Citing this work
```
@inproceedings{konyushkovachen2021aops,
  title     = {Active Offline Policy Selection},
  author    = {Ksenia Konyushkova and Yutian Chen and Tom Le Paine and
               Caglar Gulcehre and Cosmin Paduraru and Daniel J. Mankowitz and
               Misha Denil and Nando de Freitas},
  booktitle = {NeurIPS},
  year      = {2021}
}
```
Disclaimer
This is not an official Google product.
The datasets in this work are licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Owner
- Name: Google DeepMind
- Login: google-deepmind
- Kind: organization
- Website: https://www.deepmind.com/
- Repositories: 245
- Profile: https://github.com/google-deepmind
GitHub Events
Total
- Watch event: 2
Last Year
- Watch event: 2
Committers
Last synced: 10 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Ksenia Konyushkova | k****a@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 10 months ago
All Time
- Total issues: 1
- Total pull requests: 2
- Average time to close issues: 4 months
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 1
- Average comments per issue: 2.0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 2
Past Year
- Issues: 0
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 1
Top Authors
Issue Authors
- ruipengZ (1)
Pull Request Authors
- dependabot[bot] (4)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- dm-sonnet ==2.0.0
- jupyter-http-over-ws ==0.0.8
- matplotlib ==3.4.3
- notebook ==6.4.5
- numpy ==1.21.4
- scipy ==1.7.2
- tensorflow ==2.7.0
- tensorflow-probability ==0.14.1