https://github.com/amazon-science/meta-q-learning

Code for the paper "Meta-Q-Learning" (ICLR 2020)

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.1%) to scientific vocabulary

Keywords

deep-learning meta-learning multi-task-learning reinforcement-learning
Last synced: 5 months ago

Repository

Code for the paper "Meta-Q-Learning" (ICLR 2020)

Basic Info
Statistics
  • Stars: 103
  • Watchers: 4
  • Forks: 18
  • Open Issues: 1
  • Releases: 0
Topics
deep-learning meta-learning multi-task-learning reinforcement-learning
Created over 5 years ago · Last pushed over 3 years ago

https://github.com/amazon-science/meta-q-learning/blob/master/

Meta-Q-Learning
=============================================
This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL.
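
The third idea, reusing meta-training data via propensity estimation, can be illustrated with a minimal sketch (not taken from this repository): a logistic classifier is trained to distinguish new-task transitions from meta-training transitions, and its odds ratio serves as an importance weight on buffer samples; the feature arrays and helper name below are illustrative placeholders.

```
# Hedged sketch of propensity-based reuse of meta-training data.
# `new_task_feats` and `buffer_feats` are placeholder arrays of transition features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_weights(new_task_feats, buffer_feats):
    """Fit a classifier to tell new-task data (label 1) from replay-buffer
    data (label 0); the odds ratio p / (1 - p) acts as an importance weight
    on buffer samples during off-policy adaptation."""
    X = np.vstack([new_task_feats, buffer_feats])
    y = np.concatenate([np.ones(len(new_task_feats)), np.zeros(len(buffer_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    p = clf.predict_proba(buffer_feats)[:, 1]   # P(sample looks like the new task)
    w = p / (1.0 - p + 1e-8)                    # propensity-based importance weight

    # Normalized effective sample size: how much buffer data is effectively usable.
    ess = (w.sum() ** 2) / (len(w) * (w ** 2).sum() + 1e-8)
    return w, ess
```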

This repository provides the implementation of [Meta-Q-Learning](https://arxiv.org/abs/1910.00125). If you use this code, please cite the paper using the BibTeX entry below.

```
@misc{fakoor2019metaqlearning,
    title={Meta-Q-Learning},
    author={Rasool Fakoor and Pratik Chaudhari and Stefano Soatto and Alexander J. Smola},
    year={2019},
    eprint={1910.00125},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
## Getting Started
```
python run_script.py --env cheetah-dir --gpu_id 0 --seed 0
```

`env` can be one of humanoid-dir, ant-dir, cheetah-vel, cheetah-dir, ant-goal, and walker-rand-params. The code runs on both GPU and CPU machines. For the experiments in the paper, we used [p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instances. For the complete list of hyperparameters, please refer to the paper's appendix.
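
To run several seeds of one environment, a small launcher along the following lines can be used; this is a hedged helper, not part of the repository, and only the flags shown in the command above are assumed.

```
# Hypothetical sweep helper: call run_script.py once per seed.
import subprocess

env_name = "cheetah-dir"  # or humanoid-dir, ant-dir, cheetah-vel, ant-goal, walker-rand-params
for seed in range(3):
    subprocess.run(
        ["python", "run_script.py",
         "--env", env_name,
         "--gpu_id", "0",
         "--seed", str(seed)],
        check=True,
    )
```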


In order to run this code, you will need to install PyTorch and MuJoCo. If you run into any installation problems, please follow the setup steps from [PEARL](https://github.com/katerakelly/oyster/).

## New Environments
In order to run the code with a new environment, you will first need to define an entry in ./configs/pearl_envs.json; see ./configs/abl_envs.json as a reference. In addition, you will need to add the environment's code to rlkit/env/.
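
As a rough starting point, a new environment under rlkit/env/ can follow the multi-task interface used by the existing PEARL-style environments; the skeleton below is a hedged sketch with a toy 2-D goal task, and the method names (get_all_task_idx, reset_task) should be checked against the environments already in rlkit/env/.

```
# Hedged skeleton of a multi-task environment (illustrative, not from this repo).
import numpy as np

class MyGoalEnv:
    """Toy 2-D point environment; each task is a different goal position."""

    def __init__(self, n_tasks=10, seed=0):
        rng = np.random.RandomState(seed)
        self.goals = rng.uniform(-1.0, 1.0, size=(n_tasks, 2))
        self._goal = self.goals[0]
        self._state = np.zeros(2)

    def get_all_task_idx(self):
        # Indices the meta-training loop can sample tasks from.
        return list(range(len(self.goals)))

    def reset_task(self, idx):
        # Switch the hidden task, then reset the episode.
        self._goal = self.goals[idx]
        return self.reset()

    def reset(self):
        self._state = np.zeros(2)
        return self._state.copy()

    def step(self, action):
        self._state = self._state + np.clip(action, -0.1, 0.1)
        reward = -np.linalg.norm(self._state - self._goal)
        return self._state.copy(), reward, False, {}
```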

## Acknowledgement
- **rand_param_envs** and **rlkit** are based on / copied from the following repositories:
[rand_param_envs](https://github.com/dennisl88/rand_param_envs/tree/4d1529d61ca0d65ed4bd9207b108d4a4662a4da0) and
[PEARL](https://github.com/katerakelly/oyster/). Thanks to their authors for making them available.
We include them here to make it easier to run and work with this repository.


## License

This code is licensed under the CC-BY-NC-4.0 License.

## Contact

Please open an issue on the [issues tracker](https://github.com/amazon-research/meta-q-learning/issues) to report problems or ask questions, or send an email to me, [Rasool Fakoor](https://github.com/rasoolfa).

Owner

  • Name: Amazon Science
  • Login: amazon-science
  • Kind: organization

GitHub Events

Total
  • Issues event: 1
  • Watch event: 2
  • Fork event: 2
Last Year
  • Issues event: 1
  • Watch event: 2
  • Fork event: 2

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 10
  • Total Committers: 2
  • Avg Commits per committer: 5.0
  • Development Distribution Score (DDS): 0.1
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • rasoolfakoor (7****r): 9 commits
  • Amazon GitHub Automation (5****o): 1 commit

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 7
  • Total pull requests: 0
  • Average time to close issues: about 1 month
  • Average time to close pull requests: N/A
  • Total issue authors: 5
  • Total pull request authors: 0
  • Average comments per issue: 1.86
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 3
  • Pull requests: 0
  • Average time to close issues: 7 days
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 2.33
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • nil123532 (3)
  • SARAtsu (1)
  • xjz89982 (1)
  • Niyx52094 (1)
  • XyDrKRulof (1)
Pull Request Authors
Top Labels
Issue Labels
  • question (1)
Pull Request Labels