https://github.com/blutjens/maml_rl

Code for RL experiments in "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"

Science Score: 20.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links — links to: arxiv.org
  • Committers with academic emails — 4 of 16 committers (25.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity — low similarity (13.2%) to scientific vocabulary

Keywords from Contributors

reinforcement-learning
Last synced: 6 months ago

Repository

Code for RL experiments in "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"

Basic Info
  • Host: GitHub
  • Owner: blutjens
  • License: other
  • Language: Python
  • Default Branch: master
  • Size: 7.55 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Fork of cbfinn/maml_rl
Created over 6 years ago · Last pushed over 6 years ago
Metadata Files
Readme · Changelog · License

README.md

Model-Agnostic Meta-Learning

This repo contains code accompanying the paper Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (Finn et al., ICML 2017). It includes code for running the few-shot reinforcement learning experiments.
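
The core MAML idea — adapt to each task with an inner gradient step, then update the shared initialization through that adaptation — can be sketched on a toy 1-D squared-error loss. This is an illustrative sketch only, not the repo's TensorFlow implementation; the function name and toy loss are invented for the example:

```python
def maml_step(theta, tasks, inner_lr=0.01, outer_lr=0.001):
    """One MAML meta-update on a scalar parameter.

    Each task is a target t with loss L_t(theta) = (theta - t)^2.
    Inner loop: one gradient step per task.
    Outer loop: gradient of the post-adaptation loss w.r.t. the
    initial theta (differentiating through the inner step).
    """
    meta_grad = 0.0
    for t in tasks:
        grad = 2.0 * (theta - t)               # dL_t/dtheta
        theta_prime = theta - inner_lr * grad  # task-adapted parameter
        # d/dtheta (theta' - t)^2, where dtheta'/dtheta = 1 - 2 * inner_lr
        meta_grad += 2.0 * (theta_prime - t) * (1.0 - 2.0 * inner_lr)
    return theta - outer_lr * meta_grad / len(tasks)
```

With two symmetric tasks (targets +1 and -1), the meta-gradients cancel and theta = 0 is a fixed point, which matches the intuition that MAML seeks an initialization equidistant from the tasks it must adapt to.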

For the experiments in the supervised domain, see the cbfinn/maml codebase.

Dependencies

This code is based on the rllab code repository and can be installed in the same way (see below). Note that this codebase is not necessarily backwards compatible with rllab.

The MAML code uses the TensorFlow rllab version, so be sure to install TensorFlow v1.0+.
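
A quick guard against an older TensorFlow install might look like the following. This is a hypothetical helper, not part of the repo; in practice you would pass it `tf.__version__`:

```python
def meets_minimum_version(version, minimum="1.0"):
    """Compare dotted version strings numerically, e.g. "1.4.1" >= "1.0".

    Non-numeric components (e.g. "rc2" suffixes) are skipped, so this is
    a rough check rather than a full PEP 440 comparison.
    """
    def parse(v):
        return tuple(int(p) for p in v.split(".") if p.isdigit())
    return parse(version) >= parse(minimum)
```

For example, `meets_minimum_version("0.12.1")` is False, so the check would catch the pre-1.0 TensorFlow releases that the MAML code does not support.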

Usage

Scripts for running the experiments found in the paper are located in maml_examples/.

The pointmass environment is located in maml_examples/ whereas the MuJoCo environments are located in rllab/envs/mujoco/.
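
To give a feel for the task structure, here is a minimal gym-style point-mass environment where each task is a different goal position. This is an illustrative sketch with invented names; the actual environment in maml_examples/ follows the rllab Env interface and differs in detail:

```python
class PointEnv:
    """Minimal 2-D point-mass task: move the agent toward a per-task goal."""

    def __init__(self, goal=(1.0, 1.0), max_step=0.1):
        self.goal = goal          # per-task goal; resampled across meta-tasks
        self.max_step = max_step  # per-step movement budget
        self.pos = [0.0, 0.0]

    def reset(self):
        self.pos = [0.0, 0.0]
        return tuple(self.pos)

    def step(self, action):
        # clip each action component to the per-step movement budget
        for i in (0, 1):
            a = max(-self.max_step, min(self.max_step, action[i]))
            self.pos[i] += a
        dist = ((self.pos[0] - self.goal[0]) ** 2 +
                (self.pos[1] - self.goal[1]) ** 2) ** 0.5
        done = dist < 0.05
        # reward is negative distance to the goal
        return tuple(self.pos), -dist, done, {}
```

Because only the goal changes between tasks, a meta-learned initialization can adapt to a new goal from a handful of rollouts — the property the few-shot RL experiments measure.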

Speed of Code

One current limitation of the code is that it is particularly slow. We welcome contributions to speed it up. We expect the biggest speed improvements to come from better parallelization of sampling and meta-learning graph computation.
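
One way to parallelize sampling across the meta-batch of tasks is sketched below, using a thread pool and a stand-in rollout function. This is not the repo's sampler, just an illustration of the structure; names are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def rollout(task_id):
    """Stand-in for collecting one batch of trajectories for a task."""
    return {"task": task_id, "paths": []}

def sample_meta_batch(task_ids, workers=4):
    # Rollouts for different tasks are independent, so they can run
    # concurrently; map() preserves the task order in the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(rollout, task_ids))
```

Note that a thread pool only helps when the per-task work releases the GIL (e.g. I/O or C-level environment steps such as MuJoCo); CPU-bound Python sampling would need process-based parallelism instead.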

Contact

To ask questions or report problems, please open an issue on the GitHub issue tracker.

rllab

Docs · Circle CI · License · Chat: https://gitter.im/rllab/rllab

rllab is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks plus implementations of a number of reinforcement learning algorithms.

rllab is fully compatible with OpenAI Gym. See the documentation for instructions and examples.

rllab only officially supports Python 3.5+. For an older snapshot of rllab sitting on Python 2, please use the py2 branch.

rllab comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the documentation for details.

The main modules use Theano as the underlying framework, and we have support for TensorFlow under sandbox/rocky/tf.

Documentation

Documentation is available online: https://rllab.readthedocs.org/en/latest/.

Citing rllab

If you use rllab for academic research, you are highly encouraged to cite the following paper:

  • Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel. "Benchmarking Deep Reinforcement Learning for Continuous Control". Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.

Credits

rllab was originally developed by Rocky Duan (UC Berkeley / OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley / OpenAI), John Schulman (UC Berkeley / OpenAI), and Pieter Abbeel (UC Berkeley / OpenAI). The library continues to be jointly developed by people at OpenAI and UC Berkeley.

Slides

Slides presented at ICML 2016: https://www.dropbox.com/s/rqtpp1jv2jtzxeg/ICML2016benchmarkingslides.pdf?dl=0

Owner

  • Name: Björn Lütjens (he/him)
  • Login: blutjens
  • Kind: user
  • Company: MIT

Postdoctoral Associate in tackling climate change with AI @ MIT. Project overview at https://blutjens.github.io/

Committers

Last synced: about 2 years ago

All Time
  • Total Commits: 169
  • Total Committers: 16
  • Avg Commits per committer: 10.563
  • Development Distribution Score (DDS): 0.633
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Rocky Duan d****k@g****m 62
Chelsea Finn c****n@e****u 48
Chelsea Finn c****m@g****m 30
Rocky Duan d****k 10
florensacc f****c@b****u 6
Alex Beloi a****i 2
Alex Beloi a****i@s****m 2
Paul Hendricks p****3@o****u 1
David Held d****d@e****u 1
OpenAI server s****s@o****m 1
Zhongwen Xu z****u@g****m 1
singulaire t****u@g****m 1
Ben c****n 1
lchenat l****t@c****k 1
chang cheng m****a@g****m 1
Xiaohu Zhu x****u@g****m 1
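
The Development Distribution Score reported above is consistent with one minus the top committer's share of commits; the commit counts in the table reproduce the 0.633 figure. A sketch of the presumed formula (the metric's exact definition is an assumption here):

```python
def dds(commit_counts):
    """Development Distribution Score: 1 - (top committer's commit share).

    Higher values mean development is spread across more committers;
    0.0 means a single committer authored every commit.
    """
    total = sum(commit_counts)
    return 1.0 - max(commit_counts) / total

# Commit counts from the Top Committers table above (16 committers)
counts = [62, 48, 30, 10, 6, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

Here `dds(counts)` rounds to 0.633, matching the All Time statistics, with Rocky Duan's 62 of 169 commits as the largest single share.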

Issues and Pull Requests

Last synced: about 2 years ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

docker/Dockerfile docker
  • ubuntu 16.04 build
environment.yml pypi
  • Cython *
  • Lasagne 484866cf8b38d878e92d521be445968531646bb8
  • Pillow *
  • PyOpenGL *
  • Theano adfe319ce6b781083d8dc3200fb4481b00853791
  • atari-py *
  • awscli *
  • boto3 *
  • cached_property *
  • chainer ==1.15.0
  • ipdb *
  • jupyter *
  • line_profiler *
  • msgpack-python *
  • mujoco_py *
  • nose2 *
  • plotly 2594076e29584ede2d09f2aa40a8a195b3f3fc66
  • progressbar2 *
  • pyglet *
  • pyprind *
  • pyzmq *
setup.py pypi
rllab/mujoco_py/Gemfile rubygems
  • activesupport >= 0
  • pry >= 0
rllab/mujoco_py/Gemfile.lock rubygems
  • activesupport 4.1.8
  • coderay 1.1.0
  • i18n 0.7.0
  • json 1.8.1
  • method_source 0.8.2
  • minitest 5.5.1
  • pry 0.10.1
  • slop 3.6.0
  • thread_safe 0.3.4
  • tzinfo 1.2.2