macad-gym

Multi-Agent Connected Autonomous Driving (MACAD) Gym environments for Deep RL. Code for the paper presented in the Machine Learning for Autonomous Driving Workshop at NeurIPS 2019:

https://github.com/praveen-palanisamy/macad-gym

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, scholar.google
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.8%) to scientific vocabulary

Keywords

autonomous-driving carla carla-driving-simulator carla-gym carla-reinforcement-learning carla-rl carla-simulator deep-reinforcement-learning gym-environments macad-gym multi-agent-autonomous-driving multi-agent-reinforcement-learning
Last synced: 7 months ago

Repository

Multi-Agent Connected Autonomous Driving (MACAD) Gym environments for Deep RL. Code for the paper presented in the Machine Learning for Autonomous Driving Workshop at NeurIPS 2019:

Basic Info
Statistics
  • Stars: 365
  • Watchers: 9
  • Forks: 79
  • Open Issues: 16
  • Releases: 4
Topics
autonomous-driving carla carla-driving-simulator carla-gym carla-reinforcement-learning carla-rl carla-simulator deep-reinforcement-learning gym-environments macad-gym multi-agent-autonomous-driving multi-agent-reinforcement-learning
Created almost 7 years ago · Last pushed almost 3 years ago
Metadata Files
Readme Contributing License Citation

README.md

MACAD-Gym is a training platform for Multi-Agent Connected Autonomous Driving (MACAD) built on top of the CARLA Autonomous Driving simulator.

MACAD-Gym provides OpenAI Gym-compatible learning environments for various driving scenarios for training Deep RL algorithms in homogeneous/heterogeneous, communicating/non-communicating, and other multi-agent settings. New environments and scenarios can be easily added using a simple, JSON-like configuration.
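As an illustration of the configuration-driven approach, a scenario definition might look like the nested dict below. This is a hypothetical sketch: the key names and values are illustrative, not the exact schema used by MACAD-Gym.

```python
# Hypothetical scenario configuration sketch; key names are illustrative,
# not the exact schema used by MACAD-Gym.
scenario_config = {
    "env": {
        "server_map": "/Game/Carla/Maps/Town03",  # assumed map path format
        "discrete_actions": True,
        "max_steps": 500,
    },
    "actors": {
        # One entry per agent; start/end are illustrative spawn/goal coords.
        "car1": {"type": "vehicle_4W", "start": [170, 80, 0.4], "end": [144, 59, 0]},
        "car2": {"type": "vehicle_4W", "start": [188, 59, 0.4], "end": [167, 76, 0]},
    },
}

# The actor IDs double as the keys of the per-agent observation/action dicts.
print(sorted(scenario_config["actors"]))
```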


Quick Start

Install MACAD-Gym using pip install macad-gym. If you have the CARLA_SERVER environment variable set up, you can get going using the following 3 lines of code. If not, follow the Getting started steps.

Training RL Agents

```python
import gym
import macad_gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

# Your agent code here
```

Any RL library that supports the OpenAI-Gym API can be used to train agents in MACAD-Gym. The MACAD-Agents repository provides sample agents as a starter.

Visualizing the Environment

To test-drive the environments, you can run the environment script directly. For example, to test-drive the HomoNcomIndePOIntrxMASS3CTWN3-v0 environment, run:

```bash
python -m macad_gym.envs.homo.ncom.inde.po.intrx.ma.stop_sign_3c_town03
```

Usage guide

Getting Started

Assumes an Ubuntu (18.04/20.04/22.04 or later) system. If you are on Windows 10/11, use the CARLA Windows package and set the CARLA_SERVER environment variable to the CARLA installation directory.

  1. Install the system requirements:

    • Miniconda/Anaconda 3.x
      • wget -P ~ https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh; bash ~/Miniconda3-latest-Linux-x86_64.sh
    • cmake (sudo apt install cmake)
    • zlib (sudo apt install zlib1g-dev)
    • [optional] ffmpeg (sudo apt install ffmpeg)
  2. Set up CARLA (0.9.x):

    2.1 mkdir ~/software && cd ~/software

    2.2 Example: download the 0.9.13 release from the CARLA releases page and extract it into ~/software/CARLA_0.9.13

    2.3 echo "export CARLA_SERVER=${HOME}/software/CARLA_0.9.13/CarlaUE4.sh" >> ~/.bashrc

  3. Install MACAD-Gym:

    • Option 1 (for users): pip install macad-gym
    • Option 2 (for developers):
      • Fork/Clone the repository to your workspace: git clone https://github.com/praveen-palanisamy/macad-gym.git && cd macad-gym
      • Create a new conda env named "macad-gym" and install the required packages: conda env create -f conda_env.yml
      • Activate the macad-gym conda python env: source activate macad-gym
      • Install the macad-gym package: pip install -e .
      • Install the CARLA PythonAPI client: pip install carla==0.9.13. NOTE: change the carla client PyPI package version number to match your CARLA server version.
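The client/server version match noted above can be checked programmatically. A minimal sketch, assuming the server version is known from the directory name you extracted to (CARLA_0.9.13 here):

```python
# Minimal sketch: verify that the installed carla client wheel matches the
# CARLA server release. The server version is assumed from the extracted
# directory name (CARLA_0.9.13); adjust to your installation.
from importlib.metadata import version, PackageNotFoundError

SERVER_VERSION = "0.9.13"  # assumed from the CARLA_0.9.13 directory

def client_matches_server(server_version: str) -> bool:
    """Return True only if a carla client is installed and its version
    string starts with the server's major.minor.patch version."""
    try:
        client_version = version("carla")
    except PackageNotFoundError:
        return False  # no carla client installed at all
    return client_version.startswith(server_version)
```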

Learning Platform and Agent Interface

The MACAD-Gym platform provides learning environments for training agents in both single-agent and multi-agent settings, across various autonomous driving tasks and scenarios, in homogeneous/heterogeneous configurations. Environment IDs follow a naming convention so that they remain consistent and support versioned benchmarking of agent algorithms. The convention is illustrated below with HeteCommCoopPOUrbanMgoalMAUSID as an example: [Figure: MACAD-Gym Naming Conventions]
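The leading tokens of these IDs can be decoded mechanically. A hedged sketch, with token meanings taken from the sample environment descriptions elsewhere in this README; the full naming grammar may include tokens not handled here:

```python
# Hedged sketch: greedily decode the leading tokens of a MACAD-Gym
# environment ID. Token meanings are taken from the sample descriptions
# in the README; this is not the library's own parser.
LEADING_TOKENS = {
    "Homo": "Homogeneous agents",
    "Hete": "Heterogeneous agents",
    "Ncom": "Non-communicating",
    "Comm": "Communicating",
    "Inde": "Independent",
    "Coop": "Cooperative",
    "PO": "Partially Observable",
}

def describe_prefix(env_id: str) -> list:
    """Greedily match known leading tokens of an environment ID."""
    parts = []
    rest = env_id
    matched = True
    while matched:
        matched = False
        for token, meaning in LEADING_TOKENS.items():
            if rest.startswith(token):
                parts.append(meaning)
                rest = rest[len(token):]
                matched = True
                break
    return parts

print(describe_prefix("HomoNcomIndePOIntrxMASS3CTWN3-v0"))
# → ['Homogeneous agents', 'Non-communicating', 'Independent', 'Partially Observable']
```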

The number of training environments in MACAD-Gym is expected to grow over time (PRs are very welcome!).

Environments

The environment interface is simple and follows the widely adopted OpenAI-Gym interface. You can create an instance of a learning environment using the following 3 lines of code:

```python
import gym
import macad_gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
```

Like any OpenAI Gym environment, you can obtain the observation space and action spaces as shown below:

```python
>>> print(env.observation_space)
Dict(car1:Box(168, 168, 3), car2:Box(168, 168, 3), car3:Box(168, 168, 3))
>>> print(env.action_space)
Dict(car1:Discrete(9), car2:Discrete(9), car3:Discrete(9))
```
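Since both spaces are per-actor Dict spaces keyed by actor ID, a valid action is itself a dict with one entry per actor. A standalone sketch in plain Python (so it runs without a CARLA server) mirroring the shapes printed above; in the real environment these are gym.spaces.Dict/Box/Discrete objects:

```python
# Standalone sketch of the per-actor action Dict shown above, using plain
# Python so it runs without a CARLA server. In the real environment the
# spaces are gym.spaces.Dict / Box / Discrete objects.
import random

ACTOR_IDS = ["car1", "car2", "car3"]
N_DISCRETE_ACTIONS = 9  # Discrete(9) per actor, as printed above

def sample_action_dict():
    """Mimic env.action_space.sample(): one discrete action per actor."""
    return {aid: random.randrange(N_DISCRETE_ACTIONS) for aid in ACTOR_IDS}

actions = sample_action_dict()
assert set(actions) == set(ACTOR_IDS)
assert all(0 <= a < N_DISCRETE_ACTIONS for a in actions.values())
```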

To get a list of available environments, you can use the list_available_envs() function as shown in the code snippet below:

```python
import gym
import macad_gym

macad_gym.list_available_envs()
```

This will print the available environments. Sample output is provided below for reference:

```bash
Environment-ID: Short description
{'HeteNcomIndePOIntrxMATLS1B2C1PTWN3-v0': 'Heterogeneous, Non-communicating, '
                                          'Independent, Partially-Observable '
                                          'Intersection Multi-Agent scenario '
                                          'with Traffic-Light Signal, 1-Bike, '
                                          '2-Car, 1-Pedestrian in Town3, '
                                          'version 0',
 'HomoNcomIndePOIntrxMASS3CTWN3-v0': 'Homogeneous, Non-communicating, '
                                     'Independent, Partially-Observable '
                                     'Intersection Multi-Agent scenario with '
                                     'Stop-Sign, 3 Cars in Town3, version 0'}
```

Agent interface

The Agent-Environment interface is compatible with the OpenAI Gym interface, thus allowing for easy experimentation with existing RL agent algorithm implementations and libraries. You can use any existing Deep RL library that supports the OpenAI Gym API to train your agents.

The basic agent-environment interaction loop is as follows:

```python
import gym
import macad_gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
configs = env.configs
env_config = configs["env"]
actor_configs = configs["actors"]


class SimpleAgent(object):
    def __init__(self, actor_configs):
        """A simple, deterministic agent for an example

        Args:
            actor_configs: Actor config dict
        """
        self.actor_configs = actor_configs
        self.action_dict = {}

    def get_action(self, obs):
        """Returns `action_dict` containing actions for each agent in the env
        """
        for actor_id in self.actor_configs.keys():
            # ... Process obs of each agent and generate action ...
            if env_config["discrete_actions"]:
                self.action_dict[actor_id] = 3  # Drive forward
            else:
                self.action_dict[actor_id] = [1, 0]  # Full-throttle
        return self.action_dict


agent = SimpleAgent(actor_configs)  # Plug-in your agent or use MACAD-Agents
for ep in range(2):
    obs = env.reset()
    done = {"__all__": False}
    step = 0
    while not done["__all__"]:
        obs, reward, done, info = env.step(agent.get_action(obs))
        print(f"Step#:{step}  Rew:{reward}  Done:{done}")
        step += 1
env.close()
```

Citing:

If you find this work useful in your research, please cite:

```bibtex
@misc{palanisamy2019multiagent,
    title={Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning},
    author={Praveen Palanisamy},
    year={2019},
    eprint={1911.04175},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

Citation in other formats:

MLA
Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
APA
Palanisamy, P. (2019). Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
Chicago
Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
Harvard
Palanisamy, P., 2019. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
Vancouver
Palanisamy P. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175. 2019 Nov 11.

Notes:

  • MACAD-Gym supports multi-GPU setups; it chooses the least-loaded GPU to launch the simulation needed for the RL training environment.

  • MACAD-Gym is for CARLA 0.9.x and above. If you are looking for an OpenAI Gym-compatible agent learning environment for CARLA 0.8.x (stable release), use this carla_gym environment.
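The multi-GPU note above boils down to least-loaded selection. An illustrative sketch in pure Python; the sample load values are hypothetical, and the actual implementation queries real GPUs (e.g. via GPUtil, which is a declared dependency of the package):

```python
# Illustrative least-loaded GPU selection, as described in the multi-GPU
# note above. The load fractions below are hypothetical sample values; a
# real implementation would query live GPU loads (e.g. via GPUtil).
def pick_least_loaded(gpu_loads):
    """Return the index of the GPU with the lowest load fraction."""
    if not gpu_loads:
        raise ValueError("no GPUs found")
    return min(range(len(gpu_loads)), key=lambda i: gpu_loads[i])

print(pick_least_loaded([0.83, 0.12, 0.47]))  # → 1
```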

Owner

  • Name: Praveen Palanisamy
  • Login: praveen-palanisamy
  • Kind: user
  • Company: @microsoft

Citation (CITATION.cff)

cff-version: 1.2.0
message: If you use this software, please cite it as below.
title: "MACAD-Gym, Multi-Agent Reinforcement Learning for Connected Autonomous Driving"
authors:
  - family-names: Palanisamy
    given-names: Praveen
    orcid: "https://orcid.org/0000-0001-9069-3071"
version: 0.1.4
doi: "10.5281/zenodo.4053994"
date-released: 2020-09-27
url: "https://github.com/praveen-palanisamy/macad-gym"
preferred-citation:
  type: conference-paper
  title: "MACAD-Gym: Multi-Agent Reinforcement Learning for Connected Autonomous Driving"
  authors:
    - family-names: Palanisamy
      given-names: Praveen
      orcid: "https://orcid.org/0000-0001-9069-3071"
  doi: "10.1109/IJCNN48605.2020.9207663"
  collection-title: "2020 International Joint Conference on Neural Networks (IJCNN)"
  collection-type: proceedings
  year: 2020
  publisher:
    name: IEEE
  url: "https://ieeexplore.ieee.org/document/9207663"

GitHub Events

Total
  • Issues event: 1
  • Watch event: 31
  • Fork event: 2
Last Year
  • Issues event: 1
  • Watch event: 31
  • Fork event: 2

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 374
  • Total Committers: 3
  • Avg Commits per committer: 124.667
  • Development Distribution Score (DDS): 0.008
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Praveen Palanisamy p****y@o****m 371
David Nie 5****g 2
Giovanni Minelli g****3@g****m 1

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 46
  • Total pull requests: 28
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 20 days
  • Total issue authors: 30
  • Total pull request authors: 6
  • Average comments per issue: 3.98
  • Average comments per pull request: 0.86
  • Merged pull requests: 20
  • Bot issues: 0
  • Bot pull requests: 1
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Morphlng (4)
  • Yiquan-lol (3)
  • eerkaijun (3)
  • AizazSharif (3)
  • Panshark (2)
  • SExpert12 (2)
  • lcipolina (2)
  • zengsh-cqupt (2)
  • Kinvy66 (2)
  • Neel1302 (2)
  • SHITIANYU-hue (2)
  • qiangyuchuan (1)
  • tbienhoff (1)
  • hjh0119 (1)
  • CHWLW (1)
Pull Request Authors
  • praveen-palanisamy (17)
  • Morphlng (3)
  • johnMinelli (3)
  • SHITIANYU-hue (2)
  • lcipolina (2)
  • lgtm-com[bot] (1)
Top Labels
Issue Labels
question (13) more-information-needed (5) help wanted (1) enhancement (1) good first issue (1)
Pull Request Labels
enhancement (2)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 21 last-month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 1
    (may contain duplicates)
  • Total versions: 7
  • Total maintainers: 1
proxy.golang.org: github.com/praveen-palanisamy/macad-gym
  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.4%
Average: 5.6%
Dependent repos count: 5.8%
Last synced: 7 months ago
pypi.org: macad-gym

Learning environments for Multi-Agent Connected Autonomous Driving (MACAD) with OpenAI Gym compatible interfaces

  • Versions: 4
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 21 Last month
Rankings
Stargazers count: 3.8%
Forks count: 5.5%
Dependent packages count: 10.1%
Average: 13.9%
Dependent repos count: 21.5%
Downloads: 28.6%
Maintainers (1)
Last synced: 7 months ago

Dependencies

setup.py pypi
  • GPUtil *
  • carla >=0.9.3
  • gym *
  • networkx *
  • opencv-python *
  • pygame *
.github/workflows/publish-to-pypi-test-net.yml actions
  • actions/checkout master composite
  • actions/setup-python v2 composite
  • pypa/gh-action-pypi-publish master composite
.github/workflows/python-publish.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
src/macad_gym/carla/PythonAPI/setup.py pypi