https://github.com/cair/deep-rts

A Real-Time-Strategy game for Deep Learning research

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.1%) to scientific vocabulary

Keywords

ai artificial-intelligence cpp deep-learning deep-reinforcement-learning game machine-learning neural-networks per-arne python reinforcement-learning tree-search
Last synced: 5 months ago

Repository

A Real-Time-Strategy game for Deep Learning research

Basic Info
  • Host: GitHub
  • Owner: cair
  • License: MIT
  • Language: C++
  • Default Branch: main
  • Homepage:
  • Size: 16.2 MB
Statistics
  • Stars: 234
  • Watchers: 8
  • Forks: 40
  • Open Issues: 14
  • Releases: 3
Topics
ai artificial-intelligence cpp deep-learning deep-reinforcement-learning game machine-learning neural-networks per-arne python reinforcement-learning tree-search
Created almost 9 years ago · Last pushed almost 3 years ago
Metadata Files
Readme

README.md

DeepRTS is a high-performance Real-Time Strategy game for Reinforcement Learning research. It is written in C++ for performance, but provides a Python interface for easier integration with machine-learning toolkits. Deep RTS can process the game at over 6,000,000 steps per second, and at 2,000,000 steps per second when rendering graphics. Compared to other solutions, such as StarCraft, this is over 15,000% faster simulation time, measured on an Intel i7-8700K with an Nvidia RTX 2080 TI.

The aim of Deep RTS is to bring a more affordable and sustainable solution to RTS AI research by reducing computation time.

It is recommended to use the master branch for the newest (and usually best) version of the environment. I am grateful for any input on improving the environment.

Please use the following citation when using this in your work!

@INPROCEEDINGS{8490409,
  author={P. {Andersen} and M. {Goodwin} and O. {Granmo}},
  booktitle={2018 IEEE Conference on Computational Intelligence and Games (CIG)},
  title={Deep RTS: A Game Environment for Deep Reinforcement Learning in Real-Time Strategy Games},
  year={2018},
  volume={},
  number={},
  pages={1-8},
  keywords={computer games;convolution;feedforward neural nets;learning (artificial intelligence);multi-agent systems;high-performance RTS game;artificial intelligence research;deep reinforcement learning;real-time strategy games;computer games;RTS AIs;Deep RTS game environment;StarCraft II;Deep Q-Network agent;cutting-edge artificial intelligence algorithms;Games;Learning (artificial intelligence);Machine learning;Planning;Ground penetrating radar;Geophysical measurement techniques;real-time strategy game;deep reinforcement learning;deep q-learning},
  doi={10.1109/CIG.2018.8490409},
  ISSN={2325-4270},
  month={Aug},
}

Dependencies

  • Python >= 3.9.1

Installation

Method 1 (From Git Repo)

sudo pip3 install git+https://github.com/cair/DeepRTS.git

Method 2 (Clone & Build)

git clone https://github.com/cair/deep-rts.git
cd deep-rts
git submodule sync
git submodule update --init
sudo pip3 install .

Available maps

10x10-2-FFA 15x15-2-FFA 21x21-2-FFA 31x31-2-FFA 31x31-4-FFA 31x31-6-FFA

Scenarios

Deep RTS features scenarios, which are pre-built mini-games. These mini-games are well suited for training agents on specific tasks, or for testing algorithms in different problem setups. The benefit of using scenarios is that you can trivially design reward functions from criteria, each of which outputs a reward/punishment signal depending on completion of the task. Example tasks include:
  • collect 1000 gold
  • do 100 damage
  • take 1000 damage
  • defeat 5 enemies
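Composing such criteria into a reward signal can be sketched in plain Python. The `Criterion` and `scenario_reward` names below are illustrative stand-ins, not the DeepRTS API; each criterion fires a one-time reward when its tracked statistic crosses a threshold:

```python
class Criterion:
    """Hypothetical task criterion: emits a reward once its threshold is met."""

    def __init__(self, key, threshold, reward=1.0):
        self.key = key            # game statistic to track, e.g. "gold_collected"
        self.threshold = threshold
        self.reward = reward
        self.done = False

    def evaluate(self, stats):
        # Fire exactly once, when the statistic first crosses the threshold.
        if not self.done and stats.get(self.key, 0) >= self.threshold:
            self.done = True
            return self.reward
        return 0.0


def scenario_reward(criteria, stats):
    """Sum the signals from all criteria for the current game statistics."""
    return sum(c.evaluate(stats) for c in criteria)


criteria = [
    Criterion("gold_collected", 1000),  # collect 1000 gold
    Criterion("damage_done", 100),      # do 100 damage
]
```

For example, `scenario_reward(criteria, {"gold_collected": 1200, "damage_done": 50})` would return 1.0, since only the gold criterion has been met.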

Deep RTS currently implements the following scenarios:
  • GoldCollectFifteen
  • GeneralAIOneVersusOne

Minimal Example

```python
import random

from DeepRTS.python import Config
from DeepRTS.python import scenario

if __name__ == "__main__":
    random_play = True
    episodes = 100

    for i in range(episodes):
        env = scenario.GeneralAI_1v1(Config.Map.THIRTYONE)
        state = env.reset()
        done = False

        while not done:
            # Player 1 takes a random action.
            env.game.set_player(env.game.players[0])
            action = random.randrange(15)
            next_state, reward, done, _ = env.step(action)
            state = next_state

            if done:
                break

            # Player 2 takes a random action.
            env.game.set_player(env.game.players[1])
            action = random.randrange(15)
            next_state, reward, done, _ = env.step(action)
            state = next_state
```

In-Game Footage

10x10 - 2 Player - free-for-all

15x15 - 2 Player - free-for-all

21x21 - 2 Player - free-for-all

31x31 - 2 Player - free-for-all

31x31 - 4 Player - free-for-all

31x31 - 6 Player - free-for-all

Owner

  • Name: Centre for Artificial Intelligence Research (CAIR)
  • Login: cair
  • Kind: organization
  • Email: cair-internal@uia.no
  • Location: Grimstad, Norway

CAIR is a centre for research excellence on artificial intelligence at the University of Agder. We attack unsolved problems, seeking superintelligence.

GitHub Events

Total
  • Watch event: 30
  • Issue comment event: 1
  • Pull request event: 2
  • Fork event: 6
Last Year
  • Watch event: 30
  • Issue comment event: 1
  • Pull request event: 2
  • Fork event: 6

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 17
  • Total pull requests: 41
  • Average time to close issues: 3 months
  • Average time to close pull requests: 3 months
  • Total issue authors: 15
  • Total pull request authors: 6
  • Average comments per issue: 3.35
  • Average comments per pull request: 0.59
  • Merged pull requests: 12
  • Bot issues: 0
  • Bot pull requests: 26
Past Year
  • Issues: 1
  • Pull requests: 2
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 months
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • marcocspc (2)
  • KornbergFresnel (2)
  • skandermoalla (1)
  • khcf123 (1)
  • leestar (1)
  • perara (1)
  • JOST777 (1)
  • maswin (1)
  • A-Kotecha (1)
  • onursahin93 (1)
  • osoblanco (1)
  • Skyfrei (1)
  • SimpleConjugate (1)
  • yjcQAQ (1)
  • ayush1710 (1)
Pull Request Authors
  • dependabot[bot] (26)
  • perara (8)
  • maswin (2)
  • skandermoalla (2)
  • AndreasEike (2)
  • Yigit-Arisoy (1)
Top Labels
Issue Labels
help wanted (1)
Pull Request Labels
dependencies (26)

Dependencies

requirements.txt pypi
  • cython *
  • gym *
  • numpy *
  • pybind11 *
  • pygame ==2.0.0
setup.py pypi
  • numpy *
Dockerfile docker
  • ubuntu 18.04 build
.github/workflows/test_vcpkg_install.yaml actions
  • actions/checkout v2 composite
examples/requirements.txt pypi
  • aiohttp *
  • pandas *
  • plotly *
  • psutil *
  • pygame *
  • pygments *
  • ray *
  • requests *
  • setproctitle *
  • tensorboard *
  • tensorboardX *
  • tensorflow-gpu *
  • torch *
  • uvicorn *
pyproject.toml pypi