https://github.com/google-research/football

Science Score: 33.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
    1 of 24 committers (4.2%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.8%) to scientific vocabulary

Keywords

reinforcement-learning reinforcement-learning-environments

Keywords from Contributors

distributed deep-neural-networks
Last synced: 6 months ago

Repository

Check out the new game server:

Basic Info
  • Host: GitHub
  • Owner: google-research
  • License: apache-2.0
  • Language: Python
  • Default Branch: master
  • Homepage: https://research-football.dev
  • Size: 26.5 MB
Statistics
  • Stars: 3,490
  • Watchers: 92
  • Forks: 1,326
  • Open Issues: 80
  • Releases: 11
Topics
reinforcement-learning reinforcement-learning-environments
Created almost 7 years ago · Last pushed 10 months ago
Metadata Files
Readme Changelog Contributing License

README.md

Google Research Football

This repository contains an RL environment based on the open-source game Gameplay Football.
It was created by the Google Brain team for research purposes.

We'd like to thank Bastiaan Konings Schuiling, who authored and open-sourced the original version of this game.

Quick Start

In colab

Open our example Colab, which will let you start training your model in less than 2 minutes.

This method doesn't support rendering the game on screen; if you want to see the game running, please use one of the methods below.

Using Docker

This is the recommended way for Linux-based systems to avoid incompatible package versions. Instructions are available here.

On your computer

1. Install required packages

Linux

```shell
sudo apt-get install git cmake build-essential libgl1-mesa-dev libsdl2-dev \
  libsdl2-image-dev libsdl2-ttf-dev libsdl2-gfx-dev libboost-all-dev \
  libdirectfb-dev libst-dev mesa-utils xvfb x11vnc python3-pip

python3 -m pip install --upgrade pip setuptools psutil wheel
```

macOS

First install Homebrew. It should automatically install the Command Line Tools. Next, install the required packages:

```shell
brew install git python3 cmake sdl2 sdl2_image sdl2_ttf sdl2_gfx boost boost-python3

python3 -m pip install --upgrade pip setuptools psutil wheel
```

Windows

Install Git and Python 3. Update pip in the Command Line (here and for the next steps type `python` instead of `python3`):

```shell
python -m pip install --upgrade pip setuptools psutil wheel
```

2. Install GFootball

Option a. From PyPi package (recommended)

```shell
python3 -m pip install gfootball
```

Option b. Installing from sources using GitHub repository

(On Windows you have to install additional tools and set an environment variable, see Compiling Engine for detailed instructions.)

```shell
git clone https://github.com/google-research/football.git
cd football
```

Optionally, you can use a virtual environment:

```shell
python3 -m venv football-env
source football-env/bin/activate
```

Next, build the game engine and install dependencies:

```shell
python3 -m pip install .
```

This command can run for a couple of minutes, as it compiles the C++ environment in the background. If you face any problems, first check the Compiling Engine documentation and search GitHub issues.

3. Time to play!

```shell
python3 -m gfootball.play_game --action_set=full
```

Make sure to check out the keyboard mappings. To quit the game, press Ctrl+C in the terminal.

Training agents to play GRF

Run training

In order to run TF training, you need to install additional dependencies:

  • Update pip so that TensorFlow 1.15 is available: `python3 -m pip install --upgrade pip setuptools wheel`
  • TensorFlow: `python3 -m pip install tensorflow==1.15.*` or `python3 -m pip install tensorflow-gpu==1.15.*`, depending on whether you want the CPU or GPU version;
  • Sonnet and psutil: `python3 -m pip install dm-sonnet==1.* psutil`;
  • OpenAI Baselines: `python3 -m pip install git+https://github.com/openai/baselines.git@master`.

Then:

  • To run the example PPO experiment on the academy_empty_goal scenario, run `python3 -m gfootball.examples.run_ppo2 --level=academy_empty_goal_close`
  • To run on the academy_pass_and_shoot_with_keeper scenario, run `python3 -m gfootball.examples.run_ppo2 --level=academy_pass_and_shoot_with_keeper`

In order to train with replays being saved, run `python3 -m gfootball.examples.run_ppo2 --dump_full_episodes=True --render=True`

In order to reproduce PPO results from the paper, please refer to:

  • gfootball/examples/repro_checkpoint_easy.sh
  • gfootball/examples/repro_scoring_easy.sh

Playing the game

Please note that playing the game is implemented through an environment, so human-controlled players use the same interface as the agents. One important implication is that there is a single action per 100 ms reported to the environment, which might cause a lag effect when playing.
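Because human play goes through the same environment interface, an agent sees a standard Gym-style reset/step loop. The sketch below shows the shape of that loop with a stub environment so it is self-contained; with gfootball installed, you would instead create the real environment via `gfootball.env.create_environment(...)`. The stub itself and the 19-action figure are illustrative assumptions, not gfootball code.

```python
import random

class StubFootballEnv:
    """Stand-in with the same reset/step shape as a GFootball environment.

    A real environment returns pixel or feature observations and goal-based
    rewards; this stub just counts down a fixed number of steps.
    """

    NUM_ACTIONS = 19  # assumed size of the default discrete action set

    def __init__(self, episode_steps=10):
        self._episode_steps = episode_steps
        self._steps_left = episode_steps

    def reset(self):
        self._steps_left = self._episode_steps
        return {"steps_left": self._steps_left}  # placeholder observation

    def step(self, action):
        assert 0 <= action < self.NUM_ACTIONS
        self._steps_left -= 1
        obs = {"steps_left": self._steps_left}
        reward = 0.0  # the stub never scores
        done = self._steps_left == 0
        return obs, reward, done, {}

def run_episode(env):
    """Standard Gym-style loop: exactly one action per environment step."""
    env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = random.randrange(StubFootballEnv.NUM_ACTIONS)  # random agent
        _, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward
```

Since each `step` corresponds to one 100 ms slice of game time, an agent (or a human) can issue at most ten actions per second, which is the source of the lag effect mentioned above.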

Keyboard mappings

The game defines the following keyboard mapping (for the keyboard player type):

  • ARROW UP - run to the top.
  • ARROW DOWN - run to the bottom.
  • ARROW LEFT - run to the left.
  • ARROW RIGHT - run to the right.
  • S - short pass in the attack mode, pressure in the defense mode.
  • A - high pass in the attack mode, sliding in the defense mode.
  • D - shot in the attack mode, team pressure in the defense mode.
  • W - long pass in the attack mode, goalkeeper pressure in the defense mode.
  • Q - switch the active player in the defense mode.
  • C - dribble in the attack mode.
  • E - sprint.

Play vs built-in AI

Run `python3 -m gfootball.play_game --action_set=full`. By default, it starts the base scenario with the left player controlled by the keyboard. Different types of players are supported (gamepad, external bots, agents...). For possible options run `python3 -m gfootball.play_game -helpfull`.

Play vs pre-trained agent

In particular, one can play against an agent trained with the run_ppo2 script using the following command (note there is no action_set flag, as the PPO agent uses the default action set): `python3 -m gfootball.play_game --players "keyboard:left_players=1;ppo2_cnn:right_players=1,checkpoint=$YOUR_PATH"`

Trained checkpoints

We provide trained PPO checkpoints for the following scenarios:

In order to see the checkpoints playing, run `python3 -m gfootball.play_game --players "ppo2_cnn:left_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT" --level=$LEVEL`, where $CHECKPOINT is the path to the downloaded checkpoint. Please note that the checkpoints were trained with TensorFlow 1.15; using a different TensorFlow version may result in errors. The easiest way to run these checkpoints is through the provided Dockerfile_examples image. See running in docker for details (just override the default Docker definition with the -f Dockerfile_examples parameter).

In order to train against a checkpoint, you can pass the `extra_players` argument to the `create_environment` function. For example, `extra_players='ppo2_cnn:right_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT'`.
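The extra_players string follows a `<player_type>:<key>=<value>,...` shape. A small hypothetical helper (not part of the gfootball API, which simply takes the raw string) that assembles the PPO-checkpoint entry shown above:

```python
def ppo2_extra_player(checkpoint, policy="gfootball_impala_cnn", right_players=1):
    """Format an extra_players entry for a PPO2 CNN opponent.

    Hypothetical convenience helper: it only builds the string in the
    format used in the example above.
    """
    return (f"ppo2_cnn:right_players={right_players},"
            f"policy={policy},checkpoint={checkpoint}")
```

For instance, `ppo2_extra_player("/path/to/ckpt")` yields `ppo2_cnn:right_players=1,policy=gfootball_impala_cnn,checkpoint=/path/to/ckpt`, which can then be passed as the `extra_players` value.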

Owner

  • Name: Google Research
  • Login: google-research
  • Kind: organization
  • Location: Earth

GitHub Events

Total
  • Issues event: 16
  • Watch event: 168
  • Issue comment event: 25
  • Push event: 1
  • Pull request review event: 1
  • Fork event: 67
Last Year
  • Issues event: 16
  • Watch event: 167
  • Issue comment event: 25
  • Push event: 1
  • Pull request review event: 1
  • Fork event: 67

Committers

Last synced: 6 months ago

All Time
  • Total Commits: 78
  • Total Committers: 24
  • Avg Commits per committer: 3.25
  • Development Distribution Score (DDS): 0.59
Past Year
  • Commits: 1
  • Committers: 1
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Piotr Stanczyk s****k@g****m 32
Victor Khaustov 3****r@u****m 15
Google Research Football Team n****y@g****m 5
Anton Raichuk r****n@g****m 3
Konrad Staniszewski C****d@u****m 2
Pratik Raj R****1@g****m 2
Witalis Domitrz w****z@g****m 2
AntonRaichuk 4****k@u****m 1
Avik Jain a****k@a****e 1
Ayoub Kachkach A****h@g****m 1
DCorneal 4****l@u****m 1
Frank Budrowski 3****i@u****m 1
Joy Banerjee 3****8@u****m 1
Karol Kurach k****h@g****m 1
Sahil Rajput s****6@g****m 1
Sean McGuire m****5@s****u 1
Thomas Tumiel t****l@g****m 1
Yoanis Gil Delgado y****l@u****m 1
arfy slowy s****y@g****m 1
boscotsang b****3@g****m 1
dongr0510 5****0@u****m 1
keshavgbpecdelhi 5****i@u****m 1
pkozakowski 5****i@u****m 1
qstanczyk 5****k@u****m 1
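The DDS figures above are consistent with a common definition of the Development Distribution Score: one minus the top committer's share of commits (an assumption here; the indexer may compute it slightly differently). A quick check against the table:

```python
def development_distribution_score(commit_counts):
    """DDS = 1 - (commits by the busiest committer / total commits)."""
    total = sum(commit_counts)
    return 1 - max(commit_counts) / total

# All-time commit counts from the table above: 24 committers, 78 commits,
# top committer with 32 commits.
all_time = [32, 15, 5, 3, 2, 2, 2] + [1] * 17
print(round(development_distribution_score(all_time), 2))  # 0.59

# Past year: a single committer with one commit gives DDS = 0.0.
print(development_distribution_score([1]))  # 0.0
```

Under this definition, a DDS near 0 means development is concentrated in one person, while a value near 1 means commits are spread evenly across contributors.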
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 121
  • Total pull requests: 23
  • Average time to close issues: 6 months
  • Average time to close pull requests: 15 days
  • Total issue authors: 107
  • Total pull request authors: 12
  • Average comments per issue: 2.4
  • Average comments per pull request: 1.48
  • Merged pull requests: 11
  • Bot issues: 0
  • Bot pull requests: 1
Past Year
  • Issues: 12
  • Pull requests: 0
  • Average time to close issues: about 2 months
  • Average time to close pull requests: N/A
  • Issue authors: 12
  • Pull request authors: 0
  • Average comments per issue: 1.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • jecompton (3)
  • Chen001117 (3)
  • fighting-zz (3)
  • CarlossShi (2)
  • chrisyrniu (2)
  • fspinola (2)
  • artinisenaj2014 (2)
  • opocaj92 (2)
  • OrilinZ (2)
  • maximfedorchak1 (2)
  • kakuhin1984 (2)
  • tianyma (1)
  • huangshiyu13 (1)
  • SakuragiJump (1)
  • 48223050 (1)
Pull Request Authors
  • vi3itor (10)
  • dennisagb (2)
  • agurodriguez (2)
  • ymetz (1)
  • Letschi6 (1)
  • rahimrahimovv (1)
  • MatthiasPrall (1)
  • dependabot[bot] (1)
  • orkhanjamalov1991 (1)
  • slowy07 (1)
  • ikertejero (1)
  • cyfra (1)
Top Labels
Issue Labels
Pull Request Labels
cla: yes (9) cla: no (3) dependencies (1)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 753 last-month
  • Total docker downloads: 36
  • Total dependent packages: 1
    (may contain duplicates)
  • Total dependent repositories: 57
    (may contain duplicates)
  • Total versions: 30
  • Total maintainers: 2
pypi.org: gfootball

Google Research Football - RL environment based on open-source game Gameplay Football

  • Versions: 27
  • Dependent Packages: 1
  • Dependent Repositories: 57
  • Downloads: 753 Last month
  • Docker Downloads: 36
Rankings
Forks count: 1.2%
Stargazers count: 1.4%
Dependent repos count: 2.0%
Average: 3.8%
Docker downloads count: 3.9%
Dependent packages count: 4.7%
Downloads: 9.6%
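The "Average" line appears to be the arithmetic mean of the other ranking percentiles (an assumption about how the indexer computes it); the figure above can be reproduced as:

```python
# Ranking percentiles from the list above (lower = better ranked).
rankings = {
    "Forks count": 1.2,
    "Stargazers count": 1.4,
    "Dependent repos count": 2.0,
    "Docker downloads count": 3.9,
    "Dependent packages count": 4.7,
    "Downloads": 9.6,
}
average = sum(rankings.values()) / len(rankings)
print(round(average, 1))  # 3.8
```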
Maintainers (2)
Last synced: 6 months ago
proxy.golang.org: github.com/google-research/football
  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 6.9%
Last synced: 6 months ago