https://github.com/cair/deep-warehouse

A Simulator for complex logistic environments

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.3%) to scientific vocabulary

Keywords

ai artificial-intelligence cpp deep-learning deep-reinforcement-learning game machine-learning neural-networks per-arne python reinforcement-learning reinforcement-learning-environments tree-search
Last synced: 5 months ago

Repository

A Simulator for complex logistic environments

Basic Info
  • Host: GitHub
  • Owner: cair
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 437 KB
Statistics
  • Stars: 9
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
ai artificial-intelligence cpp deep-learning deep-reinforcement-learning game machine-learning neural-networks per-arne python reinforcement-learning reinforcement-learning-environments tree-search
Created about 7 years ago · Last pushed about 4 years ago
Metadata Files
Readme

docs/README.md

Deep Warehouse is a free-to-use software simulation of warehouse environments. The aim of this project is to provide a highly efficient simulator for training autonomous learning agents to master warehouse logistics.

Installation

Currently, the project must be installed by cloning the repository (`git clone git@github.com:cair/deep-warehouse.git`); no packaged release is available.
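A minimal sketch of getting the code onto disk. The clone URL comes from the README; the editable `pip install -e .` step is an assumption, since the repository does not document an install command:

```shell
# Clone over SSH (an HTTPS clone of the same repository works as well)
git clone git@github.com:cair/deep-warehouse.git
cd deep-warehouse

# Assumed: install in editable mode so that `deep_logistics` is importable
# from anywhere; the repository does not document an official install step.
pip install -e .
```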

Usage

```python
from deep_logistics import DeepLogistics
from deep_logistics import SpawnStrategies
from deep_logistics.agent import Agent, ManhattanAgent

if __name__ == "__main__":

    env = DeepLogistics(width=30,
                        height=30,
                        depth=3,
                        taxi_n=1,
                        ups=5000,
                        graphics_render=True,
                        delivery_locations=[
                            (5, 5),
                            (15, 15),
                            (20, 20),
                            (10, 10),
                            (5, 10)
                        ],
                        spawn_strategy=SpawnStrategies.RandomSpawnStrategy
                        )

    """Parameters"""
    EPISODES = 1000
    EPISODE_MAX_STEPS = 100

    """Add agents"""
    env.agents.add_agent(ManhattanAgent, n=20)

    for episode in range(EPISODES):
        env.reset()

        terminal = False
        steps = 0

        while terminal is False:
            env.update()
            env.render()

            terminal = env.is_terminal()
            steps += 1

            if terminal:
                print("Episode %s, Steps: %s" % (episode, steps))
                break

        """Add a new agent. (Harder)"""
        # env.agents.add_agent(ManhattanAgent)
```

How to cite this work

```bibtex
@InProceedings{10.1007/978-3-030-34885-4_3,
  author    = "Andersen, Per-Arne and Goodwin, Morten and Granmo, Ole-Christoffer",
  editor    = "Bramer, Max and Petridis, Miltos",
  title     = "Towards Model-Based Reinforcement Learning for Industry-Near Environments",
  booktitle = "Artificial Intelligence XXXVI",
  year      = "2019",
  publisher = "Springer International Publishing",
  address   = "Cham",
  pages     = "36--49",
  abstract  = "Deep reinforcement learning has over the past few years shown great potential in learning near-optimal control in complex simulated environments with little visible information. Rainbow (Q-Learning) and PPO (Policy Optimisation) have shown outstanding performance in a variety of tasks, including Atari 2600, MuJoCo, and Roboschool test suite. Although these algorithms are fundamentally different, both suffer from high variance, low sample efficiency, and hyperparameter sensitivity that, in practice, make these algorithms a no-go for critical operations in the industry.",
  isbn      = "978-3-030-34885-4"
}
```

Licence

Copyright 2022 Per-Arne Andersen

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Owner

  • Name: Centre for Artificial Intelligence Research (CAIR)
  • Login: cair
  • Kind: organization
  • Email: cair-internal@uia.no
  • Location: Grimstad, Norway

CAIR is a centre for research excellence on artificial intelligence at the University of Agder. We attack unsolved problems, seeking superintelligence.

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 118
  • Total Committers: 1
  • Avg Commits per committer: 118.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Per-Arne Andersen p****r@s****o 118
Committer Domains (Top 20 + Academic)
sysx.no: 1

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels