reinforcement-learning-for-scripts-of-tribute
This is the corresponding GitHub repository of our paper "Training a Reinforcement Learning Agent for Tales of Tribute".
https://github.com/adockhorn/reinforcement-learning-for-scripts-of-tribute
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (11.1%) to scientific vocabulary
Repository
This is the corresponding GitHub repository of our paper "Training a Reinforcement Learning Agent for Tales of Tribute".
Basic Info
- Host: GitHub
- Owner: ADockhorn
- License: MIT
- Language: C#
- Default Branch: main
- Size: 250 KB
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Reinforcement Learning for Scripts of Tribute (SoT)
This is the corresponding GitHub repository of our paper Training a Reinforcement Learning Agent for Tales of Tribute. We hereby provide our implementation of a reinforcement learning routine for the Tales of Tribute card game simulator. For simplicity, we combined our contributions with the content of the original Scripts of Tribute repository at the time of upload. For this to work, we needed to slightly modify the original framework (see the section on initial setup and modifications).
Reference to the Corresponding Paper
```bibtex
@inproceedings{Lashmet25,
  author    = {Sebastian Lashmet and Alexander Dockhorn},
  title     = {Training a Reinforcement Learning Agent for Tales of Tribute},
  booktitle = {2025 IEEE Conference on Games (CoG)},
  year      = {2025},
  pages     = {1-4}
}
```
Included Files
Our contributions can be found in Bots/ExternalLanguageBotsUtils/Python and include:
- **RLTraining/sotrlenvironment.py** maps Scripts of Tribute into a gymnasium RL environment. To do so, we open a subprocess that loads two agents, the first being rlbridge.py and the second an opponent player. In each step of the environment, the rlbridge sends the current state and the available actions to the sotrlenvironment via a socket connection. The data is provided to a reinforcement learning agent, here PPO, which decides on the action. Once an action has been chosen, it is sent to the rlbridge, which returns it to the Scripts of Tribute agent.
- **rlbridge.py** serves as a communication bridge between Scripts of Tribute and a reinforcement learning environment. It forwards data from the environment to an arbitrary agent, here a PPO reinforcement learning agent.
The main workflow can be described as follows:
- sotrlenvironment opens a subprocess of Scripts of Tribute that uses the rlbridge as a Python agent
- the rlbridge connects to the sotrlenvironment and sends the state and reward information
- the sotrlenvironment provides the state as an observation to any RL agent
- the RL agent decides on an action and returns it to the sotrlenvironment
- the sotrlenvironment sends the action to the rlbridge
- the rlbridge forwards the action to Tales of Tribute
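The loop above might be sketched as follows. The class name, port, message framing (one JSON object per step, a newline-delimited integer reply), and placeholder reward logic are illustrative assumptions, not the actual implementation in sotrlenvironment.py:

```python
import json
import socket


def parse_bridge_message(raw: str):
    """Parse one message from the bridge.

    The wire format is an assumption: a single JSON object per step,
    of the shape {"State": {...}, "Actions": ["...", ...]}.
    """
    msg = json.loads(raw)
    return msg["State"], msg["Actions"]


class SoTEnvSketch:
    """Gymnasium-style reset/step loop around the bridge socket.

    A real implementation would also launch GameRunner (with the bridge
    script as one of its agents) as a subprocess; here we only model the
    socket exchange.
    """

    def __init__(self, host="localhost", port=5000):  # port is an assumption
        self.address = (host, port)
        self.conn = None
        self.reader = None

    def reset(self):
        # Wait for the bridge to connect, then read the first message.
        server = socket.create_server(self.address)
        self.conn, _ = server.accept()
        self.reader = self.conn.makefile()
        state, actions = parse_bridge_message(self.reader.readline())
        return state, actions

    def step(self, action_index: int):
        # Return the chosen action index to the bridge, then read the next state.
        self.conn.sendall(f"{action_index}\n".encode())
        state, actions = parse_bridge_message(self.reader.readline())
        terminated = not actions   # no actions left -> game over (assumption)
        reward = 0.0               # placeholder; the real reward comes from the game outcome
        return state, actions, reward, terminated
```

An RL agent would then alternate between reading the observation returned by reset/step and feeding its chosen action index back into step.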
Further files relevant to our work:
- map_action_to_vector.py: maps the current action to a vector of characteristics that is multiplied with the learned action preference vector
- map_game_state_to_vector.py: maps the current game state to a vector to be processed by our RL agent
- evil_clone_bot: implementation of a simplified self-play strategy in which we keep an archive of previous agents to randomly play against
- rl_gg.py: another version of this simplified self-play strategy
- respective older versions of these scripts, documenting our work process
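To illustrate the preference-vector mechanism behind the action-mapping script, here is a minimal sketch; the feature names and the scoring helper are our own illustrative assumptions, not the paper's exact characteristics:

```python
def action_to_vector(action: dict) -> list[float]:
    """Map an action to a small vector of characteristics (illustrative features)."""
    return [
        float(action.get("gold_gained", 0)),
        float(action.get("prestige_gained", 0)),
        float(action.get("cards_drawn", 0)),
    ]


def choose_by_preference(actions: list[dict], preference: list[float]) -> int:
    """Return the index of the action whose characteristics best match the
    learned preference vector, scored by a simple dot product."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    scores = [dot(action_to_vector(a), preference) for a in actions]
    return max(range(len(scores)), key=scores.__getitem__)
```

Because each action is scored independently against a single learned vector, the mechanism stays independent of how many actions are currently available, which is the appeal of preference-based evaluation in large, variable action spaces.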
Note: some files may still include a "todo:" note. We added these for the public repository because the original code included personalized filenames; for each affected code section, we added a reference stating which files need to be linked.
Initial Setup of the Tales of Tribute framework and Modifications
- Download the Tales of Tribute repository and extract it to a folder, from now on called "root"
- Install Visual Studio 2022
- Install the .NET SDK (version 7, matching the net7.0 build output)
- Open the root folder and the contained TalesOfTribute.sln file
- Build the solution inside Visual Studio and test the GameRunner
- The project is built into several subfolders; the most relevant for us is "GameRunner/bin/Release/net7.0"
- Open a terminal and navigate to the "GameRunner/bin/Release/net7.0" folder
- From here, we can run the Python example bot using the following command: `./GameRunner "cmd:python ../../../../Bots/ExternalLanguageBotsUtils/Python/example-bot.py" "cmd:python ../../../../Bots/ExternalLanguageBotsUtils/Python/example-bot.py" -n 10 -t 1`
- The game will start and the bots will play against each other n times using t threads
- More threads may improve performance, but we would not recommend using them for the RL-based bot, to avoid conflicts
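A training environment can also launch the same command programmatically; this sketch only rebuilds the invocation from the steps above (the bot paths mirror the setup steps, while the helper function itself is our illustration):

```python
import subprocess

# Paths are relative to GameRunner/bin/Release/net7.0 (see the setup steps above).
RL_BOT = "cmd:python ../../../../Bots/ExternalLanguageBotsUtils/Python/rlbridge.py"
OPPONENT = "cmd:python ../../../../Bots/ExternalLanguageBotsUtils/Python/example-bot.py"


def build_gamerunner_command(n_games: int = 10, threads: int = 1) -> list[str]:
    """Assemble the GameRunner invocation; we keep a single thread for the
    RL-based bot to avoid conflicts, as noted above."""
    return ["./GameRunner", RL_BOT, OPPONENT, "-n", str(n_games), "-t", str(threads)]


# Launching would then be, e.g.:
# subprocess.Popen(build_gamerunner_command(), cwd="GameRunner/bin/Release/net7.0")
```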
In ExternalAIAdapter.cs, update the following method accordingly:

```c#
public override Move Play(GameState gameState, List<Move> possibleMoves, TimeSpan remainingTime)
{
    var obj = gameState.SerializeGameState();
    sw.WriteLine("{ \"State\":");
    sw.WriteLine(obj.ToString());
    sw.WriteLine(", \"Actions\": [");
    sw.WriteLine(string.Join(',', possibleMoves.Select(m => "\"" + m.ToString() + "\"").ToList()));
    sw.WriteLine("]}");
    sw.WriteLine(EOT);

    // The bot replies with the index of the chosen move.
    string botOutput = sr.ReadLine();
    return possibleMoves[int.Parse(botOutput)];
    // previously: return MapStringToMove(botOutput, gameState, possibleMoves);
}
```
Note: this breaks compatibility with example-bot.py, but it is necessary to make the framework work with the RL-based bots implemented later; we will fix the incompatibility in a moment.
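For reference, the Python side of this modified protocol might look roughly like the following; the EOT marker value and the first-action placeholder policy are assumptions inferred from the C# snippet above, not the actual rlbridge.py:

```python
import json
import sys

EOT = "EOT"  # assumed end-of-transmission marker written by the adapter


def read_message(stream) -> dict:
    """Accumulate lines until the EOT marker, then parse the JSON message
    ({"State": ..., "Actions": [...]}) assembled by the modified Play method."""
    lines = []
    for line in stream:
        if line.strip() == EOT:
            break
        lines.append(line)
    return json.loads("".join(lines))


def choose_action(actions: list) -> int:
    """Placeholder policy: always pick the first available action."""
    return 0


# A bot launched by GameRunner would loop over its standard input:
#   msg = read_message(sys.stdin)
#   print(choose_action(msg["Actions"]), flush=True)  # reply with a bare index
```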
Owner
- Name: Alexander Dockhorn
- Login: ADockhorn
- Kind: user
- Repositories: 32
- Profile: https://github.com/ADockhorn
Citation (CITATION.cff)
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: >-
  Training a Reinforcement Learning Agent for Tales of Tribute
message: 'If you use this software, please cite it as below.'
type: software
authors:
  - given-names: Sebastian
    family-names: Lashmet
  - given-names: Alexander
    family-names: Dockhorn
repository-code: >-
  https://github.com/ADockhorn/Reinforcement-Learning-for-Scripts-of-Tribute
abstract: >-
  Tales of Tribute is a strategic deck-building card game that presents
  unique challenges for artificial intelligence due to its complex action
  space, hidden information, and long-term planning requirements. In this
  work, we propose a reinforcement learning agent that learns to play
  Tales of Tribute without relying on handcrafted heuristics. Our approach
  introduces a generalizable game state representation and a scalable
  action evaluation mechanism based on preference vectors. We train our
  agent using Proximal Policy Optimization within the Scripts of Tribute
  framework and demonstrate competitive performance against established
  search-based agents, including Monte Carlo Tree Search. The results
  validate the viability of reinforcement learning in Tales of Tribute
  and highlight the potential of preference-based action evaluation in
  domains with large and variable action spaces. Further experiments will
  be required to test its viability in other card games.
license: MIT
version: '1.0'
date-released: '2025-07-09'
preferred-citation:
  type: conference-paper
  title: Training a Reinforcement Learning Agent for Tales of Tribute
  authors:
    - family-names: "Lashmet"
      given-names: "Sebastian"
    - family-names: "Dockhorn"
      given-names: "Alexander"
  collection-title: "2025 IEEE Conference on Games (CoG)"
  pages: 1-4
  year: 2025
GitHub Events
Total
- Push event: 5
- Create event: 2
Last Year
- Push event: 5
- Create event: 2
Dependencies
- ubuntu 22.04 build
- Microsoft.NET.Test.Sdk 17.3.2
- coverlet.collector 3.1.2
- xunit 2.4.2
- xunit.runner.visualstudio 2.4.5
- Newtonsoft.Json 13.0.2
- System.CommandLine 2.0.0-beta4.22272.1
- Microsoft.NET.Test.Sdk 17.3.2
- coverlet.collector 3.1.2
- xunit 2.4.2
- xunit.runner.visualstudio 2.4.5
- Microsoft.NET.Test.Sdk 17.1.0
- Moq 4.18.3
- coverlet.collector 3.1.2
- xunit 2.4.1
- xunit.runner.visualstudio 2.4.3