racecar_gym
NTUST 2023 Reinforcement Learning in Human-Computer Interaction course - competition #1. [A gym environment for a miniature racecar using the pybullet physics engine.]
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (11.2%) to scientific vocabulary
Repository
NTUST 2023 Reinforcement Learning in Human-Computer Interaction course - competition #1. [A gym environment for a miniature racecar using the pybullet physics engine.]
Basic Info
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 1
Metadata Files
README.md
Racecar Gym

A gym environment for a miniature, F1Tenth-like racecar using the bullet physics engine with pybullet.
Introduction
This repository is for NTUST 2023 Reinforcement Learning in Human-Computer Interaction course - competition #1.\ This environment is originally from here.
Prerequisites
You can install racecar_gym with the following commands:
```shell
git clone https://github.com/MingCongSu/racecar_gym.git
cd racecar_gym
pip install -e .
```
Download Maps (Tracks)
Here is how you can do this from the command line:
```shell
cd ./models/scenes

# For Linux (including WSL)
wget https://github.com/MingCongSu/racecar_gym/releases/download/training_tracks-v1/training_tracks.zip
unzip training_tracks.zip

# For Windows
wget -O training_tracks.zip https://github.com/MingCongSu/racecar_gym/releases/download/training_tracks-v1/training_tracks.zip
Expand-Archive -Path .\training_tracks.zip -DestinationPath ./
```
After installation, go back to the `racecar_gym` folder and run `test_env.py` to test the environment:
```shell
# go back to the racecar_gym folder
python test_env.py
```
There should be a `racecar_test_env.mp4` under the `videos` folder.
Environments
The observation and action space is a Dict holding the agents and their ids. The observation and action space for a single agent
is also a Dict, which is described in more detail below. In general, observations are obtained through sensors and commands
are executed by actuators. Vehicles can have multiple sensors and actuators. Those are described in the vehicle configuration
(e.g. differential racecar). Agents, which consist of a vehicle and an assigned task,
are specified in the scenario file (e.g. austria.yml). In this file, agents are described by the
sensors to use (note that they must be available in the vehicle configuration) and the corresponding task. Have a look at
tasks to see all available tasks.
Example:
```yaml
world:
  name: austria
agents:
  - id: A
    vehicle:
      name: racecar
      sensors: [lidar, pose, velocity, acceleration]
      actuators: [motor, steering]
      color: blue # default is blue, one of red, green, blue, yellow, magenta or random
    task:
      task_name: maximize_progress
      params: {laps: 1, time_limit: 120.0, terminate_on_collision: False}
```
This example specifies a scenario on the Austria track. One agent with id A is specified. The agent controls the differential drive racecar defined in differential racecar, identified by its name. The scenario tells the agent to use only the specified sensors (lidar, pose, velocity, acceleration). Optionally, one can also specify a color for the car. The default color is blue. Available colors are listed above.
The task which is assigned to this agent is also identified by name (implementations can be found in tasks). Task parameters are passed by the dict params.
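The semantics of a task like `maximize_progress` can be illustrated with a small, hypothetical reward function. This is a sketch of the idea only; the actual task implementations live in the package's tasks module, and the penalty value here is made up:

```python
def progress_reward(prev_progress: float, progress: float, collided: bool) -> float:
    """Reward the lap progress gained this step; penalize collisions.

    Illustrative only -- not the package's actual implementation.
    """
    reward = progress - prev_progress
    if collided:
        reward -= 1.0  # hypothetical collision penalty
    return reward

step_reward = progress_reward(0.10, 0.12, collided=False)
```

A `params` dict like `{laps: 1, time_limit: 120.0, terminate_on_collision: False}` would then configure when such a task ends rather than how the reward is computed.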
Observations
Observations are obtained by (possibly noisy) sensors. Parameters for the sensors as well as the level of noise, can be configured in the corresponding vehicle configuration (e.g. differential racecar). In the scenario specification, one can specify which of the available sensors should be actually used. The observation space is a dictionary where the names of the sensors are the keys which map to the actual measurements. Currently, five sensors are implemented: pose, velocity, acceleration, LiDAR and RGB Camera. Further, the observation space also includes the current simulation time.
|Key|Space|Defaults|Description|
|---|---|---|---|
|pose|Box(6,)| |Holds the position (x, y, z) and the orientation (roll, pitch, yaw) in that order.|
|velocity|Box(6,)| |Holds the x, y and z components of the translational and rotational velocity.|
|acceleration|Box(6,)| |Holds the x, y and z components of the translational and rotational acceleration.|
|lidar|Box(<scans>,)|scans: 1080|Lidar range scans.|
|rgb_camera|Box(<height>, <width>, 3)|height: 240, width: 320|RGB image of the front camera.|
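Given that layout, per-agent observations can be unpacked as plain NumPy arrays. A sketch using a hand-built dict shaped like the table above (the values are placeholders, not real sensor output):

```python
import numpy as np

# Hypothetical observation dict matching the table above.
obs = {
    'pose': np.zeros(6, dtype=np.float32),           # x, y, z, roll, pitch, yaw
    'velocity': np.zeros(6, dtype=np.float32),
    'acceleration': np.zeros(6, dtype=np.float32),
    'lidar': np.full(1080, 10.0, dtype=np.float32),  # 1080 range scans
    'time': 0.0,                                     # simulation time
}

position, orientation = obs['pose'][:3], obs['pose'][3:]
nearest_scan = float(obs['lidar'].min())  # distance of the closest lidar return
```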
Actions
The action space for a single agent is defined by the actuators of the vehicle.
By default, differential racecar defines two actuators: motor and steering.
The action space is therefore a dictionary with keys motor and steering.
Alternatively, the agent can control the target speed and steering, but this must be defined in the scenario specification.
In this case, the action space is a dictionary with keys speed and steering.
Note that the action space of the car is normalized between -1 and 1. The action space can include the following actuators:
| Key |Space| Description |
|----------|---|-----------------------------------------------------------------------------|
| motor |Box(low=-1, high=1, shape=(1,))| Throttle command. If negative, the car accelerates backwards. |
| speed |Box(low=-1, high=1, shape=(1,))| Normalized target speed. |
| steering |Box(low=-1, high=1, shape=(1,))| Normalized steering angle. |
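In practice, an action is a dict keyed by actuator name, with each value inside the normalized Box. A sketch assuming the default `motor`/`steering` actuators (the clipping helper is illustrative, not part of the package):

```python
import numpy as np

def make_action(motor: float, steering: float) -> dict:
    # Clip raw commands into the normalized [-1, 1] range the env expects.
    return {
        'motor': np.clip(np.array([motor], dtype=np.float32), -1.0, 1.0),
        'steering': np.clip(np.array([steering], dtype=np.float32), -1.0, 1.0),
    }

action = make_action(0.5, -1.3)  # steering gets clipped to -1.0
```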
State
In addition to observations obtained by sensors, the environment passes back the true state of each vehicle in each step (the state is returned as the info dictionary). The state is a dictionary, where the keys are the ids of all agents. Currently, the state looks like this:
|Key|Type|Description|
|---|---|---|
|wall_collision|bool|True if the vehicle collided with the wall.|
|opponent_collisions|List[str]|List of opponent ids which are involved in a collision with the agent.|
|pose|NDArray[6]|Ground truth pose of the vehicle (x, y, z, roll, pitch, yaw).|
|acceleration|NDArray[6]|Ground truth acceleration of the vehicle (x, y, z, roll, pitch, yaw).|
|velocity|NDArray[6]|Ground truth velocity of the vehicle (x, y, z, roll, pitch, yaw).|
|progress|float|Current progress in this lap. Interval: [0, 1]|
|time|float|Simulation time.|
|checkpoint|int|Tracks are subdivided into checkpoints to make sure agents are racing in clockwise direction. Starts at 0.|
|lap|int|Current lap.|
|rank|int|Current rank of the agent, based on lap and progress.|
|wrong_way|bool|True if the agent is driving in the wrong direction.|
|observations|Dict|The most recent observations of the agent.|
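Since the state comes back as the `info` dictionary keyed by agent id, a training loop can read it directly. A sketch with a hand-built state entry for agent `A` (field values are placeholders):

```python
# Hypothetical per-agent state, shaped like the table above.
info = {
    'A': {
        'wall_collision': False,
        'opponent_collisions': [],
        'progress': 0.42,
        'lap': 1,
        'wrong_way': False,
    }
}

def crashed(agent_state: dict) -> bool:
    # Example episode-end check: any wall or opponent collision counts.
    return agent_state['wall_collision'] or bool(agent_state['opponent_collisions'])

done_early = crashed(info['A'])
```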
Available APIs
Gym API
To use the Gym API you can either instantiate environments with the standard keys or by loading custom scenarios.
In either case, you have to load the gym_api module from this package:
```python
import gymnasium
import racecar_gym.envs.gym_api

# For predefined environments:
env = gymnasium.make(
    id='SingleAgentAustria-v0',
    render_mode='human'
)

# For custom scenarios:
env = gymnasium.make(
    id='SingleAgentRaceEnv-v0',
    scenario='path/to/scenario',
    render_mode='rgb_array_follow',  # optional: 'rgb_array_birdseye'
    render_options=dict(width=320, height=240, agent='A')  # optional
)

done = False
reset_options = dict(mode='grid')
obs, info = env.reset(options=reset_options)

while not done:
    action = env.action_space.sample()
    obs, rewards, terminated, truncated, states = env.step(action)
    done = terminated or truncated

env.close()
```
The predefined env-strings are of the form
Maps
Currently available maps are listed below. The maps are originally from the F1Tenth repositories.
| Image | Name |
|---|---|
|  | Austria |
|  | Circle |
Notes
Please note that this is work in progress, and interfaces might change. Also more detailed documentation and additional scenarios will follow.
Owner
- Login: MingCongSu
- Kind: user
- Repositories: 1
- Profile: https://github.com/MingCongSu
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Brunnbauer"
    given-names: "Axel"
    orcid: "https://orcid.org/0000-0002-8934-7355"
    affiliation: TU Wien
  - family-names: "Berducci"
    given-names: "Luigi"
    orcid: "https://orcid.org/0000-0002-3497-6007"
    affiliation: TU Wien
title: "racecar_gym"
version: 0.0.1
url: "https://github.com/axelbr/racecar_gym"
```