pyslam
pySLAM is a Python-based Visual SLAM pipeline that supports monocular, stereo, and RGB-D cameras. It offers a wide range of modern local and global features, multiple loop-closing strategies, a volumetric reconstruction pipeline, integration of depth prediction models, and semantic segmentation for enhanced scene understanding.
Basic Info
Statistics
- Stars: 2,626
- Watchers: 45
- Forks: 432
- Open Issues: 1
- Releases: 0

pySLAM v2.8.10
Author: Luigi Freda
pySLAM is a Python implementation of a Visual SLAM pipeline that supports monocular, stereo, and RGB-D cameras. It provides the following features in a single Python environment:
- A wide range of classical and modern local features with a convenient interface for their integration.
- Multiple loop closing methods, including descriptor aggregators such as visual Bag of Words (BoW, iBow), Vector of Locally Aggregated Descriptors (VLAD) and modern global descriptors (image-wise descriptors).
- A volumetric reconstruction pipeline that processes depth and color images using volumetric integration to produce dense reconstructions. It supports TSDF with voxel hashing and incremental Gaussian Splatting.
- Integration of depth prediction models within the SLAM pipeline. These include DepthPro, DepthAnythingV2, RAFT-Stereo, CREStereo, etc.
- A suite of segmentation models for semantic understanding of the scene, such as DeepLabv3, Segformer, and dense CLIP.
- Additional tools for VO (Visual Odometry) and SLAM, with built-in support for both g2o and GTSAM, along with custom Python bindings for features not available in the original libraries.
- Built-in support for over 10 dataset types.
pySLAM serves as a flexible baseline framework to experiment with VO/SLAM techniques, local features, descriptor aggregators, global descriptors, volumetric integration, depth prediction, and semantic mapping. It lets you explore, prototype, and develop VO/SLAM pipelines. pySLAM is a research framework and a work in progress. It is not optimized for real-time performance.
Enjoy it!
Table of contents
- pySLAM v2.8.10
- Table of contents
- Overview
- Main Scripts
- System overview
- Install
- Main requirements
- Ubuntu
- MacOS
- Docker
- How to install non-free OpenCV modules
- Troubleshooting and performance issues
- Usage
- Visual odometry
- Full SLAM
- Selecting a dataset and different configuration parameters
- Feature tracking
- Loop closing and relocalization
- Volumetric reconstruction
- Depth prediction
- Semantic mapping
- Saving and reloading
- Graph optimization engines
- SLAM GUI
- Monitor the logs of tracking, local mapping, loop closing and volumetric mapping simultaneously
- Evaluating SLAM
- Supported components and models
- Supported local features
- Supported matchers
- Supported global descriptors and local descriptor aggregation methods
- Supported depth prediction models
- Supported volumetric mapping methods
- Supported semantic segmentation methods
- Configuration
- Main configuration file
- Datasets
- Camera Settings
- References
- Credits
- License
- Contributing to pySLAM
- Roadmap
Overview
```bash
cpp           # Pybind11 C++ bindings
data          # Sample input/output data
docs          # Documentation files
pyslam        # Core Python package
    dense
    depth_estimation
    evaluation
    io
    local_features
    loop_closing
    semantics
    slam
    utilities
    viz
scripts       # Shell utility scripts
settings      # Dataset/configuration files
test          # Tests and usage examples
thirdparty    # External dependencies
```
Main Scripts
- `main_vo.py` combines the simplest VO ingredients without performing any image point triangulation or windowed bundle adjustment. At each step $k$, `main_vo.py` estimates the current camera pose $C_k$ with respect to the previous one $C_{k-1}$. The inter-frame pose estimation returns $[R_{k-1,k},t_{k-1,k}]$ with $\Vert t_{k-1,k} \Vert=1$. With this very basic approach, you need to use a ground truth in order to recover a correct inter-frame scale $s$ and estimate a valid trajectory by composing $C_k = C_{k-1} [R_{k-1,k}, s t_{k-1,k}]$. This script is a first step toward understanding the basics of inter-frame feature tracking and camera pose estimation.
- `main_slam.py` adds feature tracking along multiple frames, point triangulation, keyframe management, bundle adjustment, loop closing, dense mapping and depth inference in order to estimate the camera trajectory and build both a sparse and a dense map. It is a full SLAM pipeline and includes all the basic and advanced blocks needed to develop a real visual SLAM pipeline.
- `main_feature_matching.py` shows how to use the basic feature tracker capabilities (feature detector + feature descriptor + feature matcher) and lets you test the different available local features.
- `main_depth_prediction.py` shows how to use the available depth inference models to get depth estimates from input color images.
- `main_map_viewer.py` reloads a saved map and visualizes it. Further details on how to save a map are given here.
- `main_map_dense_reconstruction.py` reloads a saved map and uses a configured volumetric integrator to obtain a dense reconstruction (see here).
- `main_slam_evaluation.py` enables automated SLAM evaluation by executing `main_slam.py` across a collection of datasets and configuration presets (see here).
Other test/example scripts are provided in the test folder.
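As a hedged sketch (illustrative only, not the actual pySLAM code), the scale recovery and pose composition described above for `main_vo.py` can be written as:

```python
import numpy as np

def compose_pose(C_prev, R, t, s):
    """Compose C_k = C_{k-1} [R_{k-1,k}, s * t_{k-1,k}] using 4x4 homogeneous poses."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = s * t
    return C_prev @ T

def scale_from_ground_truth(gt_prev, gt_curr):
    """Recover the inter-frame scale s from ground-truth positions
    (the estimated translation has unit norm, so s is the true inter-frame distance)."""
    return float(np.linalg.norm(np.asarray(gt_curr) - np.asarray(gt_prev)))
```

Here `compose_pose` and `scale_from_ground_truth` are hypothetical helper names used only to mirror the formula above.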
System overview
This page provides a high-level system overview, including diagrams that illustrate the main workflow, key components, and class relationships or dependencies.
paper: "pySLAM: An Open-Source, Modular, and Extensible Framework for SLAM", Luigi Freda
You may find an updated version of the paper here.
presentation: "pySLAM and slamplay: Modular, Extensible SLAM Tools for Rapid Prototyping and Integration", Luigi Freda
RSS 2025 Workshop: Unifying Visual SLAM. The recorded talk is available here.
Install
First, clone this repo and its submodules by running
```bash
git clone --recursive https://github.com/luigifreda/pyslam.git
cd pyslam
```
Then, under Ubuntu and macOS you can simply run:
```bash
pixi shell         # If you want to use pixi, this is the first step that prepares the installation.
./install_all.sh
```
This install script creates a single Python environment `pyslam` that hosts all the supported components and models. If conda is available, it is used automatically; otherwise, the script installs and uses venv. An internet connection is required.
Refer to these links for further details about the specific install procedures that are supported.
- Ubuntu =>
- MacOs =>
- Windows =>
- Docker =>
Once everything is completed, you can jump to the Usage section.
Main requirements
- Python 3.11.9
- OpenCV >=4.10 (see below)
- PyTorch >=2.3.1
- Tensorflow >=2.13.1
- Kornia >=0.7.3
- Rerun
- You need CUDA in order to run Gaussian splatting and dust3r-based methods. Check that you have installed a suitable version of the CUDA toolkit by running:
```bash
./cuda_config.sh
```
The internal pySLAM libraries are imported by using a Config instance (from pyslam/config.py) in the main or test scripts. If you encounter any issues or performance problems, please refer to the TROUBLESHOOTING file for assistance.
Ubuntu
- With venv: Follow the instructions reported here. The procedure has been tested on Ubuntu 18.04, 20.04, 22.04 and 24.04.
- With conda: Run the procedure described in this other file.
- With pixi: Run `pixi shell` in the root folder of the repo before launching `./install_all.sh` (see this file for further details).
The procedures will create a new virtual environment pyslam.
MacOS
Follow the instructions in this file. The reported procedure was tested under Sequoia 15.1.1 and Xcode 16.1.
Docker
If you prefer docker or you have an OS that is not supported yet, you can use rosdocker:
- With its custom pyslam / pyslam_cuda docker files (follow the instructions here).
- With one of the suggested docker images (ubuntu*_cuda or ubuntu*), where you can clone, build and run pyslam.
How to install non-free OpenCV modules
The provided install scripts take care of installing a recent OpenCV version (>=4.10) with non-free modules enabled (see `scripts/install_opencv_python.sh`). To quickly verify your installed OpenCV version, run:
```bash
pixi shell             # If you use pixi, this activates the pyslam environment.
. pyenv-activate.sh    # Activate the pyslam python environment. Only needed once in a new terminal. Not needed with pixi.
./scripts/opencv_check.py
```
Otherwise, run the following commands:
```bash
python3 -c "import cv2; print(cv2.__version__)"                    # check opencv version
python3 -c "import cv2; detector = cv2.xfeatures2d.SURF_create()"  # check non-free OpenCV module support (no errors imply success)
```
Troubleshooting and performance issues
If you run into issues or errors during the installation process or at run-time, please check the docs/TROUBLESHOOTING.md file. Before submitting a new git issue, please read here.
Usage
Open a new terminal and start experimenting with the scripts. In each new terminal, you are supposed to start with this command:
```bash
pixi shell             # If you use pixi, this activates the pyslam environment.
. pyenv-activate.sh    # Activate the pyslam python environment. Only needed once in a new terminal. Not needed with pixi.
```
If you are using pixi, then just run `pixi shell` to activate the `pyslam` environment.
The file config.yaml serves as a single entry point to configure the system, together with the global configuration parameters contained in pyslam/config_parameters.py. Further information on how to configure pySLAM is provided here.
Visual odometry
The basic Visual Odometry (VO) can be run with the following commands:
```bash
pixi shell             # If you use pixi, this activates the pyslam environment.
. pyenv-activate.sh    # Activate the pyslam python environment. Only needed once in a new terminal. Not needed with pixi.
./main_vo.py
```
By default, the script processes a [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `data/videos`) by using its corresponding camera calibration file (available in the folder `settings`), and its ground truth (available in the same `data/videos` folder). If matplotlib windows are used, you can stop `main_vo.py` by clicking on one of them and pressing the key 'Q'. As explained above, this very *basic* script `main_vo.py` **strictly requires a ground truth**.
Now, with RGBD datasets, you can also test the **RGBD odometry** with the classes `VisualOdometryRgbd` or `VisualOdometryRgbdTensor` (ground truth is not required here).
Full SLAM
Similarly, you can test the full SLAM by running main_slam.py:
```bash
pixi shell # If you use pixi, this activates the pyslam environment.
. pyenv-activate.sh # Activate pyslam python environment. Only needed once in a new terminal. Not needed with pixi.
./main_slam.py
```
This will process the same default KITTI video (available in the folder data/videos) by using its corresponding camera calibration file (available in the folder settings). You can stop it by clicking on one of the opened windows and pressing the key 'Q' or closing the 3D pangolin GUI.
Selecting a dataset and different configuration parameters
The file config.yaml serves as a single entry point to configure the system, the target dataset and its global configuration parameters set in pyslam/config_parameters.py.
To process a different dataset with both VO and SLAM scripts, you need to update the file config.yaml:
* Select your dataset type in the section DATASET (see the section Datasets below for further details). This identifies a corresponding dataset section (e.g. KITTI_DATASET, TUM_DATASET, etc.).
* Select the sensor_type (mono, stereo, rgbd) in the chosen dataset section.
* Select the camera settings file in the dataset section (further details in the section Camera Settings below).
* Set the groundtruth_file accordingly. Further details in the section Datasets below (see also the files io/ground_truth.py and io/convert_groundtruth_to_simple.py).
You can use the section GLOBAL_PARAMETERS of the file config.yaml to override the global configuration parameters set in pyslam/config_parameters.py. This is particularly useful when running a SLAM evaluation.
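As an illustrative sketch of such a configuration (key names follow the sections described above; the dataset choice and file paths here are placeholders — check the shipped config.yaml for the authoritative layout):

```yaml
DATASET:
  type: TUM_DATASET              # selects the TUM_DATASET section below

TUM_DATASET:
  sensor_type: rgbd              # mono, stereo or rgbd
  settings: settings/TUM1.yaml   # camera settings file (placeholder name)
  groundtruth_file: auto         # or a path to the ground-truth file
```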
Feature tracking
If you just want to test the basic feature tracking capabilities (feature detector + feature descriptor + feature matcher) and get a taste of the different available local features, run:
```bash
pixi shell             # If you use pixi, this activates the pyslam environment.
. pyenv-activate.sh    # Activate the pyslam python environment. Only needed once in a new terminal. Not needed with pixi.
./main_feature_matching.py
```
In any of the above scripts, you can choose any detector/descriptor among ORB, SIFT, SURF, BRISK, AKAZE, SuperPoint, etc. (see the section Supported Local Features below for further information).
Some basic examples are available in the subfolder test/cv. In particular, as for feature detection/description, you may want to take a look at test/cv/test_feature_manager.py too.
Loop closing and relocalization
Many loop closing methods are available, combining different aggregation methods and global descriptors.
While running full SLAM, loop closing is enabled by default and can be disabled by setting kUseLoopClosing=False in pyslam/config_parameters.py. The different configuration options LoopDetectorConfigs can be found in pyslam/loop_closing/loop_detector_configs.py: code comments provide additional useful details.
One can start experimenting with loop closing methods by using the examples in test/loop_closing. The example test/loop_closing/test_loop_detector.py is the recommended entry point.
Vocabulary management
DBoW2, DBoW3, and VLAD require pre-trained vocabularies. ORB-based vocabularies are automatically downloaded into the data folder (see pyslam/loopclosing/loopdetector_configs.py).
To create a new vocabulary, follow these steps:
1. Generate an array of descriptors: Use the script test/loop_closing/test_gen_des_array_from_imgs.py to generate the array of descriptors that will be used to train the new vocabulary. Select your desired descriptor type via the tracker configuration.
2. DBoW vocabulary generation: Train your target DBoW vocabulary by using the script test/loop_closing/test_gen_dbow_voc_from_des_array.py.
3. VLAD vocabulary generation: Train your target VLAD "vocabulary" by using the script test/loop_closing/test_gen_vlad_voc_from_des_array.py.
Once you have trained the vocabulary, you can add it in pyslam/loop_closing/loop_detector_vocabulary.py and correspondingly create a new loop detector configuration in pyslam/loop_closing/loop_detector_configs.py that uses it.
Vocabulary-free loop closing
Most methods do not require pre-trained vocabularies. Specifically:
- iBoW and OBindex2: These methods incrementally build bags of binary words and, if needed, convert (front-end) non-binary descriptors into binary ones.
- Others: Methods like HDC_DELF, SAD, AlexNet, NetVLAD, CosPlace, EigenPlaces, and Megaloc directly extract their specific global descriptors and process them using dedicated aggregators, independently from the used front-end descriptors.
As mentioned above, only DBoW2, DBoW3, and VLAD require pre-trained vocabularies.
Verify your loop detection configuration and vocabulary compatibility
Loop detection method based on a pre-trained vocabulary
When selecting a loop detection method based on a pre-trained vocabulary (such as DBoW2, DBoW3, and VLAD), ensure the following:
1. The back-end and the front-end are using the same descriptor type (this is also automatically checked for consistency), or their descriptor managers are independent (see further details in the configuration options LoopDetectorConfigs available in pyslam/loop_closing/loop_detector_configs.py).
2. A corresponding pre-trained vocabulary is available. For more details, refer to the vocabulary management section.
Missing vocabulary for the selected front-end descriptor type
If you lack a compatible vocabulary for the selected front-end descriptor type, you can follow one of these options:
1. Create and load the vocabulary (refer to the vocabulary management section).
2. Choose an *_INDEPENDENT loop detector method, which works with an independent local_feature_manager.
3. Select a vocabulary-free loop closing method.
See the file pyslam/loop_closing/loop_detector_configs.py for further details.
Volumetric reconstruction
Dense reconstruction while running SLAM
The SLAM back-end hosts a volumetric reconstruction pipeline. It is disabled by default. You can enable it by setting kUseVolumetricIntegration=True and selecting your preferred method kVolumetricIntegrationType in pyslam/config_parameters.py. At present, two methods are available: TSDF and GAUSSIAN_SPLATTING (see pyslam/dense/volumetric_integrator_factory.py). Note that you need CUDA in order to run the GAUSSIAN_SPLATTING method.
At present, the volumetric reconstruction pipeline works with:
- RGBD datasets
- A depth estimator used:
  * in the back-end with STEREO datasets (you can't use depth prediction in the back-end with MONOCULAR datasets, further details here);
  * in the front-end (to emulate an RGBD sensor), so that a depth prediction/estimation becomes available for each processed keyframe.
To obtain a mesh as output, set kVolumetricIntegrationExtractMesh=True in pyslam/config_parameters.py.
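The TSDF method relies on the classic weighted running-average voxel update. As a minimal illustrative sketch of that update rule (not the actual pySLAM implementation; function name and defaults are assumptions):

```python
import numpy as np

def tsdf_update(tsdf, weight, sdf_obs, trunc=0.04, max_weight=100.0):
    """Per-voxel TSDF update: truncate the observed signed distance, then fuse it
    into the stored value as a weighted running average over all observations."""
    d = float(np.clip(sdf_obs / trunc, -1.0, 1.0))   # truncated, normalized SDF observation
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)  # running average
    new_weight = min(weight + 1.0, max_weight)       # cap the weight to stay adaptive
    return new_tsdf, new_weight
```

Real implementations apply this update only to voxel blocks near observed surfaces (voxel hashing), which is what makes the approach scalable.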
Reload a saved sparse map and perform dense reconstruction
Use the script main_map_dense_reconstruction.py to reload a saved sparse map and perform dense reconstruction by using its posed keyframes as input. You can select your preferred dense reconstruction method directly in the script.
- To check what the volumetric integrator is doing, run in another shell (from the repository root folder): `tail -f logs/volumetric_integrator.log`
- To save the obtained dense and sparse maps, press the `Save` button on the GUI.
Reload and check your dense reconstruction
You can check the output pointcloud/mesh by using CloudCompare.
In the case of a saved Gaussian splatting model, you can visualize it by:
1. Using the SuperSplat editor (drag and drop the saved Gaussian splatting .ply pointcloud into the editor interface).
2. Getting into the folder test/gaussian_splatting and running:
```bash
python test_gsm.py --load <gs_checkpoint_path>
```
The directory <gs_checkpoint_path> is expected to have the following structure:
```bash
gs_checkpoint_path
    pointcloud         # folder containing different subfolders, each one with a saved .ply encoding the Gaussian splatting model at a specific iteration/checkpoint
    last_camera.json
    config.yml
```
Controlling the spatial distribution of keyframe FOV centers
If you are targeting volumetric reconstruction while running SLAM, you can enable a keyframe generation policy designed to manage the spatial distribution of keyframe field-of-view (FOV) centers. The FOV center of a camera is defined as the backprojection of its image center, computed using the median depth of the frame. With this policy, a new keyframe is generated only if its FOV center lies beyond a predefined distance from the nearest existing keyframe's FOV center. You can enable this policy by setting the following parameters in the YAML settings file:
```yaml
KeyFrame.useFovCentersBasedGeneration: 1   # compute 3D FOV centers of camera frames by using median depth and use their distances to control keyframe generation
KeyFrame.maxFovCentersDistance: 0.2        # max distance between FOV centers in order to generate a new keyframe
```
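A minimal sketch of this policy (illustrative only; the function names and pinhole camera model are assumptions, not the pySLAM API):

```python
import numpy as np

def fov_center(K, T_wc, median_depth, img_w, img_h):
    """Backproject the image center at the median depth; return it in world coordinates.
    K: 3x3 intrinsics, T_wc: 4x4 camera-to-world pose."""
    cx, cy = img_w / 2.0, img_h / 2.0
    p_cam = median_depth * (np.linalg.inv(K) @ np.array([cx, cy, 1.0]))
    return (T_wc @ np.append(p_cam, 1.0))[:3]

def accept_new_keyframe(candidate_center, keyframe_centers, max_dist=0.2):
    """Accept only if the candidate FOV center is farther than max_dist
    from every existing keyframe's FOV center."""
    if not keyframe_centers:
        return True
    d = min(np.linalg.norm(candidate_center - c) for c in keyframe_centers)
    return d > max_dist
```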
Depth prediction
The available depth prediction models can be utilized both in the SLAM back-end and front-end.
- Back-end: Depth prediction can be enabled in the volumetric reconstruction pipeline by setting the parameter kVolumetricIntegrationUseDepthEstimator=True and selecting your preferred kVolumetricIntegrationDepthEstimatorType in pyslam/config_parameters.py.
- Front-end: Depth prediction can be enabled in the front-end by setting the parameter kUseDepthEstimatorInFrontEnd in pyslam/config_parameters.py. This feature estimates depth images from input color images to emulate an RGBD camera. Please note that this functionality is still experimental [WIP].
Notes:
- In the case of monocular SLAM, do NOT use depth prediction in the back-end volumetric integration: the SLAM (fake) scale will conflict with the absolute metric scale of depth predictions. With monocular datasets, you can enable depth prediction to run in the front-end (to emulate an RGBD sensor).
- Depth inference may be very slow (for instance, with DepthPro it takes ~1s per image on a typical machine). Therefore, the resulting volumetric reconstruction pipeline may be very slow.
Refer to the file depth_estimation/depth_estimator_factory.py for further details. Both stereo and monocular prediction approaches are supported. You can test depth prediction/estimation by using the script main_depth_prediction.py.
Semantic mapping
The semantic mapping pipeline can be enabled by setting the parameter kDoSemanticMapping=True in pyslam/config_parameters.py. The best way of configuring the semantic mapping module used is to modify it in pyslam/semantics/semantic_mapping_configs.py.
Different semantic mapping methods are available (see here for further details). Currently, we support semantic mapping using dense semantic segmentation.
- DEEPLABV3: from torchvision, pre-trained on COCO/VOC.
- SEGFORMER: from transformers, pre-trained on Cityscapes or ADE20k.
- CLIP: from f3rm package for open-vocabulary support.
Semantic features are assigned to keypoints on the image and fused into map points. The semantic features can be:
- Labels: categorical labels as numbers.
- Probability vectors: probability vectors for each class.
- Feature vectors: feature vectors obtained from an encoder. This is generally used for open-vocabulary mapping.
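A minimal sketch of fusing per-keypoint probability vectors into a map point (illustrative only; the class and method names are hypothetical, not the pySLAM API):

```python
import numpy as np

class MapPointSemantics:
    """Fuses class-probability vectors from multiple observations of a map point
    by keeping a running average; the semantic label is the argmax of the fused vector."""

    def __init__(self, num_classes):
        self.prob = np.zeros(num_classes)
        self.num_obs = 0

    def fuse(self, prob_vector):
        # incremental mean over all observations seen so far
        self.prob = (self.prob * self.num_obs + np.asarray(prob_vector)) / (self.num_obs + 1)
        self.num_obs += 1

    def label(self):
        return int(np.argmax(self.prob))
```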
The simplest way to test the available segmentation models is to run: test/semantics/test_semantic_segmentation.py.
Saving and reloading
Save a map
When you run the script main_slam.py (main_map_dense_reconstruction.py):
- You can save the current map state by pressing the button Save on the GUI. This saves the current map along with front-end, and backend configurations into the default folder results/slam_state (results/slam_state_dense_reconstruction).
- To change the default saving path, open config.yaml and update the target folder_path in the section:
```yaml
SYSTEM_STATE:
  folder_path: results/slam_state   # default folder path (relative to repository root) where the system state is saved or reloaded
```
Reload a saved map and relocalize in it
A saved map can be loaded and visualized in the GUI by running:
```bash
. pyenv-activate.sh    # Activate the pyslam python virtual environment. This is only needed once in a new terminal.
./main_map_viewer.py   # Use the --path option to change the input path
```
To enable map reloading and relocalization when running main_slam.py, open config.yaml and set:
```yaml
SYSTEM_STATE:
  load_state: True                  # Flag to enable SLAM state reloading (map state + loop closing state)
  folder_path: results/slam_state   # Default folder path (relative to repository root) where the system state is saved or reloaded
```
Note that pressing the Save button saves the current map, front-end, and backend configurations. Reloading a saved map replaces the current system configurations to ensure descriptor compatibility.
Trajectory saving
Estimated trajectories can be saved in three formats: TUM (The Open Mapping format), KITTI (KITTI Odometry format), and EuRoC (EuRoC MAV format). pySLAM saves two types of trajectory estimates:
- Online: In online trajectories, each pose estimate depends only on past poses. A pose estimate is saved at the end of each front-end iteration for the current frame.
- Final: In final trajectories, each pose estimate depends on both past and future poses. A pose estimate is refined multiple times by LBA windows that include it, as well as by PGO and GBA during loop closures.
To enable trajectory saving, open config.yaml, search for the section SAVE_TRAJECTORY, set save_trajectory: True, select your format_type (tum, kitti, euroc), and set the output filename. For instance, for a KITTI-format output:
```yaml
SAVE_TRAJECTORY:
  save_trajectory: True
  format_type: kitti                # Supported formats: `tum`, `kitti`, `euroc`
  output_folder: results/metrics    # Relative to pyslam root folder
  basename: trajectory              # Basename of the trajectory saving output
```
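For reference, the TUM trajectory format stores one pose per line as `timestamp tx ty tz qx qy qz qw` (translation plus orientation quaternion). A minimal sketch of formatting such a line (illustrative only, not the pySLAM saving code):

```python
def tum_line(timestamp, t, q):
    """Format one pose in the TUM trajectory format:
    timestamp, translation (tx, ty, tz), quaternion (qx, qy, qz, qw)."""
    tx, ty, tz = t
    qx, qy, qz, qw = q
    return (f"{timestamp:.6f} {tx:.6f} {ty:.6f} {tz:.6f} "
            f"{qx:.6f} {qy:.6f} {qz:.6f} {qw:.6f}")
```

Files in this format can be evaluated with standard trajectory tools (e.g. the TUM RGB-D benchmark scripts).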
Graph optimization engines
Currently, pySLAM supports both g2o and gtsam for graph optimization, with g2o set as the default engine. You can enable gtsam by setting to True the following parameters in pyslam/config_parameters.py:
```python
# Optimization engine
kOptimizationFrontEndUseGtsam = True
kOptimizationBundleAdjustUseGtsam = True
kOptimizationLoopClosingUseGtsam = True
```
Additionally, the gtsam_factors package provides custom Python bindings for features not available in the original gtsam framework. See here for further details.
SLAM GUI
Some quick information about the non-trivial GUI buttons of main_slam.py:
- Step: Enter in the Step by step mode. Press the button Step a first time to pause. Then, press it again to make the pipeline process a single new frame.
- Save: Save the map into the file map.json. You can visualize it later by using the script main_map_viewer.py (as explained above).
- Reset: Reset SLAM system.
- Draw Ground Truth: If a ground truth dataset (e.g., KITTI, TUM, EUROC, or REPLICA) is loaded, you can visualize it by pressing this button. The ground truth trajectory will be displayed in 3D and will be progressively aligned with the estimated trajectory, updating approximately every 10-30 frames. As more frames are processed, the alignment between the ground truth and estimated trajectory becomes more accurate. After about 20 frames, if the button is pressed, a window will appear showing the Cartesian alignment errors along the main axes (i.e., $e_x$, $e_y$, $e_z$) and the history of the total RMSE between the ground truth and the aligned estimated trajectories.
Monitor the logs of tracking, local mapping, loop closing and volumetric mapping simultaneously
The logs generated by the modules local_mapping.py, loop_closing.py, loop_detecting_process.py, global_bundle_adjustments.py, and volumetric_integrator_<X>.py are collected in the files local_mapping.log, loop_closing.log, loop_detecting.log, gba.log, and volumetric_integrator.log, respectively. These log files are all stored in the folder logs. At runtime, for debugging purposes, you can individually monitor any of the log files by running:
```bash
tail -f logs/<log file name>
```
Otherwise, to check all logs at the same time, run this tmux-based script:
```bash
./scripts/launch_tmux_logs.sh
```
To launch SLAM and check all logs, run:
```bash
./scripts/launch_tmux_slam.sh
```
Press CTRL+A and then CTRL+Q to exit the tmux environment.
Evaluating SLAM
Run a SLAM evaluation
The main_slam_evaluation.py script enables automated SLAM evaluation by executing main_slam.py across a collection of datasets and configuration presets. The main input to the script is an evaluation configuration file (e.g., evaluation/configs/evaluation.json) that specifies which datasets and presets to use. For convenience, sample configurations for the TUM, EUROC, and KITTI datasets are already provided in the evaluation/configs/ directory.
For each evaluation run, results are stored in a dedicated subfolder within the results directory, containing all the computed metrics. These metrics are then processed and compared. The final output is a report, available in PDF, LaTeX, and HTML formats, that includes comparison tables summarizing the Absolute Trajectory Error (ATE), the maximum deviation from the ground truth trajectory and other metrics.
You can find some obtained evaluation results here.
pySLAM performances and comparative evaluations
For a comparative evaluation of the "online" trajectory estimated by pySLAM versus the "final" trajectory estimated by ORB-SLAM3, check out this nice notebook. For more details about "online" and "final" trajectories, refer to this section.
Note: Unlike ORB-SLAM3, which only saves the final pose estimates (recorded after the entire dataset has been processed), pySLAM saves both online and final pose estimates. For details on how to save trajectories in pySLAM, refer to this section.
When you click the Draw Ground Truth button in the GUI (see here), you can visualize the Absolute Trajectory Error (ATE or RMSE) history and evaluate both online and final errors up to the current time.
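Once the estimated trajectory has been associated frame-by-frame with the ground truth and aligned, the ATE RMSE reduces to a root-mean-square over the per-frame position errors. A minimal sketch (illustrative only, not the pySLAM evaluation code; alignment is assumed to have been done already):

```python
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """ATE RMSE between associated, aligned Nx3 position arrays:
    sqrt of the mean squared per-frame Euclidean error."""
    err = np.asarray(gt_xyz, dtype=float) - np.asarray(est_xyz, dtype=float)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```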
Supported components and models
Supported local features
At present time, the following feature detectors are supported:
* FAST
* Good features to track
* ORB
* ORB2 (improvements of ORB-SLAM2 to ORB detector)
* SIFT
* SURF
* KAZE
* AKAZE
* BRISK
* AGAST
* MSER
* StarDetector/CenSurE
* Harris-Laplace
* SuperPoint
* D2-Net
* DELF
* Contextdesc
* LFNet
* R2D2
* Key.Net
* DISK
* ALIKED
* Xfeat
* KeyNetAffNetHardNet (KeyNet detector + AffNet + HardNet descriptor).
The following feature descriptors are supported:
* ORB
* SIFT
* ROOT SIFT
* SURF
* AKAZE
* BRISK
* FREAK
* SuperPoint
* Tfeat
* BOOST_DESC
* DAISY
* LATCH
* LUCID
* VGG
* Hardnet
* GeoDesc
* SOSNet
* L2Net
* Log-polar descriptor
* D2-Net
* DELF
* Contextdesc
* LFNet
* R2D2
* BEBLID
* DISK
* ALIKED
* Xfeat
* KeyNetAffNetHardNet (KeyNet detector + AffNet + HardNet descriptor).
For more information, refer to the file pyslam/local_features/feature_types.py. Some of the local features consist of a joint detector-descriptor. You can start playing with the supported local features by taking a look at test/cv/test_feature_manager.py and main_feature_matching.py.
In both the scripts main_vo.py and main_slam.py, you can create your preferred detector-descriptor configuration and feed it to the function feature_tracker_factory(). Some ready-to-use configurations are already available in the file pyslam/local_features/feature_tracker_configs.py.
The function feature_tracker_factory() can be found in the file pyslam/local_features/feature_tracker.py. Take a look at the file pyslam/local_features/feature_manager.py for further details.
N.B.: You just need a single python environment to be able to work with all the supported local features!
Supported matchers
See the file local_features/feature_matcher.py for further details.
Supported global descriptors and local descriptor aggregation methods
Local descriptor aggregation methods
- Bag of Words (BoW): DBoW2, DBoW3. [paper]
- Vector of Locally Aggregated Descriptors: VLAD. [paper]
- Incremental Bags of Binary Words (iBoW) via Online Binary Image Index: iBoW, OBIndex2. [paper]
- Hyperdimensional Computing: HDC. [paper]
NOTE: iBoW and OBIndex2 incrementally build a binary image index and do not need a prebuilt vocabulary. In the implemented classes, when needed, the input non-binary local descriptors are transparently transformed into binary descriptors.
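For intuition, VLAD aggregation sums the residuals of local descriptors with respect to their nearest vocabulary centroid and L2-normalizes the concatenated result. A minimal sketch (illustrative only, not the pySLAM implementation):

```python
import numpy as np

def vlad(descriptors, centroids):
    """Compute a VLAD vector: for each centroid, sum the residuals of the local
    descriptors assigned to it, concatenate the sums, and L2-normalize."""
    D = np.asarray(descriptors, dtype=float)   # (N, d) local descriptors
    C = np.asarray(centroids, dtype=float)     # (K, d) vocabulary centroids
    K, d = C.shape
    # hard-assign each descriptor to its nearest centroid
    assign = np.argmin(((D[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
    v = np.zeros((K, d))
    for i, k in enumerate(assign):
        v[k] += D[i] - C[k]                    # accumulate residuals per centroid
    v = v.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```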
Global descriptors
Global descriptors are also referred to as holistic descriptors.
Different loop closing methods are available. These combine the above aggregation methods and global descriptors. See the file pyslam/loop_closing/loop_detector_configs.py for further details.
Supported depth prediction models
Both monocular and stereo depth prediction models are available. The SGBM algorithm has been included as a classic reference approach.
- SGBM: Depth SGBM from OpenCV (Stereo, classic approach)
- Depth-Pro (Monocular)
- DepthAnythingV2 (Monocular)
- RAFT-Stereo (Stereo)
- CREStereo (Stereo)
- MASt3R (Stereo/Monocular)
- MV-DUSt3R (Stereo/Monocular)
Supported volumetric mapping methods
- TSDF with voxel block grid (parallel spatial hashing)
- Incremental 3D Gaussian Splatting. See here and MonoGS for a description of its backend.
Supported semantic segmentation methods
- DeepLabv3: from torchvision, pre-trained on COCO/VOC.
- Segformer: from transformers, pre-trained on Cityscapes or ADE20k.
- CLIP: from the f3rm package, for open-vocabulary support.
Configuration
Main configuration file
Refer to this section for how to update the main configuration file config.yaml and affect the configuration parameters in pyslam/config_parameters.py.
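As an illustrative sketch, dataset selection in config.yaml has roughly the following shape. The key names here are partly assumptions inferred from the section names used throughout this document (`KITTI_DATASET`, `settings:`, `base_path`, `name`); check the shipped config.yaml for the exact schema.

```yaml
# Illustrative config.yaml fragment (key names partly assumed)
DATASET:
  type: KITTI_DATASET        # one of the dataset types listed below

KITTI_DATASET:
  base_path: /path/to/kitti
  name: '00'
  settings: settings/KITTI00-02.yaml   # calibration settings file
```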
Datasets
The following datasets are supported:
Dataset | type in config.yaml
--- | ---
KITTI odometry data set (grayscale, 22 GB) | type: KITTI_DATASET
TUM dataset | type: TUM_DATASET
ICL-NUIM dataset | type: ICL_NUIM_DATASET
EUROC dataset | type: EUROC_DATASET
REPLICA dataset | type: REPLICA_DATASET
TARTANAIR dataset | type: TARTANAIR_DATASET
ScanNet dataset | type: SCANNET_DATASET
ROS1 bags | type: ROS1BAG_DATASET
ROS2 bags | type: ROS2BAG_DATASET
Video file | type: VIDEO_DATASET
Folder of images | type: FOLDER_DATASET
Use the download scripts available in the folder scripts to download some of the datasets listed above.
KITTI Datasets
pySLAM expects the following structure in the KITTI path folder (specified in the section KITTI_DATASET of the file config.yaml):
```
.
├── sequences
│   ├── 00
│   ├── ...
│   └── 21
└── poses
    ├── 00.txt
    ├── ...
    └── 10.txt
```
1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php and prepare the KITTI folder as specified above.
2. Select the corresponding calibration settings file (section `KITTI_DATASET: settings:` in the file `config.yaml`).
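Before pointing config.yaml at the folder, a quick sanity check of the layout above can save a confusing startup failure. This helper is illustrative and not part of pySLAM:

```python
# Illustrative helper (not part of pySLAM): verify that a KITTI folder
# follows the sequences/ + poses/ layout shown above.
from pathlib import Path

def check_kitti_layout(base_path, sequence="00"):
    base = Path(base_path)
    has_images = (base / "sequences" / sequence).is_dir()
    has_poses = (base / "poses" / f"{sequence}.txt").is_file()
    return has_images and has_poses
```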
TUM Datasets
pySLAM code expects a file associations.txt in each TUM dataset folder (specified in the section TUM_DATASET: of the file config.yaml).
1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.
2. Associate RGB images and depth images using the python script associate.py. You can generate your `associations.txt` file by executing:
   ```bash
   python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt # pay attention to the order!
   ```
3. Select the corresponding calibration settings file (section `TUM_DATASET: settings:` in the file `config.yaml`).
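The core of what associate.py does can be sketched in a few lines: pair RGB and depth entries whose timestamps are closest, within a tolerance. This is a simplified illustration; the real script from the TUM benchmark also supports a configurable time offset and reads the `rgb.txt`/`depth.txt` files.

```python
# Simplified sketch of timestamp association (the idea behind associate.py):
# greedily pair each rgb timestamp with the nearest unused depth timestamp,
# accepting the pair only if the difference is within max_difference seconds.
def associate(rgb_stamps, depth_stamps, max_difference=0.02):
    pairs = []
    depth = list(depth_stamps)
    for t in rgb_stamps:
        best = min(depth, key=lambda d: abs(d - t), default=None)
        if best is not None and abs(best - t) <= max_difference:
            pairs.append((t, best))
            depth.remove(best)  # each depth frame is matched at most once
    return pairs
```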
ICL-NUIM Datasets
Follow the same instructions provided for the TUM datasets.
EuRoC Datasets
1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets (check this direct link).
2. Use the script `io/generate_euroc_groundtruths_as_tum.sh` to generate the TUM-like groundtruth files `path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'` that are required by the `EurocGroundTruth` class.
3. Select the corresponding calibration settings file (section `EUROC_DATASET: settings:` in the file `config.yaml`).
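Spelled out as code, the groundtruth path expression above resolves as follows (the helper name is made up for illustration; only the path expression itself comes from the text above):

```python
# Illustrative: build the TUM-like groundtruth path that the
# EurocGroundTruth class expects, exactly as in the expression above.
def euroc_gt_path(path, name):
    return path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'
```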
Replica Datasets
1. Download the zip file containing all the sequences by running:
   ```bash
   wget https://cvg-data.inf.ethz.ch/nice-slam/data/Replica.zip
   ```
2. Uncompress it and deploy the files as you wish.
3. Select the corresponding calibration settings file (section `REPLICA_DATASET: settings:` in the file `config.yaml`).
Tartanair Datasets
1. Download the datasets from https://theairlab.org/tartanair-dataset/
2. Uncompress them and deploy the files as you wish.
3. Select the corresponding calibration settings file (section `TARTANAIR_DATASET: settings:` in the file `config.yaml`).
ScanNet Datasets
1. Download the datasets by following the instructions at http://www.scan-net.org/. You will need to request the dataset from the authors.
2. There are two versions you can download:
   - A subset of pre-processed data termed `tasks/scannet_frames_2k`: this version is smaller and more generally available for training neural networks. However, it only includes one frame out of every 100, which makes it unusable for SLAM. The labels are processed by mapping them from the original ScanNet label annotations to NYU40.
   - The raw data: this version is the one used for SLAM. You can download the whole dataset (TBs of data) or specific scenes. A common approach for the evaluation of semantic mapping is to use the `scannetv2_val.txt` scenes. For downloading and processing the data, you can use the following repository, as the original ScanNet repository is tested under Python 2.7 and doesn't support batch downloading of scenes.
3. Once you have the `color`, `depth`, `pose`, and (optional, for semantic mapping) `label` folders, place them following `{path_to_scannet}/scans/{scene_name}/[color, depth, pose, label]`. Then, configure `base_path` and `name` in the file `config.yaml`.
4. Select the corresponding calibration settings file (section `SCANNET_DATASET: settings:` in the file `config.yaml`). NOTE: the RGB images are rescaled to match the depth images. The current intrinsic parameters in the existing calibration file reflect that.
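A quick check that a scene folder matches the expected `scans/{scene_name}/[color, depth, pose, label]` layout can be sketched as follows. This helper is illustrative and not part of pySLAM:

```python
# Illustrative helper (not part of pySLAM): verify a ScanNet scene folder
# follows the layout above; 'label' is only needed for semantic mapping.
from pathlib import Path

def check_scannet_scene(path_to_scannet, scene_name, with_labels=False):
    scene = Path(path_to_scannet) / "scans" / scene_name
    needed = ["color", "depth", "pose"] + (["label"] if with_labels else [])
    return all((scene / sub).is_dir() for sub in needed)
```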
ROS1 bags
1. Source the main ROS1 `setup.bash` after you have sourced the `pyslam` python environment.
2. Set the paths and `ROS1BAG_DATASET: ros_parameters` in the file `config.yaml`.
3. Select/prepare the corresponding calibration settings file (section `ROS1BAG_DATASET: settings:` in the file `config.yaml`). See the available yaml files in the folder `Settings` as an example.
ROS2 bags
1. Source the main ROS2 `setup.bash` after you have sourced the `pyslam` python environment.
2. Set the paths and `ROS2BAG_DATASET: ros_parameters` in the file `config.yaml`.
3. Select/prepare the corresponding calibration settings file (section `ROS2BAG_DATASET: settings:` in the file `config.yaml`). See the available yaml files in the folder `Settings` as an example.
Video and Folder Datasets
You can use the VIDEO_DATASET and FOLDER_DATASET types to read generic video files and image folders (specifying a glob pattern), respectively. A companion ground truth file can be provided in the simple format: refer to the class SimpleGroundTruth in io/ground_truth.py and check the script io/convert_groundtruth_to_simple.py.
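Reading frames from an image folder via a glob pattern boils down to the following sketch (illustrative; the actual FOLDER_DATASET implementation lives in pySLAM's dataset code):

```python
# Illustrative: enumerate the frames a FOLDER_DATASET would read from a
# glob pattern; sorting keeps the frames in filename order.
import glob

def list_frames(folder_glob):
    return sorted(glob.glob(folder_glob))
```

Note that filename order only matches capture order if the files are named with zero-padded indices or timestamps.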
Camera Settings
The folder settings contains the camera settings files which can be used for testing the code. These are the same ones used in ORB-SLAM2. You can easily modify one of these files to create your own calibration file for a new dataset.
In order to calibrate your camera, you can use the scripts in the folder calibration. In particular:
1. Use the script grab_chessboard_images.py to collect a sequence of images where the chessboard can be detected (set the chessboard size therein; you can use the calibration pattern calib_pattern.pdf in the same folder)
2. Use the script calibrate.py to process the collected images and compute the calibration parameters (set the chessboard size therein)
For more information on the calibration process, see this tutorial or this other link.
If you want to use your camera, you have to:
* Calibrate it and configure WEBCAM.yaml accordingly
* Record a video (for instance, by using save_video.py in the folder calibration)
* Configure the VIDEO_DATASET section of config.yaml in order to point to your recorded video.
References
- "pySLAM: An Open-Source, Modular, and Extensible Framework for SLAM", Luigi Freda
- "pySLAM and slamplay: Modular, Extensible SLAM Tools for Rapid Prototyping and Integration", Luigi Freda, RSS 2025 Workshop: Unifying Visual SLAM
- "Semantic pySLAM: Unifying semantic mapping approaches under the same framework", David Morilla-Cabello, Eduardo Montijano, RSS 2025 Workshop: Unifying Visual SLAM
Suggested books:
* Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman
* An Invitation to 3-D Vision by Yi Ma, Stefano Soatto, Jana Kosecka, S. Shankar Sastry
* State Estimation for Robotics by Timothy D. Barfoot
* Computer Vision: Algorithms and Applications by Richard Szeliski
* Introduction to Visual SLAM by Xiang Gao, Tao Zhang
* Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville
* Neural Networks and Deep Learning by Michael Nielsen
Suggested material:
* Vision Algorithms for Mobile Robotics by Davide Scaramuzza
* CS 682 Computer Vision by Jana Kosecka
* ORB-SLAM: a Versatile and Accurate Monocular SLAM System by R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos
* Double Window Optimisation for Constant Time Visual SLAM by H. Strasdat, A. J. Davison, J.M.M. Montiel, K. Konolige
* The Role of Wide Baseline Stereo in the Deep Learning World by Dmytro Mishkin
* To Learn or Not to Learn: Visual Localization from Essential Matrices by Qunjie Zhou, Torsten Sattler, Marc Pollefeys, Laura Leal-Taixe
* Awesome local-global descriptors repository
* Introduction to Feature Matching Using Neural Networks
* Visual Place Recognition: A Tutorial
* Bags of Binary Words for Fast Place Recognition in Image Sequences
Moreover, you may want to have a look at the OpenCV guide or tutorials.
Credits
- Pangolin
- g2opy
- ORBSLAM2
- SuperPointPretrainedNetwork
- Tfeat
- Image Matching Benchmark Baselines
- Hardnet
- GeoDesc
- SOSNet
- L2Net
- Log-polar descriptor
- D2-Net
- DELF
- Contextdesc
- LFNet
- R2D2
- BEBLID
- DISK
- Xfeat
- LightGlue
- Key.Net
- Twitchslam
- MonoVO
- VPR_Tutorial
- DepthAnythingV2
- DepthPro
- RAFT-Stereo
- CREStereo and CREStereo-Pytorch
- MonoGS
- mast3r
- mvdust3r
- MegaLoc
- Many thanks to Anathonic for adding the trajectory-saving feature and for the comparison notebook: pySLAM vs ORB-SLAM3.
- Many thanks to David Morilla Cabello for his great work on integrating semantic predictions into pySLAM.
License
pySLAM is released under the GPLv3 license. pySLAM contains some modified libraries, each one coming with its own license. Where nothing is specified, a GPLv3 license applies to the software.
If you use pySLAM in your projects, please cite this document:
"pySLAM: An Open-Source, Modular, and Extensible Framework for SLAM", Luigi Freda
You may find an updated version of this document here.
Contributing to pySLAM
If you like pySLAM and would like to contribute to the code base, you can report bugs, leave comments, and propose new features through issues and pull requests on GitHub. Feel free to get in touch at luigifreda(at)gmail[dot]com. Thank you!
Roadmap
Many improvements and additional features are currently under development:
- [x] Loop closing
- [x] Relocalization
- [x] Stereo and RGBD support
- [x] Map saving/loading
- [x] Modern DL matching algorithms
- [ ] Object detection
- [ ] Open vocabulary segment (object) detection
- [X] Semantic segmentation [by @dvdmc]
- [X] Dense closed-set labels
- [X] Dense closed-set probability vectors
- [X] Dense open vocabulary feature vectors
- [x] 3D dense reconstruction
- [x] Unified install procedure (single branch) for all OSs
- [x] Trajectory saving
- [x] Depth prediction integration, more models: VGGT, MoGE [WIP]
- [x] ROS support [WIP]
- [x] Gaussian splatting integration
- [x] Documentation [WIP]
- [x] GTSAM integration [WIP]
- [ ] IMU integration
- [ ] LIDAR integration
- [x] XSt3r-based methods integration [WIP]
- [x] Evaluation scripts
- [ ] More camera models
Owner
- Name: Luigi Freda
- Login: luigifreda
- Kind: user
- Company: University of Rome "La Sapienza"
- Website: https://www.luigifreda.com
- Repositories: 2
- Profile: https://github.com/luigifreda