anytraverse
Using CLIPSeg and DepthAnythingV2 to help vehicles traverse off-road environments
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: Links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (12.5%) to scientific vocabulary
Repository
Using CLIPSeg and DepthAnythingV2 to help vehicles traverse off-road environments
Basic Info
- Host: GitHub
- Owner: sattwik-sahu
- License: gpl-3.0
- Language: Python
- Default Branch: main
- Size: 88.5 MB
Statistics
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
AnyTraverse
An Offroad Traversability Framework with VLM and Human Operator in the Loop
Installation
- Install PyTorch: AnyTraverse requires `torch` and `torchvision` to be installed. Install compatible `torch` and `torchvision` versions for your platform before proceeding.

  ```bash
  uv add torch torchvision      # uv users
  pip install torch torchvision # pip users
  ```

  > :warning: PyTorch does not provide wheels for the NVIDIA Jetson platform. Please ensure you have installed the compatible versions of `torch` and `torchvision` for your Jetson device for GPU acceleration.

- Install AnyTraverse: Install `anytraverse` using your Python dependency manager. We recommend using `uv`.

  ```bash
  uv add anytraverse      # uv users
  pip install anytraverse # pip users
  ```
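Before proceeding, you can sanity-check that the required packages are importable with a small stdlib-only snippet (this helper is illustrative and not part of AnyTraverse):

```python
import importlib.util


def check_installed(pkgs):
    """Return a dict mapping each package name to whether it is importable."""
    return {p: importlib.util.find_spec(p) is not None for p in pkgs}


# Report which prerequisites are present in the current environment.
for pkg, ok in check_installed(["torch", "torchvision"]).items():
    print(f"{pkg}: {'installed' if ok else 'missing (install before adding anytraverse)'}")
```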
Optional
- AnyTraverse allows you to bring your own vision-language models (VLMs) and image embedding models and use them by creating wrappers. However, it also ships with some wrappers for models on HuggingFace.
- To use these models, install the `transformers` package with additional dependencies.

  ```bash
  uv add transformers einops accelerate      # uv users
  pip install transformers einops accelerate # pip users
  ```
Usage
Quickstart
This example shows how to run the implementation discussed in the original paper, using the provided builder function.
```python
import requests
from matplotlib import patches
from matplotlib import pyplot as plt
from PIL import Image as PILImage

from anytraverse import build_pipeline_from_paper


def main():
    # Load the image
    url = "https://source.roboflow.com/oWTBJ1yeWRbHDXbzJBrOsPVaoH92/0C8goYvWpiqF26dNKxby/original.jpg"
    image = PILImage.open(requests.get(url, stream=True).raw)

    # Build the pipeline from the paper
    anytraverse = build_pipeline_from_paper(
        init_traversabilty_preferences={
            "road": 1, "bush": -0.8, "rock": 0.45
        },
        ref_scene_similarity_threshold=0.8,
        roi_uncertainty_threshold=0.3,
        roi_x_bounds=(0.333, 0.667),
        roi_y_bounds=(0.6, 0.95),
    )

    # Take one step
    state = anytraverse.step(image=image)

    # Plot the attention maps
    fig, ax = plt.subplots(1, 3, figsize=(15, 5))
    for attn_map, prompt, ax_ in zip(state.attention_maps, state.traversability_preferences, ax):
        ax_.imshow(image)
        ax_.imshow(attn_map.cpu(), cmap="plasma", alpha=0.4)
        ax_.set_title(prompt)
        ax_.axis("off")
    plt.show()

    # See the traversability and uncertainty maps
    fig, ax = plt.subplots(1, 2, figsize=(16, 9))
    (x0, y0), (x1, y1) = state.roi_bbox
    rects = [
        patches.Rectangle(
            (x0, y0),
            x1 - x0,
            y1 - y0,
            edgecolor="#ffffff",
            facecolor="#ffffff22",
            linewidth=4,
        )
        for _ in range(2)
    ]
    for ax_, m, r_roi, title, rect in zip(
        ax,
        (state.traversability_map, state.uncertainty_map),
        (state.traversability_map_roi.mean(), state.uncertainty_map_roi.mean()),
        ("Traversability Map", "Uncertainty Map"),
        rects,
    ):
        ax_.imshow(image)
        map_plot = ax_.imshow(m.cpu(), alpha=0.5)
        ax_.add_patch(rect)
        ax_.text(
            x0,
            y0 - 15,
            f"ROI {title.split(' ')[0]}: {r_roi * 100.0:.2f}%",
            size=18,
            color="#ffffff",
        )
        ax_.axis("off")
        ax_.set_title(title, fontsize=22)
        cbar = plt.colorbar(map_plot, orientation="horizontal", pad=0.01)
        cbar.set_label(f"{title.split(' ')[0]} Score", fontsize=12)
        for t in cbar.ax.get_xticklabels():
            t.set_fontsize(10)
    fig.tight_layout()
    plt.show()


if __name__ == "__main__":
    main()
```
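The `roi_x_bounds` and `roi_y_bounds` arguments in the quickstart appear to be fractions of the image width and height. Assuming that interpretation (the helper below is illustrative, not part of the AnyTraverse API), the fractional bounds map to a pixel bounding box like this:

```python
def roi_to_pixels(x_bounds, y_bounds, width, height):
    """Convert fractional (x0, x1) and (y0, y1) ROI bounds into a pixel
    bounding box ((x0, y0), (x1, y1)) for an image of the given size."""
    (x0f, x1f), (y0f, y1f) = x_bounds, y_bounds
    return (
        (round(x0f * width), round(y0f * height)),
        (round(x1f * width), round(y1f * height)),
    )


# The quickstart's bounds applied to a 640x480 frame: the ROI covers the
# central third horizontally and the lower part of the image vertically.
bbox = roi_to_pixels((0.333, 0.667), (0.6, 0.95), width=640, height=480)
print(bbox)  # ((213, 288), (427, 456))
```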
Attention Maps

Traversability and uncertainty maps

Make your own AnyTraverse
- AnyTraverse is modular and the modules from the original paper can be swapped with your own implementation easily.
- The VLM, image encoder, traversability pooling, and uncertainty pooling modules can be replaced with your own implementations by extending the abstract base classes provided in the `anytraverse` package.
- Refer to the extended documentation to learn more.
NOTE: Extended documentation coming soon...
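The wrapper pattern described above can be sketched as follows. The base-class name `VLMWrapper` and its `segment` method are hypothetical stand-ins; consult the `anytraverse` package for the actual abstract interfaces and signatures.

```python
from abc import ABC, abstractmethod


# Hypothetical base class illustrating the extension pattern; the real
# abstract base classes live in the anytraverse package and may differ.
class VLMWrapper(ABC):
    @abstractmethod
    def segment(self, image, prompts):
        """Return one attention map per prompt for the given image."""


class MyVLM(VLMWrapper):
    """A toy wrapper: returns a uniform 4x4 map per prompt.

    A real implementation would call your own vision-language model here.
    """

    def segment(self, image, prompts):
        height, width = 4, 4
        return [[[0.5] * width for _ in range(height)] for _ in prompts]


maps = MyVLM().segment(image=None, prompts=["road", "bush"])
print(len(maps))  # 2 (one attention map per prompt)
```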
Contributing
We'd love to see your implementations and modifications to help make AnyTraverse better. Please create a pull request (branch name: `dev/feat/<your-feature-name>`) to add a new feature, or raise an issue to request one.
Made with :heart: in IISER Bhopal.
Owner
- Name: Sattwik Kumar Sahu
- Login: sattwik-sahu
- Kind: user
- Repositories: 2
- Profile: https://github.com/sattwik-sahu
1st Year BS Engineering Sciences student at the Indian Institute of Science Education and Research, Bhopal. Member of the Computer and Networking Club (CNC).
Citation (CITATION.cff)
cff-version: 1.2.0
title: "AnyTraverse: An off-road traversability framework with VLM and human operator in the loop"
authors:
- family-names: Sahu
given-names: Sattwik Kumar
- family-names: Singh
given-names: Agamdeep
- family-names: Nambiar
given-names: Karthik
- family-names: Saripalli
given-names: Srikanth
- family-names: Sujit
given-names: P. B.
date-released: 2025-06-20
version: 1.0.0
doi: 10.48550/arXiv.2506.16826
url: https://arxiv.org/abs/2506.16826
repository-code: https://github.com/sattwik-sahu/AnyTraverse
message: "If you use this package, please cite our arXiv paper."
GitHub Events
Total
- Push event: 1
- Public event: 1
- Pull request event: 1
Last Year
- Push event: 1
- Public event: 1
- Pull request event: 1
Dependencies
- numpy >=1.26.4
- pillow >=11.2.1
- 149 dependencies