birds-eye-view-trajectory-prediction-for-autonomous-driving

This repository contains our comprehensive investigation of motion prediction for autonomous vehicles using the PowerBEV framework and a multi-camera setup. We validated trajectory forecasting on the NuScenes, Woven, and Argoverse datasets and identified challenges in model generalization across them.

https://github.com/rishikesh-jadhav/birds-eye-view-trajectory-prediction-for-autonomous-driving

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.4%) to scientific vocabulary

Keywords

analysis argoverse computer-vision deep-learning instance-prediction machine-learning motion-planning nuscenes pytorch trajectory-prediction
Last synced: 7 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: Rishikesh-Jadhav
  • License: other
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 120 MB
Statistics
  • Stars: 9
  • Watchers: 1
  • Forks: 2
  • Open Issues: 1
  • Releases: 0
Topics
analysis argoverse computer-vision deep-learning instance-prediction machine-learning motion-planning nuscenes pytorch trajectory-prediction
Created over 2 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License Citation

README.md

PowerBEV - Woven Dataset

  • Contributors: Aman Sharma, Vyshnav Achuthan, Neha Madhekar, Rishikesh Jadhav, Wu Xiyang

Setup

Create the conda environment by running `conda env create -f environment.yml`.

Dataset

  • Download the full Toyota Woven Planet Perception dataset, which includes the Mini dataset as well as the Train and Test datasets.
  • Extract the tar files to a directory named lyft2/. The files should be organized in the following structure:

```
lyft2/
└── train/
    ├── maps/
    ├── images/
    ├── train_lidar/
    └── train_data/
```
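
Since extraction mistakes are easy to make, a small check can confirm the layout before training. This is a minimal sketch: the helper name is our own, and the expected nesting is taken from the structure above.

```python
import os

# Subdirectories the README expects under the extracted dataset root.
EXPECTED_DIRS = [
    "train",
    "train/maps",
    "train/images",
    "train/train_lidar",
    "train/train_data",
]

def missing_dirs(root):
    """Return the expected subdirectories that are absent under `root`."""
    return [d for d in EXPECTED_DIRS if not os.path.isdir(os.path.join(root, d))]
```

For example, `missing_dirs("lyft2")` returns an empty list when the dataset is laid out correctly, and lists the absent directories otherwise.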

Pre-trained models (Comparison)

The config file can be found in powerbev/configs. You can download the pre-trained models, which are fine-tuned on the NuScenes dataset, using the links below:

| Weights | Dataset | BEV Size | IoU | VPQ |
|---|---|---|:-:|:-:|
| PowerBEV_long.ckpt | NuScenes | 100m x 100m (50cm res.) | 39.3 | 33.8 |
| PowerBEV_short.ckpt | NuScenes | 30m x 30m (15cm res.) | 62.5 | 55.5 |
| PowerBEV_static_long.ckpt | None | 100m x 100m (50cm res.) | 39.3 | 33.8 |
| PowerBEV_static_short.ckpt | None | 30m x 30m (15cm res.) | 62.5 | 55.5 |

Training

To train the model from scratch on Woven, run

```
python train.py --config powerbev/configs/powerbev.yml
```

Make sure you first apply the corresponding changes to the config file inside the configs folder.

To run from pre-trained weights:

```
python train.py --config powerbev/configs/powerbev.yml \
    PRETRAINED.LOAD_WEIGHTS True \
    PRETRAINED.PATH $YOUR_PRETRAINED_STATIC_WEIGHTS_PATH
```
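
The dotted KEY VALUE pairs appended to the command (e.g. PRETRAINED.LOAD_WEIGHTS True) override nested entries in the YAML config. A minimal sketch of how such dotted overrides map onto a nested config dict; the helper below is an illustration, not PowerBEV's actual config code.

```python
def apply_override(cfg, dotted_key, value):
    """Set cfg['A']['B'] = value for a dotted key 'A.B', creating
    intermediate dicts as needed."""
    keys = dotted_key.split(".")
    node = cfg
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    node[keys[-1]] = value
    return cfg

# Mimic the two overrides from the training command above.
cfg = {"PRETRAINED": {"LOAD_WEIGHTS": False, "PATH": ""}}
apply_override(cfg, "PRETRAINED.LOAD_WEIGHTS", True)
apply_override(cfg, "PRETRAINED.PATH", "/path/to/weights.ckpt")
```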

Prediction

Evaluation

To evaluate a model trained from scratch, locate the checkpoint (.ckpt) file saved alongside the TensorBoard logs and use its path as the pre-trained weights path:

```
python test.py --config powerbev/configs/powerbev.yml \
    PRETRAINED.LOAD_WEIGHTS True \
    PRETRAINED.PATH $YOUR_PRETRAINED_WEIGHTS_PATH
```
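
The checkpoint lookup described above can be automated. A minimal sketch, assuming the trainer writes .ckpt files somewhere under its log directory; the function name is our own, not part of PowerBEV.

```python
import glob
import os

def latest_checkpoint(log_dir):
    """Return the most recently modified .ckpt file under `log_dir`,
    searching recursively, or None if no checkpoint exists."""
    ckpts = glob.glob(os.path.join(log_dir, "**", "*.ckpt"), recursive=True)
    return max(ckpts, key=os.path.getmtime) if ckpts else None
```

The returned path can then be passed as `PRETRAINED.PATH` to test.py or visualise.py.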

Visualisation

To visualise predictions from a model trained from scratch, locate the checkpoint (.ckpt) file saved alongside the TensorBoard logs and use its path as the pre-trained weights path:

```
python visualise.py --config powerbev/configs/powerbev.yml \
    PRETRAINED.LOAD_WEIGHTS True \
    PRETRAINED.PATH $YOUR_PRETRAINED_WEIGHTS_PATH \
    BATCHSIZE 1
```

This will render predictions from the network and save them to a visualization_outputs folder.

License

PowerBEV is released under the MIT license. Please see the LICENSE file for more information.

Credits

This repository builds on the official PyTorch implementation of the paper:

PowerBEV: A Powerful yet Lightweight Framework for Instance Prediction in Bird's-Eye View
Peizheng Li, Shuxiao Ding, Xieyuanli Chen, Niklas Hanselmann, Marius Cordts, Jürgen Gall

Owner

  • Name: Rishikesh Jadhav
  • Login: Rishikesh-Jadhav
  • Kind: user

Robotics Masters student at the University of Maryland - College Park

GitHub Events

Total
  • Watch event: 4
Last Year
  • Watch event: 4

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 5
  • Total Committers: 1
  • Avg Commits per committer: 5.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Rishikesh Jadhav 9****v 5

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

environment.yml pypi
  • lyft-dataset-sdk ==0.0.8
  • moviepy ==1.0.3
  • nuscenes-devkit ==1.1.0
  • opencv-python ==4.5.1.48
  • thop ==0.1.1