Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.3%) to scientific vocabulary

Repository

Basic Info
  • Host: GitHub
  • Owner: JennySeidenschwarz
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 19.8 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed almost 2 years ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md

Training Downstream Detector

This directory is forked from mmdetection3d and adapted to load feather files as input for training and evaluating a PointPillars model. Performance is evaluated automatically using the evaluation code from SeMoLi. For training we use the standard hyperparameters for PointPillars training on the Waymo Open Dataset. We support training on the Waymo Open Dataset and the Argoverse 2 (AV2) dataset. During training and evaluation we only use points and labels within a 100 m x 40 m rectangle around the ego-vehicle.
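The rectangular crop around the ego-vehicle described above can be sketched as follows. This is a minimal illustration, not the repository's actual filtering code; it assumes the rectangle is axis-aligned, centered on the ego-vehicle at the origin, and spans 100 m along x and 40 m along y.

```python
import numpy as np

def filter_points_to_roi(points, x_extent=100.0, y_extent=40.0):
    """Keep only points inside an axis-aligned x_extent-by-y_extent
    rectangle centered on the ego-vehicle (assumed at the origin).

    points: (N, 3+) array of ego-frame coordinates [x, y, z, ...].
    """
    mask = (np.abs(points[:, 0]) <= x_extent / 2.0) & \
           (np.abs(points[:, 1]) <= y_extent / 2.0)
    return points[mask]

# Example: the first point lies inside the rectangle, the second is
# too far ahead (x = 60 m > 50 m) and is dropped.
pts = np.array([[10.0, 5.0, 0.2],
                [60.0, 5.0, 0.2]])
inside = filter_points_to_roi(pts)
```

The same mask would be applied to labels, keeping only boxes whose centers fall inside the rectangle.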

Installation

If you installed the conda environment from SeMoLi, all libraries are already installed and you can run the code after activating it with conda activate SeMoLi. Otherwise, prepare a conda environment by running the following:

```
conda create -n mmdetection3d python=3.9
conda activate mmdetection3d
bash setup.sh
```

Running the code

In this repository we follow the data split convention of SeMoLi (see the data split figure in the SeMoLi repository).

Set Data Variables

For training and evaluation, you can use either pseudo-labels or ground truth labels, both loaded from feather files. Set the train and validation label paths to the corresponding feather files by running:

```
export TRAIN_LABELS=<train_label_path>
export VAL_LABELS=<val_label_path>
```

For validation, set the path to a feather file containing ground truth data. If you want to use the val_detector dataset from SeMoLi for evaluation, set the path to the feather file containing the training set's ground truth data. If you want to use the real validation set, i.e., the val_evaluation split, set the path to the file containing the validation set's ground truth data.
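The split-to-label-file mapping described above can be made explicit with a small helper. This is a hypothetical illustration (the function and its arguments are not part of the repository); it only encodes the rule that val_detector is carved out of the training data while val_evaluation uses the real validation set:

```python
def val_labels_for_split(val_detection_set, train_gt_path, val_gt_path):
    """Return which ground-truth feather file VAL_LABELS should point to."""
    if val_detection_set == "val_detector":
        # val_detector is a held-out part of the training data, so it is
        # evaluated against training-set ground truth.
        return train_gt_path
    if val_detection_set == "val_evaluation":
        # The real validation split uses validation-set ground truth.
        return val_gt_path
    raise ValueError(f"unknown split: {val_detection_set}")
```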

Waymo Open Dataset

For example, if you are using this repository within the SeMoLi repository, the ground truth train and real validation set paths for Waymo Open Dataset can be set by:

```
export TRAIN_LABELS=../SeMoLi/data_utils/Waymo_Converted_filtered/train_1_per_frame_remove_non_move_remove_far_filtered_version_city_w0.feather
export VAL_LABELS=../SeMoLi/data_utils/Waymo_Converted_filtered/val_1_per_frame_remove_non_move_remove_far_filtered_version_city_w0.feather
```

AV2

For the AV2 dataset, the paths can be set by:

```
export TRAIN_LABELS=../SeMoLi/data_utils/AV2_filtered/train_1_per_frame_remove_non_move_remove_far_filtered_version_city_w0.feather
export VAL_LABELS=../SeMoLi/data_utils/AV2_filtered/val_1_per_frame_remove_non_move_remove_far_filtered_version_city_w0.feather
```

Training and Evaluation

The base command for training and evaluation on Waymo Open Dataset is given by:

Class Agnostic Training and Evaluation

```
./tools/dist_train.sh configs/pointpillars/pointpillars_hv_secfpn_sbn-all_8xb4-2x_waymo-3d-class_agnostic.py <num_gpus> <percentage_train> <percentage_val> $TRAIN_LABELS $VAL_LABELS --eval --val_detection_set=val_evaluation --auto-scale-lr
```

where

  • num_gpus is the number of GPUs used for training
  • percentage_train is the percentage of training data you want to use according to the SeMoLi splits, i.e., percentage_train corresponds to x in the data split figure. Hence, if x=0.1 and the split is train_detector, the actual percentage of the data used is 1-0.1=0.9
  • percentage_val is 1.0 if you want to use either val_detector or the real validation set val_evaluation. If you want to use any part of train_detector or train_gnn for evaluation, please set the percentage according to SeMoLi
  • --eval: if set, you will only evaluate and not train
  • --val_detection_set determines the detection split you want to use, i.e., val_detector or the real validation set val_evaluation
  • --auto-scale-lr adapts the learning rate to the batch size according to a given base learning rate
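The percentage semantics above can be captured in a short helper. This is an illustrative sketch, not code from the repository; it assumes the stated rule that on the train_detector split a setting of percentage_train = x means the fraction 1 - x of the data is actually used, while other splits use the fraction directly:

```python
def effective_train_fraction(x, split="train_detector"):
    """Fraction of the training data actually used for a given
    percentage_train setting x, per the SeMoLi split convention
    (assumption based on the description above)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must be in [0, 1]")
    if split == "train_detector":
        # train_detector is the complement of the x-fraction carved
        # out for the pseudo-label generator.
        return 1.0 - x
    return x
```

For example, x = 0.1 on train_detector yields an effective fraction of 0.9, matching the 1-0.1=0.9 example above.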

Labeled and Unlabeled Data

If you want to use labeled and unlabeled data together, set the train data path to the pseudo labels and set a second path for the labeled data:

```
export TRAIN_LABELS=<path_to_pseudo_labels>
export TRAIN_LABELS2=../SeMoLi/data_utils/AV2_filtered/train_1_per_frame_remove_non_move_remove_far_filtered_version_city_w0.feather
```

Then run the following:

```
./tools/dist_train.sh configs/pointpillars/pointpillars_hv_secfpn_sbn-all_8xb4-2x_waymo-3d-class_agnostic.py <num_gpus> $TRAIN_LABELS $VAL_LABELS --label_path2 $TRAIN_LABELS2 --val_detection_set=val_evaluation --auto-scale-lr
```

AV2 Dataset Training and Evaluation

For the AV2 dataset, set the dataset paths as above and change the config file to configs/pointpillars/pointpillars_hv_secfpn_sbn-all_8xb4-2x_av2-3d-class_agnostic.py:

```
./tools/dist_train.sh configs/pointpillars/pointpillars_hv_secfpn_sbn-all_8xb4-2x_av2-3d-class_agnostic.py <num_gpus> <percentage_train> <percentage_val> $TRAIN_LABELS $VAL_LABELS --val_detection_set=val_evaluation --auto-scale-lr
```

Class Specific Training (only for Waymo Open Dataset currently)

For training in a class-specific setting with ground truth data, change the config file to pointpillars_hv_secfpn_sbn-all_8xb4-2x_waymo-3d-class_specific.py:

```
./tools/dist_train.sh configs/pointpillars/pointpillars_hv_secfpn_sbn-all_8xb4-2x_waymo-3d-class_specific.py <num_gpus> <percentage_train> <percentage_val> $TRAIN_LABELS $VAL_LABELS --val_detection_set=val_evaluation --auto-scale-lr
```

Owner

  • Login: JennySeidenschwarz
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection3D Contributors"
title: "OpenMMLab's Next-generation Platform for General 3D Object Detection"
date-released: 2020-07-23
url: "https://github.com/open-mmlab/mmdetection3d"
license: Apache-2.0


Dependencies

.github/workflows/deploy.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/lint.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/merge_stage_test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v1.0.14 composite
.github/workflows/pr_stage_test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v1.0.14 composite
.github/workflows/test_mim.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.circleci/docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build