Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.8%) to scientific vocabulary
Last synced: 7 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: nesl
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 51.5 MB
Statistics
  • Stars: 0
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed over 2 years ago
Metadata Files
Readme License Citation

readme.md

GDTM-Tracking

GDTM is a new multi-hour dataset collected with a network of multimodal sensors for the indoor geospatial tracking problem. It features time-synchronized stereo-vision cameras, LiDAR cameras, mmWave radars, and microphone arrays, as well as ground-truth data containing the positions and orientations of the sensing targets (remote-controlled cars on an indoor race track) and the sensor nodes. For details of the dataset, please refer to GitHub and PDF (still under review).

This repository contains our baseline applications, described in PDF (still under review), built to use GDTM data. It features two architectures (early fusion and late fusion) and two choices of sensor sets (camera only and all modalities) to track the location of a target RC car.

Note: for dataset documentation and pre-processing, please refer to GitHub.

Installation Instructions

Environment

The code is tested with:

  • Ubuntu 20.04
  • Anaconda 22.9.0 (for the virtual Python environment)
  • NVIDIA driver 525.105.17

The code should be compatible with most Anaconda, NVIDIA-driver, and Ubuntu versions available around 2023/06.

Code Repository Structure

We only release the early-fusion, all-modalities version of the model. Further variants will be released upon acceptance. Details are described in the Baseline 1 section of PDF (still under review).

As step one, please clone the desired branch using a terminal. (It is not possible to clone the anonymous repo; these instructions will be updated before the camera-ready.)

```
cd ~/Desktop
git clone https://anonymous.4open.science/r/GDTM_Anonymized-4469.git
```

or, for a specific branch:

```
cd ~/Desktop
git clone --branch <branchname> https://anonymous.4open.science/r/GDTM_Anonymized-4469.git
```

Install Dependencies

First, place the repository folder on the Desktop and rename it "mmtracking":

```
mv <path-to-cloned-repository> ~/Desktop/mmtracking
```

Create and activate a new conda environment:

```
cd ~/Desktop/mmtracking
conda create -n iobt python=3.9
conda activate iobt
```

Install torch and mmcv using pip:

```
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.12.0/index.html
```

Install other dependencies:

```
pip install -r requirements/build.txt
```

Install the local packages (the terminal should still be in ~/Desktop/mmtracking):

```
pip install -e src/cad
pip install -e src/mmdetection
pip install -e src/TrackEval
pip install -e src/mmclassification
pip install -e src/resource_constrained_tracking
pip install -e src/yolov7
pip install -v -e .
```

Data Preparation

Sample dataset

Please visit the data repository for sample data to test this repository. Due to constraints on uploading data to an anonymous Google Drive, we have only provided two instances of the data, good lighting (view 3) and poor lighting (view 6), under single-view and all-modality conditions, and only for the test data.

Full dataset

We are going to release the full dataset at a later date. Check for updates at GitHub.

Unzip the data

Please unzip the data, rename it to "mcp-sample-dataset/", and put it on the Desktop. The final data structure should look like the following:

```
Desktop/mcp-sample-dataset/
    test/
        node1/
            mmwave.hdf5
            realsense.hdf5
            respeaker.hdf5
            zed.hdf5
        node2/   (same as node1)
        node3/   (same as node1)
        mocap.hdf5
    train/   (same as test/)
    val/     (same as test/)
```

Note that you only need test/ if you are running tests from checkpoints only.
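If you want to sanity-check the unzipped layout before running anything, a small script like the following can flag missing files. The file lists are transcribed from the layout above; `check_split` itself is a hypothetical helper, not part of the repository.

```python
from pathlib import Path

# Per-node HDF5 files and node names, transcribed from the layout above.
NODES = ["node1", "node2", "node3"]
NODE_FILES = ["mmwave.hdf5", "realsense.hdf5", "respeaker.hdf5", "zed.hdf5"]

def check_split(root, split):
    """Return a list of expected files missing under <root>/<split>/."""
    split_dir = Path(root) / split
    expected = [split_dir / node / f for node in NODES for f in NODE_FILES]
    expected.append(split_dir / "mocap.hdf5")
    return [str(p) for p in expected if not p.is_file()]

# Only the test/ split is needed when evaluating from checkpoints:
missing = check_split(Path.home() / "Desktop" / "mcp-sample-dataset", "test")
print("missing files:", missing)
```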

Specify Filepath

Open mmtracking/configs/_base_/datasets/onecarearly_fusion.py. In Lines 75, 114, and 153, change the dataroot to an absolute path, e.g. ~/Desktop/... -> /home/USER_NAME/Desktop/...
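Because the config expects absolute paths, one quick way to produce the exact string to paste in is Python's `os.path.expanduser`; this is a generic trick, not repository tooling.

```python
import os

# "~/Desktop/..." is shell shorthand; the config needs the expanded
# absolute form, e.g. "/home/USER_NAME/Desktop/...".
dataroot = os.path.expanduser("~/Desktop/mcp-sample-dataset")
print(dataroot)  # paste this value into the dataset config
```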

Code Usage

Make Inference Using A Checkpoint (Testing)

Please download the pretrained checkpoints here.

Note that for the single-view case (Baseline 1 in the paper), please make sure to use the checkpoints corresponding to the code and data of your choice.

For example, if we use view 3 data (single view, good lighting) and the master branch code (single view, early fusion, all modalities), we should download "dataset_singleview3.zip".

After downloading the checkpoint, please rename it to "logs/" and put it under the "mmtracking" folder using this hierarchy:

```
Desktop/mmtracking/
    logs/
        early_fusion_zed_mmwave_audio/
            val
            epoch_xx.pth
            latest.pth   (to be created)
```

where the "latest.pth" above is created by running (in a terminal in early_fusion_zed_mmwave_audio/):

```
ln -s epoch_40.pth latest.pth
```
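The same symlink can be created from Python if preferred. This helper is hypothetical; only the epoch_40.pth / latest.pth names come from the instructions above.

```python
import os

def link_latest(ckpt_dir, epoch_file="epoch_40.pth"):
    """Create the relative latest.pth symlink the test script loads,
    equivalent to running `ln -s epoch_40.pth latest.pth` in ckpt_dir."""
    link = os.path.join(ckpt_dir, "latest.pth")
    if not os.path.lexists(link):  # no-op if the link already exists
        os.symlink(epoch_file, link)  # relative target, like ln -s
    return link

# e.g. link_latest(os.path.expanduser(
#          "~/Desktop/mmtracking/logs/early_fusion_zed_mmwave_audio"))
```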

Then, you can run the evaluation (still in a terminal under ~/Desktop/mmtracking; make sure you have run "conda activate iobt"):

```
bash ./tools/test_from_config_nll_local.sh ./configs/mocap/early_fusion_zed_mmwave_audio.py 1
```

Warning: this script will cache the dataset in system memory (/dev/shm). If the dataset loading operation was not successful, or you have changed the dataset in "~/Desktop/mcp-sample-dataset", please make sure to run this line before the "test_from_config_nll_local.sh" command above:

```
rm -r /dev/shm/cache_*
```

The visualization results will appear in mmtracking/logs/early_fusion_zed_mmwave_audio/test_nll/latest_vid.mp4, and numerical results appear in the last two lines of mmtracking/logs/early_fusion_zed_mmwave_audio/test_nll/mean.txt
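To grab those numerical results programmatically rather than scrolling the file, a trivial helper (hypothetical, not part of the repository) can read the final lines of mean.txt:

```python
def last_lines(path, n=2):
    """Return the last n lines of a text file, e.g. the summary
    metrics at the end of test_nll/mean.txt."""
    with open(path) as f:
        return [line.rstrip("\n") for line in f][-n:]
```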

If you would like to train a model from scratch instead, please refer to the training and scaling sections below.

Training

Set up the data as instructed in the previous sections, and run:

```
bash ./tools/train_from_config_local.sh ./configs/mocap/early_fusion_zed_mmwave_audio.py 1
```

where the last digit indicates the number of GPUs you have for training.

Scaling

After training, some additional data is required to perform a post-hoc model recalibration, as described in the paper, to better capture model prediction uncertainties. More specifically, we apply an affine transformation Sigma -> a*Sigma + b*I to the output covariance matrix, with parameters a and b chosen to minimize the calibration data's NLL.

Instructions for scaling:

```
bash ./tools/val_from_config_local.sh ./configs/mocap/early_fusion_zed_mmwave_audio.py 1
```

The last digit must be "1"; scaling with multiple GPUs will cause an error.
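To illustrate the recalibration idea, the sketch below grid-searches a and b so that the scaled covariance a*Sigma + b*I minimizes the mean Gaussian NLL of calibration residuals. This is a minimal sketch under assumed names (mean_nll, fit_scaling, grid search), not the repository's actual implementation, which may fit a and b differently.

```python
import numpy as np

def mean_nll(errs, covs, a, b):
    """Mean Gaussian negative log-likelihood of residuals `errs`
    (predicted minus true position) under covariances a*Sigma + b*I."""
    eye = np.eye(covs.shape[-1])
    total = 0.0
    for e, S in zip(errs, covs):
        S2 = a * S + b * eye
        total += 0.5 * (np.log(np.linalg.det(2 * np.pi * S2))
                        + e @ np.linalg.solve(S2, e))
    return total / len(errs)

def fit_scaling(errs, covs, a_grid, b_grid):
    """Pick the (a, b) pair with the lowest calibration NLL."""
    return min(((a, b) for a in a_grid for b in b_grid),
               key=lambda ab: mean_nll(errs, covs, *ab))
```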

Troubleshooting

Here we list a few files to change in case an error occurs during your configuration.

Data not found error

This is where the filepaths are stored: mmtracking/configs/_base_/datasets/onecarearly_fusion.py

Don't forget to run "rm -r /dev/shm/cache_*" after you fix this error; otherwise a "List out of range" error will pop up.

GPU OOM Error, Number of Epochs, Interval of Checkpoints

mmtracking/configs/mocap/early_fusion_zed_mmwave_audio.py Reducing "samples_per_gpu" in Line 127 helps with OOM errors. Lines 169-187 change the training configurations.

This configuration also defines (1) the valid modalities and (2) the backbone, adapter, and output-head architecture hyperparameters.

Something wrong with dataset caching

mmtracking/mmtrack/datasets/mocap/cacher.py

Something wrong with model training/inferences

mmtracking/mmtrack/models/mocap/earlyfusion.py Function forward_train() for training; function forward_track() for testing.

Something wrong with final visualizations

mmtracking/mmtrack/datasets/mocap/hdf5dataset.py, in function write_videos()

Backbone definitions

mmtracking/mmtrack/models/backbones/tv_r50.py

Citation and Acknowledgements

```
@misc{mmtrack2020,
  title={{MMTracking: OpenMMLab} video perception toolbox and benchmark},
  author={MMTracking Contributors},
  howpublished={\url{https://github.com/open-mmlab/mmtracking}},
  year={2020}
}
```

Owner

  • Name: UCLA Networked & Embedded Systems Laboratory
  • Login: nesl
  • Kind: organization
  • Location: Los Angeles, CA


Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
src/mmclassification/docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
src/mmclassification/docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
src/mmdetection/docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
src/mmdetection/docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
src/yolov7/utils/google_app_engine/Dockerfile docker
  • gcr.io/google-appengine/python latest build
mmtrack.egg-info/requires.txt pypi
  • asynctest *
  • attributee ==0.1.5
  • codecov *
  • cython ==0.29.34
  • dotty_dict *
  • flake8 *
  • future ==0.18.3
  • h5py ==3.8.0
  • interrogate *
  • ipdb ==0.13.13
  • isort ==4.3.21
  • kwarray *
  • lap *
  • matplotlib *
  • mmcls >=0.16.0
  • motmetrics *
  • motmetrics ==1.4.0
  • numpy ==1.24.3
  • opencv-python *
  • packaging *
  • pycocotools <=2.0.2
  • pyro-ppl ==1.8.0
  • pytest *
  • seaborn ==0.12.2
  • seaborn *
  • six *
  • tensorboard ==2.13.0
  • terminaltables *
  • tqdm ==4.65.0
  • tqdm *
  • ubelt *
  • xdoctest >=0.10.0
  • yapf *
requirements/build.txt pypi
  • cython ==0.29.34
  • future ==0.18.3
  • h5py ==3.8.0
  • ipdb ==0.13.13
  • motmetrics ==1.4.0
  • numpy ==1.24.3
  • pyro-ppl ==1.8.0
  • seaborn ==0.12.2
  • tensorboard ==2.13.0
  • tqdm ==4.65.0
requirements/docs.txt pypi
  • recommonmark *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
requirements/mminstall.txt pypi
  • mmcls >=0.14.0
  • mmcv-full >=1.3.8,<1.4.0
  • mmdet >=2.14.0,<3.0.0
requirements/readthedocs.txt pypi
  • mmcls *
  • mmcv *
  • mmdet *
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • attributee ==0.1.5
  • dotty_dict *
  • lap *
  • matplotlib *
  • mmcls >=0.16.0
  • motmetrics *
  • opencv-python *
  • packaging *
  • pycocotools <=2.0.2
  • seaborn *
  • six *
  • terminaltables *
  • tqdm *
requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • kwarray * test
  • pytest * test
  • ubelt * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi
src/TrackEval/minimum_requirements.txt pypi
  • numpy ==1.18.1
  • scipy ==1.4.1
src/TrackEval/pyproject.toml pypi
src/TrackEval/requirements.txt pypi
  • Pillow ==8.1.2
  • matplotlib ==3.2.1
  • numpy ==1.18.1
  • opencv_python ==4.4.0.46
  • pycocotools ==2.0.2
  • pytest ==6.0.1
  • scikit_image ==0.16.2
  • scipy ==1.4.1
src/TrackEval/setup.py pypi
src/TrackEval/trackeval.egg-info/requires.txt pypi
  • numpy *
  • scipy *
src/cad/setup.py pypi
src/mmclassification/requirements/docs.txt pypi
  • docutils ==0.16.0
  • myst-parser *
  • pytorch_sphinx_theme *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
src/mmclassification/requirements/mminstall.txt pypi
  • mmcv-full >=1.4.2,<=1.5.0
src/mmclassification/requirements/optional.txt pypi
  • albumentations >=0.3.2
  • colorama *
  • requests *
  • rich *
src/mmclassification/requirements/readthedocs.txt pypi
  • mmcv >=1.4.2
  • torch *
  • torchvision *
src/mmclassification/requirements/runtime.txt pypi
  • matplotlib *
  • numpy *
  • packaging *
src/mmclassification/requirements/tests.txt pypi
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • mmdet * test
  • pytest * test
  • xdoctest >=0.10.0 test
  • yapf * test
src/mmclassification/requirements.txt pypi
src/mmclassification/setup.py pypi
src/mmdetection/requirements/albu.txt pypi
  • albumentations >=0.3.2
src/mmdetection/requirements/build.txt pypi
  • cython *
  • numpy *
src/mmdetection/requirements/docs.txt pypi
  • docutils ==0.16.0
  • recommonmark *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
  • sphinx_rtd_theme ==0.5.2
src/mmdetection/requirements/mminstall.txt pypi
  • mmcv-full >=1.3.17
src/mmdetection/requirements/optional.txt pypi
  • cityscapesscripts *
  • imagecorruptions *
  • scipy *
  • sklearn *
  • timm *
src/mmdetection/requirements/readthedocs.txt pypi
  • mmcv *
  • torch *
  • torchvision *
src/mmdetection/requirements/runtime.txt pypi
  • matplotlib *
  • numpy *
  • pycocotools *
  • six *
  • terminaltables *
src/mmdetection/requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • kwarray * test
  • onnx ==1.7.0 test
  • onnxruntime >=1.8.0 test
  • pytest * test
  • ubelt * test
  • xdoctest >=0.10.0 test
  • yapf * test
src/mmdetection/requirements.txt pypi
src/mmdetection/setup.py pypi
src/resource_constrained_tracking/setup.py pypi
src/yolov7/requirements.txt pypi
  • Pillow >=7.1.2
  • PyYAML >=5.3.1
  • ipython *
  • matplotlib >=3.2.2
  • numpy >=1.18.5,<1.24.0
  • opencv-python >=4.1.1
  • pandas >=1.1.4
  • protobuf <4.21.3
  • psutil *
  • requests >=2.23.0
  • scipy >=1.4.1
  • seaborn >=0.11.0
  • tensorboard >=2.4.1
  • thop *
  • torch >=1.7.0,
  • torchvision >=0.8.1,
  • tqdm >=4.41.0
src/yolov7/setup.py pypi
src/yolov7/utils/google_app_engine/additional_requirements.txt pypi
  • Flask ==1.0.2
  • gunicorn ==19.9.0
  • pip ==18.1