Science Score: 67.0%
This score indicates how likely this project is to be science-related, based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 1 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, springer.com
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.2%) to scientific vocabulary
Keywords
3d-vision
image-classification
pose-estimation
Last synced: 6 months ago
Repository
Neural mesh models for 3D reasoning.
Basic Info
- Host: GitHub
- Owner: wufeim
- Language: Python
- Default Branch: master
- Homepage: https://wufeim.github.io/NeMo/
- Size: 28.5 MB
Statistics
- Stars: 9
- Watchers: 4
- Forks: 1
- Open Issues: 4
- Releases: 0
Topics
3d-vision
image-classification
pose-estimation
- Created: about 3 years ago
- Last pushed: over 2 years ago
Metadata Files
- Readme
- Citation
README.rst
====
NeMo
====
.. image:: https://img.shields.io/pypi/v/neural-mesh-model.svg
   :target: https://pypi.python.org/pypi/neural-mesh-model

.. image:: https://readthedocs.org/projects/neural-mesh-model/badge/?version=latest
   :target: https://neural-mesh-model.readthedocs.io/en/latest/?version=latest
   :alt: Documentation Status
This is the repo for a series of works on Neural Mesh Models. In this repo, we implement 3D object pose estimation, 3D object pose estimation via the VoGE renderer, 6D object pose estimation, object classification, and cross-domain training. The original implementation of NeMo is in `Angtian/NeMo <https://github.com/Angtian/NeMo>`_.
Release Note on Sept 10 (by Angtian)
------------------------------------
This release introduces a major refactor, mainly of the feature banks and the "mask remove" functions. In the new implementation, the feature banks support multiple objects with different class labels in each training image.
Note that the multi-class implementation is CUDA-based and requires installing custom CUDA layers. (Running 3D pose estimation alone works without them.) To install:
.. code::

   cd cu_layers
   python setup.py install
After installation, a library named ``CuNeMo`` will be available in your Python environment.
Previous configs remain compatible except for a change in :code:`config/model`:
.. code::

   memory_bank:
     class_name: nemo.models.feature_banks.FeatureBankNeMo
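At runtime, a dotted ``class_name`` like this is typically resolved by importing the module and looking up the class. A minimal sketch of how such an entry could be resolved (the helper ``resolve_class`` is hypothetical, not part of NeMo's API; demonstrated on a stdlib class):

```python
import importlib

def resolve_class(dotted_path):
    """Import 'pkg.module.ClassName' and return the class object."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Demonstrated on a stdlib class; in NeMo the config would point at
# nemo.models.feature_banks.FeatureBankNeMo instead.
bank_cls = resolve_class("collections.OrderedDict")
```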
The previous implementation of classification NeMo has been removed; support for classification NeMo will be added back soon. Contact me directly if you find any bugs or compatibility issues.
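The multi-object feature-bank idea described above can be sketched conceptually. This is not NeMo's implementation (which is CUDA-based); it is a toy momentum-updated bank with one slot per (class, vertex) pair, and all names here are hypothetical:

```python
import numpy as np

class SimpleFeatureBank:
    """Toy momentum-updated feature bank with one slot per (class, vertex)."""

    def __init__(self, n_classes, n_vertices, dim, momentum=0.9):
        self.bank = np.zeros((n_classes, n_vertices, dim))
        self.momentum = momentum

    def update(self, class_ids, vertex_ids, features):
        # One training image may contribute objects of several classes,
        # each sample carrying its own class label.
        for c, v, f in zip(class_ids, vertex_ids, features):
            self.bank[c, v] = self.momentum * self.bank[c, v] + (1.0 - self.momentum) * f
```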
Features
--------
**Easily train and evaluate neural mesh models for multiple tasks:**
* 3D pose estimation
* 6D pose estimation
* 3D-aware image classification
* Amodal segmentation
**Experiment on various benchmark datasets:**
* PASCAL3D+
* Occluded PASCAL3D+
* ObjectNet3D
* OOD-CV
* SyntheticPASCAL3D+
**Reproduce baseline models for fair comparison:**
* Regression-based models (ResNet50, Faster R-CNN, etc.)
* Transformers
* StarMap
Installation
------------
Environment (manual setup)
^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Create a :code:`conda` environment:

.. code::

   conda create -n nemo python=3.9
   conda activate nemo
2. Install :code:`PyTorch` (see `pytorch.org <https://pytorch.org>`_):

.. code::

   conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=10.2 -c pytorch
3. Install :code:`PyTorch3D` (see `github.com/facebookresearch/pytorch3d <https://github.com/facebookresearch/pytorch3d>`_):

.. code::

   conda install -c fvcore -c iopath -c conda-forge fvcore iopath
   conda install -c bottler nvidiacub
   conda install pytorch3d -c pytorch3d
4. Install other dependencies:

.. code::

   conda install numpy matplotlib scipy scikit-image
   conda install pillow
   conda install -c conda-forge timm tqdm pyyaml transformers
   pip install git+https://github.com/NVlabs/nvdiffrast/
   pip install wget gdown BboxTools opencv-python xatlas pycocotools seaborn wandb
5. (Optional) Install :code:`VoGE` (see `github.com/Angtian/VoGE <https://github.com/Angtian/VoGE>`_):

.. code::

   pip install git+https://github.com/Angtian/VoGE.git
Environment (from :code:`environment.yml`)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the previous method fails, set up the environment from a compiled list of packages:

.. code::

   conda env create -f environment.yml
   pip install git+https://github.com/NVlabs/nvdiffrast/
   pip install -e .
Data Preparation
^^^^^^^^^^^^^^^^
See ``data/README``.
Quick Start
-----------
Train and evaluate a neural mesh model (:code:`NeMo`) on PASCAL3D+ for 3D pose estimation:
.. code::

   CUDA_VISIBLE_DEVICES=0,1,2,3 python3 scripts/train.py \
       --cate car \
       --config config/omni_nemo_pose_3d.yaml \
       --save_dir exp/pose_estimation_3d_nemo_car

   CUDA_VISIBLE_DEVICES=0 python3 scripts/inference.py \
       --cate car \
       --config config/omni_nemo_pose_3d.yaml \
       --save_dir exp/pose_estimation_3d_nemo_car \
       --checkpoint exp/pose_estimation_3d_nemo_car/ckpts/model_800.pth
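Checkpoints are written under ``ckpts/`` with step-numbered names such as ``model_800.pth``. A small hypothetical helper (not part of the repo) for picking the latest checkpoint in such a directory:

```python
import os
import re

def latest_checkpoint(ckpt_dir):
    """Return the checkpoint path with the highest step number, or None."""
    pattern = re.compile(r"model_(\d+)\.pth")
    best_step, best_path = -1, None
    for name in os.listdir(ckpt_dir):
        match = pattern.fullmatch(name)
        if match and int(match.group(1)) > best_step:
            best_step = int(match.group(1))
            best_path = os.path.join(ckpt_dir, name)
    return best_path
```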
NeMo with VoGE:
.. code::

   CUDA_VISIBLE_DEVICES=0,1,2,3 python3 scripts/train.py \
       --cate car \
       --config config/omni_voge_pose_3d.yaml \
       --save_dir exp/pose_estimation_3d_voge_car

   CUDA_VISIBLE_DEVICES=0 python3 scripts/inference.py \
       --cate car \
       --config config/omni_voge_pose_3d.yaml \
       --save_dir exp/pose_estimation_3d_voge_car \
       --checkpoint exp/pose_estimation_3d_voge_car/ckpts/model_800.pth
NeMo on PASCAL3D+ without scaling during data pre-processing:
.. code::

   CUDA_VISIBLE_DEVICES=0,1,2,3 python3 scripts/train.py \
       --cate car \
       --config config/omni_nemo_pose_3d_ori.yaml \
       --save_dir exp/pose_estimation_3d_ori_car

   CUDA_VISIBLE_DEVICES=0 python3 scripts/inference.py \
       --cate car \
       --config config/omni_nemo_pose_3d_ori.yaml \
       --save_dir exp/pose_estimation_3d_ori_car \
       --checkpoint exp/pose_estimation_3d_ori_car/ckpts/model_800.pth
Train and evaluate a regression-based model (:code:`ResNet50-General`) on PASCAL3D+ for 3D pose estimation:
.. code::

   CUDA_VISIBLE_DEVICES=0 python3 scripts/train.py \
       --cate all \
       --config config/pose_estimation_3d_resnet50_general.yaml \
       --save_dir exp/pose_estimation_3d_resnet50_general

   CUDA_VISIBLE_DEVICES=0 python3 scripts/inference.py \
       --cate car \
       --config config/pose_estimation_3d_resnet50_general.yaml \
       --save_dir exp/pose_estimation_3d_resnet50_general \
       --checkpoint exp/pose_estimation_3d_resnet50_general/ckpts/model_90.pth
Pre-trained Models
------------------
Pre-trained Models for 3D pose estimation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The pre-trained NeMo model:
https://drive.google.com/file/d/14fByOZs_Zzd-97Ulk2BKJhVNFKAnFWvg/view?usp=sharing
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| 3D pose | plane | bike  | boat  | bottle | bus  | car  | chair | table | mbike | sofa | train | tv    | Mean |
+=========+=======+=======+=======+========+======+======+=======+=======+=======+======+=======+=======+======+
| Pi/6    | 86.9  | 80.3  | 77.4  | 90.0   | 95.3 | 98.9 | 89.1  | 80.2  | 86.6  | 95.8 | 64.4  | 82.0  | 87.4 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Pi/18   | 55.3  | 30.9  | 50.2  | 56.9   | 91.5 | 96.5 | 56.7  | 63.1  | 33.2  | 65.9 | 55.3  | 48.6  | 65.5 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Med     | 8.94  | 15.51 | 9.95  | 8.24   | 2.66 | 2.71 | 8.68  | 6.96  | 13.34 | 7.18 | 7.32  | 10.61 | 7.42 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
The pre-trained NeMo-VoGE model:
https://drive.google.com/file/d/1kogFdjVbOIuSlKx1NQ1c1XEjbvJEQWJg/view?usp=sharing
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| 3D pose | plane | bike  | boat  | bottle | bus  | car  | chair | table | mbike | sofa | train | tv    | Mean |
+=========+=======+=======+=======+========+======+======+=======+=======+=======+======+=======+=======+======+
| Pi/6    | 87.8  | 82.9  | 75.4  | 88.2   | 97.4 | 99.0 | 90.7  | 83.6  | 87.4  | 94.4 | 91.3  | 80.5  | 89.5 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Pi/18   | 62.3  | 36.7  | 51.0  | 55.2   | 94.5 | 96.4 | 54.9  | 69.7  | 39.1  | 65.4 | 83.3  | 54.4  | 69.5 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Med     | 7.57  | 14.02 | 9.7   | 9.1    | 2.38 | 2.89 | 8.96  | 5.7   | 12.3  | 7.77 | 3.84  | 8.80  | 6.82 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
The pre-trained NeMo model trained without scaling:
https://drive.google.com/file/d/1ybVTDx6DvV_H01SUZkKqWQjKu-BfweGJ/view?usp=sharing
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| 3D pose | plane | bike  | boat  | bottle | bus  | car  | chair | table | mbike | sofa | train | tv    | Mean |
+=========+=======+=======+=======+========+======+======+=======+=======+=======+======+=======+=======+======+
| Pi/6    | 83.0  | 75.7  | 68.3  | 84.5   | 96.2 | 98.8 | 85.8  | 80.4  | 78.1  | 94.6 | 79.2  | 85.8  | 86.0 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Pi/18   | 48.0  | 24.7  | 34.0  | 44.3   | 90.0 | 95.4 | 44.6  | 58.5  | 26.6  | 58.8 | 64.0  | 45.6  | 60.2 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Med     | 10.62 | 18.54 | 14.97 | 11.67  | 3.00 | 3.12 | 11.01 | 8.07  | 15.22 | 8.31 | 6.65  | 11.25 | 8.99 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
The pre-trained NeMo-VoGE model trained without scaling:
https://drive.google.com/file/d/10ggpneADVWClXWx42yQeJ_unFt53oQ1I/view?usp=sharing
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| 3D pose | plane | bike  | boat  | bottle | bus  | car  | chair | table | mbike | sofa | train | tv    | Mean |
+=========+=======+=======+=======+========+======+======+=======+=======+=======+======+=======+=======+======+
| Pi/6    | 83.1  | 80.2  | 68.1  | 83.9   | 98.1 | 98.3 | 89.0  | 83.0  | 81.8  | 94.1 | 90.5  | 83.7  | 87.4 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Pi/18   | 51.9  | 29.9  | 36.3  | 44.6   | 94.2 | 93.2 | 50.1  | 65.0  | 32.8  | 61.4 | 76.1  | 46.4  | 62.9 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
| Med     | 9.56  | 16.33 | 14.97 | 11.07  | 2.92 | 3.75 | 9.97  | 6.70  | 14.06 | 8.03 | 5.45  | 10.70 | 8.51 |
+---------+-------+-------+-------+--------+------+------+-------+-------+-------+------+-------+-------+------+
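In the tables above, ``Pi/6`` and ``Pi/18`` report the fraction of test images whose rotation error falls below π/6 (30°) and π/18 (10°), and ``Med`` is the median rotation error, commonly reported in degrees. Assuming the standard geodesic distance between rotation matrices, these metrics can be sketched as:

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic distance between two rotation matrices, in degrees."""
    cos_angle = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def summarize(errors_deg):
    """Accuracy under pi/6 (30 deg) and pi/18 (10 deg), plus median error."""
    errors = np.asarray(errors_deg, dtype=float)
    return {
        "acc_pi/6": float(np.mean(errors < 30.0)),
        "acc_pi/18": float(np.mean(errors < 10.0)),
        "median_deg": float(np.median(errors)),
    }
```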
Documentation
-------------
See the `documentation <https://neural-mesh-model.readthedocs.io/>`_.
Citation
--------
.. code::

   @inproceedings{wang2021nemo,
     title={NeMo: Neural Mesh Models of Contrastive Features for Robust 3D Pose Estimation},
     author={Angtian Wang and Adam Kortylewski and Alan Yuille},
     booktitle={International Conference on Learning Representations},
     year={2021},
     url={https://openreview.net/forum?id=pmj131uIL9H}
   }

   @software{nemo_code_2022,
     title={Neural Mesh Models for 3D Reasoning},
     author={Ma, Wufei and Jesslen, Artur and Wang, Angtian},
     month={12},
     year={2022},
     url={https://github.com/wufeim/NeMo},
     version={1.0.0}
   }
Further Information
-------------------
This repo builds upon several previous works:

* `NeMo: Neural Mesh Models of Contrastive Features for Robust 3D Pose Estimation (ICLR 2021) <https://openreview.net/forum?id=pmj131uIL9H>`_
* Robust Category-Level 6D Pose Estimation with Coarse-to-Fine Rendering of Neural Features (ECCV 2022)
Acknowledgements
----------------
In this project, we borrow code from several other repos:

* :code:`NeMo` by Angtian Wang in `Angtian/NeMo <https://github.com/Angtian/NeMo>`__
* :code:`DMTet` by NVIDIA in `nv-tlabs/GET3D <https://github.com/nv-tlabs/GET3D>`__
* :code:`torch_utils` by NVIDIA in `nv-tlabs/GET3D <https://github.com/nv-tlabs/GET3D>`__
* :code:`uni_rep` by NVIDIA in `nv-tlabs/GET3D <https://github.com/nv-tlabs/GET3D>`__
* :code:`dnnlib` by NVIDIA in `nv-tlabs/GET3D <https://github.com/nv-tlabs/GET3D>`__
Owner
- Name: Wufei Ma
- Login: wufeim
- Kind: user
- Location: Baltimore, MD
- Company: Johns Hopkins University
- Repositories: 4
- Profile: https://github.com/wufeim
Machine learning and computer vision.
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Ma"
    given-names: "Wufei"
title: "Neural Mesh Models for 3D Reasoning"
version: 1.0.0
date-released: 2022-12
url: "https://github.com/wufeim/NeMo"
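The CITATION.cff fields above can be turned into a plain citation string. A sketch (field names follow the CFF 1.2 schema; the ``format_citation`` helper and the output format are my own, not part of any tool):

```python
def format_citation(cff):
    """Build a simple 'Author (Year). Title (vVersion). URL' string from CFF-style fields."""
    author = ", ".join(
        f"{a['family-names']}, {a['given-names']}" for a in cff["authors"]
    )
    year = str(cff["date-released"]).split("-")[0]
    return f"{author} ({year}). {cff['title']} (v{cff['version']}). {cff['url']}"

cite = format_citation({
    "authors": [{"family-names": "Ma", "given-names": "Wufei"}],
    "title": "Neural Mesh Models for 3D Reasoning",
    "version": "1.0.0",
    "date-released": "2022-12",
    "url": "https://github.com/wufeim/NeMo",
})
```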
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1
Packages
- Total packages: 1
- Total downloads: 4 last-month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 1
- Total maintainers: 1
pypi.org: neural-mesh-model
Neural mesh models for 3D reasoning.
- Homepage: https://github.com/wufeim/NeMo
- Documentation: https://neural-mesh-model.readthedocs.io/
- License: MIT license
- Latest release: 1.0.0 (published about 3 years ago)
Rankings
Dependent packages count: 6.6%
Average: 28.4%
Dependent repos count: 30.6%
Downloads: 48.1%
Maintainers (1)
Last synced: 6 months ago
Dependencies
- cu_layers/setup.py (pypi)
- setup.py (pypi)
- environment.yml (conda)
- _libgcc_mutex 0.1
- _openmp_mutex 5.1
- abseil-cpp 20211102.0
- aiohttp 3.8.1
- aiosignal 1.3.1
- arrow-cpp 8.0.0
- async-timeout 4.0.2
- attrs 22.1.0
- aws-c-common 0.4.57
- aws-c-event-stream 0.1.6
- aws-checksums 0.1.9
- aws-sdk-cpp 1.8.185
- blas 1.0
- blosc 1.21.0
- boost-cpp 1.70.0
- bottleneck 1.3.5
- brotli 1.0.9
- brotli-bin 1.0.9
- brotlipy 0.7.0
- brunsli 0.1
- bzip2 1.0.8
- c-ares 1.18.1
- ca-certificates 2022.10.11
- certifi 2022.9.24
- cffi 1.15.1
- cfitsio 3.470
- charls 2.2.0
- charset-normalizer 2.0.4
- click 8.1.3
- cloudpickle 2.0.0
- colorama 0.4.6
- cryptography 38.0.1
- cudatoolkit 10.2.89
- cycler 0.11.0
- cytoolz 0.12.0
- dask-core 2022.7.0
- dataclasses 0.8
- datasets 2.7.1
- dbus 1.13.18
- dill 0.3.6
- expat 2.4.9
- ffmpeg 4.3
- fftw 3.3.9
- filelock 3.8.2
- flit-core 3.6.0
- fontconfig 2.14.1
- fonttools 4.25.0
- freetype 2.12.1
- frozenlist 1.3.3
- fsspec 2022.11.0
- fvcore 0.1.5.post20221122
- gflags 2.2.2
- giflib 5.2.1
- glib 2.69.1
- glog 0.6.0
- gmp 6.2.1
- gnutls 3.6.15
- grpc-cpp 1.46.1
- gst-plugins-base 1.14.0
- gstreamer 1.14.0
- huggingface_hub 0.11.1
- icu 58.2
- idna 3.4
- imagecodecs 2021.8.26
- imageio 2.19.3
- importlib-metadata 5.1.0
- importlib_metadata 5.1.0
- intel-openmp 2021.4.0
- iopath 0.1.9
- joblib 1.2.0
- jpeg 9e
- jxrlib 1.1
- kiwisolver 1.4.2
- krb5 1.19.2
- lame 3.100
- lcms2 2.12
- ld_impl_linux-64 2.38
- lerc 3.0
- libaec 1.0.4
- libbrotlicommon 1.0.9
- libbrotlidec 1.0.9
- libbrotlienc 1.0.9
- libclang 10.0.1
- libcurl 7.86.0
- libdeflate 1.8
- libedit 3.1.20210910
- libev 4.33
- libevent 2.1.12
- libffi 3.4.2
- libgcc-ng 11.2.0
- libgfortran-ng 11.2.0
- libgfortran5 11.2.0
- libgomp 11.2.0
- libiconv 1.16
- libidn2 2.3.2
- libllvm10 10.0.1
- libnghttp2 1.46.0
- libpng 1.6.37
- libpq 12.9
- libprotobuf 3.20.1
- libssh2 1.10.0
- libstdcxx-ng 11.2.0
- libtasn1 4.16.0
- libthrift 0.15.0
- libtiff 4.4.0
- libunistring 0.9.10
- libwebp 1.2.4
- libwebp-base 1.2.4
- libxcb 1.15
- libxkbcommon 1.0.1
- libxml2 2.9.14
- libxslt 1.1.35
- libzopfli 1.0.3
- locket 1.0.0
- lz4-c 1.9.3
- matplotlib 3.5.3
- matplotlib-base 3.5.3
- mkl 2021.4.0
- mkl-service 2.4.0
- mkl_fft 1.3.1
- mkl_random 1.2.2
- multidict 6.0.2
- multiprocess 0.70.12.2
- munkres 1.1.4
- ncurses 6.3
- nettle 3.7.3
- networkx 2.8.4
- nspr 4.33
- nss 3.74
- numexpr 2.8.4
- numpy 1.23.4
- numpy-base 1.23.4
- nvidiacub 1.10.0
- openh264 2.1.1
- openjpeg 2.4.0
- openssl 1.1.1s
- orc 1.7.4
- packaging 21.3
- pandas 1.5.2
- partd 1.2.0
- pcre 8.45
- pillow 9.2.0
- pip 22.2.2
- ply 3.11
- portalocker 2.6.0
- pyarrow 8.0.0
- pycparser 2.21
- pyopenssl 22.0.0
- pyparsing 3.0.9
- pyqt 5.15.7
- pyqt5-sip 12.11.0
- pysocks 1.7.1
- python 3.9.15
- python-dateutil 2.8.2
- python-xxhash 3.0.0
- python_abi 3.9
- pytorch 1.12.0
- pytorch-mutex 1.0
- pytorch3d 0.7.1
- pytz 2022.6
- pywavelets 1.4.1
- pyyaml 6.0
- qt-main 5.15.2
- qt-webengine 5.15.9
- qtwebkit 5.212
- re2 2022.04.01
- readline 8.2
- regex 2022.7.9
- requests 2.28.1
- responses 0.18.0
- sacremoses 0.0.53
- scikit-image 0.19.3
- scipy 1.9.3
- setuptools 65.5.0
- sip 6.6.2
- six 1.16.0
- snappy 1.1.9
- sqlite 3.40.0
- tabulate 0.9.0
- termcolor 2.1.1
- tifffile 2021.7.2
- timm 0.6.12
- tk 8.6.12
- tokenizers 0.11.4
- toml 0.10.2
- toolz 0.12.0
- torchaudio 0.12.0
- torchvision 0.13.0
- tornado 6.2
- tqdm 4.64.1
- transformers 4.24.0
- typing-extensions 4.4.0
- typing_extensions 4.4.0
- tzdata 2022g
- urllib3 1.26.12
- utf8proc 2.6.1
- wheel 0.37.1
- xxhash 0.8.0
- xz 5.2.8
- yacs 0.1.8
- yaml 0.2.5
- yarl 1.7.2
- zfp 0.5.5
- zipp 3.11.0
- zlib 1.2.13
- zstd 1.5.2