3dnbf
Official code base for the ICCV 2023 paper "3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation"
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (9.1%) to scientific vocabulary
Repository
Official code base for the ICCV 2023 paper "3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation"
Basic Info
- Host: GitHub
- Owner: edz-o
- License: apache-2.0
- Language: Python
- Default Branch: main
- Size: 1.8 MB
Statistics
- Stars: 55
- Watchers: 2
- Forks: 5
- Open Issues: 1
- Releases: 0
Metadata Files
README.md
3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation
Introduction
We provide the config files for 3DNBF: 3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation. The project is based on the mmhuman3d codebase. Please also refer to mmhuman3d v0.5.0 if you have any confusion about the code.
```BibTeX
@inproceedings{zhang2023nbf,
  author    = {Zhang, Yi and Ji, Pengliang and Kortylewski, Adam and Wang, Angtian and Mei, Jieru and Yuille, Alan L.},
  title     = {{3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation}},
  booktitle = {The IEEE/CVF International Conference on Computer Vision},
  year      = {2023}
}
```
Installation
Please refer to install.md for installation.
Data Preparation
Fetch Data
Download data and unzip to $ROOT.
This includes pretrained models, preprocessed data, and other necessary files.
Body Model Preparation
- SMPL v1.0 is used in our experiments.
- Neutral model can be downloaded from SMPLify.
- All body models have to be renamed in `SMPL_{GENDER}.pkl` format.
  For example, `mv basicModel_neutral_lbs_10_207_0_v1.0.0.pkl SMPL_NEUTRAL.pkl`
- J_regressor_extra.npy
- J_regressor_h36m.npy
- smpl_mean_params.npz
Download the above resources and arrange them in the following file structure:
```text
mmhuman3d
├── mmhuman3d
├── docs
├── tests
├── tools
├── configs
└── data
    └── body_models
        ├── J_regressor_extra.npy
        ├── J_regressor_h36m.npy
        ├── smpl_mean_params.npz
        └── smpl
            ├── SMPL_FEMALE.pkl
            ├── SMPL_MALE.pkl
            └── SMPL_NEUTRAL.pkl
```
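As a quick sanity check, the layout above can be verified with a short script. This is a hedged sketch: `check_body_models` and the `EXPECTED` list are illustrative helpers, not part of the 3DNBF repo.

```python
# Illustrative sanity check for the body-model file layout (not repo code).
import os

EXPECTED = [
    "data/body_models/J_regressor_extra.npy",
    "data/body_models/J_regressor_h36m.npy",
    "data/body_models/smpl_mean_params.npz",
    "data/body_models/smpl/SMPL_NEUTRAL.pkl",
]

def check_body_models(root):
    """Return the expected files that are missing under `root`."""
    return [p for p in EXPECTED if not os.path.isfile(os.path.join(root, p))]
```

Running `check_body_models('.')` from the project root should return an empty list once everything is in place.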
Data preprocessing
Download the datasets from official websites. See original data preprocessing.
The final data/ folder should have this structure:
```text
mmhuman3d
└── data
    ├── datasets
    │   ├── h36m
    │   ├── lspet
    │   ├── mpii
    │   ├── mpi_inf_3dhp
    │   ├── coco
    │   └── pw3d
    ├── body_models
    ├── dataset_extras
    ├── pretrained
    ├── sample_params
    ├── static_fits
    ├── vposer_v1_0
    └── preprocessed_datasets
        ├── eft_coco_all.npz
        ├── spin_mpi_inf_3dhp_train_new_correct.npz
        ├── eft_lspet.npz
        ├── eft_mpii.npz
        ├── spin_h36m_train_mosh.npz
        ├── ...
        ├── gmm_08.pkl
        ├── vertex_to_part.json
        └── smpl_partSegmentation_mapping.pkl
```
Evaluation
Set the `test_data` in the config, and run the following command:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_test.sh configs/3dnbf/resnet50_pare_w_coke_pw3d_step2.py exp/3dnbf/3dpw_advocc data/pretrained/3dnbf_r50.pth 4 --metrics pa-mpjpe mpjpe pckh
```
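The `--metrics` flag requests PA-MPJPE, MPJPE, and PCKh. As a hedged reference for the first two (generic formulas, not the repo's own implementation): MPJPE is the mean Euclidean per-joint error, and PA-MPJPE is the same error after a similarity (Procrustes) alignment of the prediction to the ground truth.

```python
# Generic MPJPE / PA-MPJPE sketch (not the repo's implementation).
import numpy as np

def mpjpe(pred, gt):
    # pred, gt: (J, 3) arrays of joint positions
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    # Align pred to gt with the optimal similarity transform, then measure MPJPE.
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD of the 3x3 covariance matrix.
    U, s, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # fix a possible reflection
        Vt[-1] *= -1
        s[-1] *= -1
        R = (U @ Vt).T
    scale = s.sum() / (p ** 2).sum()  # optimal isotropic scale
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

By construction, PA-MPJPE is zero whenever the prediction differs from the ground truth only by a rotation, translation, and uniform scale.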
We provide a script to run all experiments:

```shell
bash tools/run_all_tasks.sh
```
Visualization
To visualize the predictions, just set `cfg.data.visualization.pipeline` to `vis_pipeline`.

```shell
CUDA_VISIBLE_DEVICES=0 python tools/visualize_predictions.py --config configs/3dnbf/resnet50_pare_w_coke_pw3d_step2.py --output_file /path/to/result_keypoints.json --outdir /path/to/visualization
```
Evaluation on 3DPW-Adv
First, perform sliding-window testing by using `OccludedHumanImageDataset` to wrap your test data in `orig_cfg`, e.g. `configs/pare/resnet50_pare_pw3d.py`. Set `occ_size` and `occ_stride` to the same values as in `test_pipeline_occ`. Evaluating on this dataset produces, for each image, the info of the sample with the largest error under each metric, written to `result_occ_info_{mpjpe|pa-mpjpe|pckh}.json`. Setting `hparams.DATASET.occ_info_file` to these files reconstructs the adversarially placed occlusion dataset.
```python
original_dataset = dict(
    type=dataset_type,
    body_model=dict(
        type='GenderedSMPL',
        keypoint_src='h36m',
        keypoint_dst='h36m',
        model_path='data/body_models/smpl',
        joints_regressor='data/body_models/J_regressor_h36m.npy'),
    dataset_name='pw3d',
    convention='h36m',
    data_prefix='data',
    pipeline=test_pipeline_occ,
    ann_file='pw3d_test_w_kp2d_ds30_op.npz',
    hparams=dict(
        DATASETS_AND_RATIOS='h36m_mpii_lspet_coco_mpi-inf-3dhp_0.35_0.05_0.05_0.2_0.35',
        FOCAL_LENGTH=5000.0,
        IMG_RES=img_res,
        eval_visible_joints=True))

test = dict(
    type='OccludedHumanImageDataset',
    orig_cfg=original_dataset,
    # here occ_size and occ_stride are only used to calculate n_grid;
    # the actual occ_size and occ_stride are set in test_pipeline
    occ_size=80,
    occ_stride=40,
)
```
```python
test = dict(
    type=dataset_type,
    body_model=dict(
        type='GenderedSMPL',
        keypoint_src='h36m',
        keypoint_dst='h36m',
        model_path='data/body_models/smpl',
        joints_regressor='data/body_models/J_regressor_h36m.npy'),
    dataset_name='pw3d',
    convention='smpl_49',
    data_prefix='data',
    pipeline=test_pipeline_occ,
    ann_file='pw3d_test_w_kp2d_ds30_op.npz',
    hparams=dict(
        DATASETS_AND_RATIOS='h36m_mpii_lspet_coco_mpi-inf-3dhp_0.35_0.05_0.05_0.2_0.35',
        FOCAL_LENGTH=5000.0,
        IMG_RES=img_res,
        eval_visible_joints=True,
        # occ_info_file is the output of `OccludedHumanImageDataset`
        occ_info_file='exp/pare/3dpw_test_ds30_occ80stride40_pare_r50_grid/result_occ_info_mpjpe.json'
    )
),
```
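The config comments note that `occ_size` and `occ_stride` only determine `n_grid`. A hedged sketch of how such a sliding occluder grid could be enumerated; the exact formula inside `OccludedHumanImageDataset` may differ.

```python
# Illustrative sliding-occluder grid (assumed formula, not repo code).
def occluder_positions(img_res, occ_size, occ_stride):
    """Top-left (x, y) corners of an occluder slid over an img_res x img_res crop."""
    n_grid = (img_res - occ_size) // occ_stride + 1
    return [(x * occ_stride, y * occ_stride)
            for y in range(n_grid) for x in range(n_grid)]
```

With a 224-pixel crop and the config values above (`occ_size=80`, `occ_stride=40`), this yields a 4x4 grid of 16 occluder placements per image.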
Training
Training with multiple GPUs
First, train on EFT-COCO for 100 epochs.

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_train.sh configs/3dnbf/resnet50_pare_w_coke_pw3d.py exp/3dnbf 4 --no-validate
```
Then, set `load_from` in `resnet50_pare_w_coke_pw3d_step2.py` to the checkpoint from the first stage and train on all datasets:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_train.sh configs/3dnbf/resnet50_pare_w_coke_pw3d_step2.py exp/3dnbf_stage2 4 --no-validate
```
Demo
Cropped Image Dataset
Place center cropped human images in data/datasets/demo and run tools/create_test_dataset.py to create dataset files which will be stored in data/preprocessed_datasets.
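For the center cropping mentioned above, a hedged sketch of the crop-box arithmetic; the actual preprocessing performed by `tools/create_test_dataset.py` may differ.

```python
# Illustrative center-crop box computation (assumed preprocessing, not repo code).
def center_crop_box(width, height, crop_size):
    """Return a (left, top, right, bottom) box for a centered square crop."""
    left = (width - crop_size) // 2
    top = (height - crop_size) // 2
    return (left, top, left + crop_size, top + crop_size)
```

With Pillow, such a box can be passed directly to `Image.crop`, e.g. `img.crop(center_crop_box(*img.size, 224))`.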
Use the following script to run 3DNBF. In the config file, set `data.test.dataset_name` and `data.test.ann_file` accordingly. Results will be saved to `$WORK_DIR/result_keypoints.json`.
You can run visualization afterwards.
```shell
CUDA_VISIBLE_DEVICES=0 python tools/test.py --config configs/3dnbf/resnet50_pare_w_coke_pw3d_demo.py --work-dir WORK_DIR --checkpoint CHECKPOINT --skip_eval
```
Example run of our demo:
```shell
python tools/create_test_dataset.py
CUDA_VISIBLE_DEVICES=0 python tools/test.py --config configs/3dnbf/resnet50_pare_w_coke_pw3d_demo.py --work-dir output --checkpoint data/pretrained/3dnbf_r50.pth --skip_eval
CUDA_VISIBLE_DEVICES=0 python tools/visualize_predictions.py --config configs/3dnbf/resnet50_pare_w_coke_pw3d_demo.py --output_file output/result_keypoints.json --outdir output/visualization
```
Owner
- Login: edz-o
- Kind: user
- Company: Johns Hopkins University
- Website: edz-o.github.io
- Repositories: 16
- Profile: https://github.com/edz-o
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMHuman3D Contributors"
title: "MMHuman3D: OpenMMLab 3D Human Parametric Model Toolbox and Benchmark"
date-released: 2021-12-01
url: "https://github.com/open-mmlab/mmhuman3d"
license: Apache-2.0
GitHub Events
Total
- Watch event: 4
- Fork event: 1
Last Year
- Watch event: 4
- Fork event: 1
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 7
- Total pull requests: 0
- Average time to close issues: 2 days
- Average time to close pull requests: N/A
- Total issue authors: 3
- Total pull request authors: 0
- Average comments per issue: 2.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 7
- Pull requests: 0
- Average time to close issues: 2 days
- Average time to close pull requests: N/A
- Issue authors: 3
- Pull request authors: 0
- Average comments per issue: 2.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- GloryyrolG (4)
- hulonghua-devin (2)
- Shan924 (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/checkout v2 composite
- codecov/codecov-action v2 composite
- conda-incubator/setup-miniconda v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v1 composite
- nvidia/cuda 11.3.1-cudnn8-devel-ubuntu18.04 build
- docutils ==0.16.0
- myst-parser *
- sphinx ==4.0.2
- sphinx-copybutton *
- sphinx_markdown_tables *
- sphinx_rtd_theme ==0.5.2
- joblib *
- loguru *
- yacs *
- mmcv *
- torch *
- torchvision *
- Pillow *
- UVTextureConverter *
- albumentations *
- av *
- awscli *
- awscli_plugin_endpoint *
- boto3 *
- clean-fid *
- cmake *
- cython *
- diskcache *
- einops *
- face-alignment *
- gdown *
- imageio-ffmpeg *
- imutils *
- ipython *
- lmdb *
- lpips *
- nvidia-ml-py3 *
- opencv-contrib-python *
- pyglet *
- pynvml *
- pyrender *
- pyshtools *
- pytz *
- qimage2ndarray *
- requests *
- rsa *
- smplx *
- tensorboard *
- timm *
- trimesh *
- wandb *
- wget *
- astropy *
- cdflib <0.4.0
- chumpy *
- colormap *
- easydev *
- extension-helpers *
- h5py *
- ipdb *
- loguru *
- matplotlib *
- mmcv ==1.5.0
- numpy *
- opencv-python *
- pandas *
- pickle5 *
- plotly *
- scikit-image *
- scipy *
- smplx *
- tqdm *
- vedo *
- codecov * test
- flake8 * test
- interrogate * test
- isort ==4.3.21 test
- pytest * test
- xdoctest >=0.10.0 test
- yapf * test