xrmocap
OpenXRLab Multi-view Motion Capture Toolbox and Benchmark
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (arxiv.org, ieee.org)
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low: 14.9%)
Keywords
Repository
OpenXRLab Multi-view Motion Capture Toolbox and Benchmark
Basic Info
- Host: GitHub
- Owner: openxrlab
- License: other
- Language: Python
- Default Branch: main
- Homepage: https://xrmocap.readthedocs.io/
- Size: 1.54 MB
Statistics
- Stars: 384
- Watchers: 10
- Forks: 44
- Open Issues: 9
- Releases: 4
Topics
Metadata Files
README.md
Introduction
English | 简体中文
XRMoCap is an open-source, PyTorch-based codebase for multi-view motion capture. It is part of the OpenXRLab project.
If you are interested in single-view motion capture, please refer to mmhuman3d for more details.
https://user-images.githubusercontent.com/26729379/187710195-ba4660ce-c736-4820-8450-104f82e5cc99.mp4
A detailed introduction can be found in introduction.md.
Major Features
- Support popular multi-view motion capture methods for single person and multiple people
XRMoCap reimplements state-of-the-art multi-view motion capture methods, covering both single-person and multi-person settings. It supports any number of calibrated cameras greater than two, and provides effective strategies to select cameras automatically.
- Support keypoint-based and parametric human model-based multi-view motion capture algorithms
XRMoCap supports the two mainstream motion representations, 3D keypoints and the SMPL(-X) parametric model, and provides tools for conversion and optimization between them.
- Integrate optimization-based and learning-based methods into one modular framework
XRMoCap decomposes the framework into several components, based on which optimization-based and learning-based methods are integrated into one framework. Users can easily prototype a customized multi-view mocap pipeline by choosing different components in configs.
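As a rough illustration of the keypoint-based side of such a pipeline, a 3D keypoint can be recovered from two or more calibrated views by linear triangulation (direct linear transform). This is a minimal sketch, not XRMoCap's API; the camera matrices below are synthetic:

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """DLT triangulation of one 3D point from N calibrated views.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (u, v) observations, one per view.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Solution: right singular vector for the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two synthetic cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted along x
X_true = np.array([0.0, 0.0, 5.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X = triangulate_point([P1, P2], [uv1, uv2])
print(np.round(X, 6))  # recovers [0. 0. 5.]
```

With noisy real detections, using more than two views in the same least-squares system reduces the triangulation error, which is why camera selection strategies matter.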
News
- 2022-12-21: XRMoCap v0.7.0 is released. Major updates include:
- Add `mview_mperson_end2end_estimator` for the learning-based method
- Add SMPL-X support and allow smpl_data initiation in `mview_sperson_smpl_estimator`
- Add multiple optimizers, detailed joint weights and priors, and gradient clipping for better SMPLify results
- Add `mediapipe_estimator` for 2D human keypoint perception
- 2022-10-14: XRMoCap v0.6.0 is released. Major updates include:
- Add 4D Association Graph, the first Python implementation to reproduce this algorithm
- Add multi-view multi-person top-down SMPL estimation
- Add reprojection error point selector
- 2022-09-01: XRMoCap v0.5.0 is released. Major updates include:
- Support HuMMan Mocap toolchain for multi-view single person SMPL estimation
- Reproduce MvP, a deep-learning-based SOTA for multi-view multi-human 3D pose estimation
- Reproduce MVPose (single frame) and MVPose (temporal tracking and filtering), two optimization-based methods for multi-view multi-human 3D pose estimation
- Support SMPLify, SMPLifyX, SMPLifyD and SMPLifyXD
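The reprojection-error point selector mentioned in the v0.6.0 notes can be sketched as follows: project a candidate 3D point into every view and keep only the views whose reprojection error falls below a pixel threshold. Function and variable names here are hypothetical, not XRMoCap's API:

```python
import numpy as np

def select_views_by_reprojection(point_3d, proj_mats, points_2d, max_err_px=5.0):
    """Return indices of views whose reprojection error is below max_err_px."""
    X = np.append(point_3d, 1.0)  # homogeneous coordinates
    selected = []
    for idx, (P, uv) in enumerate(zip(proj_mats, points_2d)):
        proj = P @ X
        proj = proj[:2] / proj[2]  # perspective division -> pixel coordinates
        if np.linalg.norm(proj - uv) < max_err_px:
            selected.append(idx)
    return selected

# Toy intrinsics and three views; view 2 has a bad 2D detection.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
P3 = P1.copy()
point = np.array([0.0, 0.0, 5.0])
obs = [np.array([320.0, 240.0]),  # exact
       np.array([160.0, 240.0]),  # exact
       np.array([400.0, 240.0])]  # 80 px off (bad detection)
good = select_views_by_reprojection(point, [P1, P2, P3], obs)
print(good)  # [0, 1]
```

Filtering out views with large reprojection error before re-triangulating is a simple way to make the 3D estimate robust to occlusion and detector failures in individual cameras.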
Benchmark
More details can be found in benchmark.md.
Supported methods:
- [x] [SMPLify](https://smplify.is.tue.mpg.de/) (ECCV'2016)
- [x] [SMPLify-X](https://smpl-x.is.tue.mpg.de/) (CVPR'2019)
- [x] [MVPose (Single frame)](https://zju3dv.github.io/mvpose/) (CVPR'2019)
- [x] [MVPose (Temporal tracking and filtering)](https://zju3dv.github.io/mvpose/) (T-PAMI'2021)
- [x] [Shape-aware 3D Pose Optimization](https://ait.ethz.ch/projects/2021/multi-human-pose/) (ICCV'2019)
- [x] [MvP](https://arxiv.org/pdf/2111.04076.pdf) (NeurIPS'2021)
- [x] [HuMMan MoCap](https://caizhongang.github.io/projects/HuMMan/) (ECCV'2022)
- [x] [4D Association Graph](http://www.liuyebin.com/4dassociation/) (CVPR'2020)

Supported datasets:
- [x] [Campus](https://campar.in.tum.de/Chair/MultiHumanPose) (CVPR'2014)
- [x] [Shelf](https://campar.in.tum.de/Chair/MultiHumanPose) (CVPR'2014)
- [x] [CMU Panoptic](http://domedb.perception.cs.cmu.edu/) (ICCV'2015)
- [x] [4D Association](https://github.com/zhangyux15/multiview_human_dataset) (CVPR'2020)

Getting Started
Please see getting_started.md for the basic usage of XRMoCap.
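SMPLify and its variants in the benchmark list are optimization-based: they fit a parametric body model by minimizing the 2D reprojection error of the model's joints against detections, regularized by priors. The sketch below keeps only that objective structure, optimizing raw 3D joints with a rest-pose prior instead of the actual SMPL parameters; all names, weights, and camera values are made up:

```python
import numpy as np
from scipy.optimize import minimize

def fit_joints(proj_mats, obs_2d, rest_pose, prior_weight=1e-3):
    """Toy SMPLify-style fit: data term (reprojection) + prior term (rest pose)."""
    n_joints = rest_pose.shape[0]

    def objective(x):
        joints = x.reshape(n_joints, 3)
        homo = np.hstack([joints, np.ones((n_joints, 1))])
        data = 0.0
        for P, uv in zip(proj_mats, obs_2d):
            proj = homo @ P.T                  # (J, 3) homogeneous pixels
            proj = proj[:, :2] / proj[:, 2:3]  # perspective division
            data += np.sum((proj - uv) ** 2)
        prior = np.sum((joints - rest_pose) ** 2)  # stand-in for pose priors
        return data + prior_weight * prior

    res = minimize(objective, rest_pose.ravel(), method="L-BFGS-B")
    return res.x.reshape(n_joints, 3)

# Two calibrated views of a single joint at (0, 0, 5), initialized at (0, 0, 4).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
obs = [np.array([[320.0, 240.0]]), np.array([[160.0, 240.0]])]
joints = fit_joints([P1, P2], obs, rest_pose=np.array([[0.0, 0.0, 4.0]]))
print(np.round(joints, 3))  # close to [[0. 0. 5.]]
```

In the real methods the optimization variables are SMPL(-X) pose and shape parameters rather than free joints, and the prior terms (pose, shape, interpenetration) are what keep the fit plausible when some joints are occluded.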
License
The license of our codebase is Apache-2.0. Note that this license applies only to the code in our library; its dependencies are separate and individually licensed. We would like to pay tribute to the open-source implementations on which we rely. Please be aware that using the content of dependencies may affect the license of our codebase. Refer to LICENSE for the full license text.
Citation
If you find this project useful in your research, please consider citing it:
```bibtex
@misc{xrmocap,
    title={OpenXRLab Multi-view Motion Capture Toolbox and Benchmark},
    author={XRMoCap Contributors},
    howpublished = {\url{https://github.com/openxrlab/xrmocap}},
    year={2022}
}
```
Contributing
We appreciate all contributions to improve XRMoCap. Please refer to CONTRIBUTING.md for the contributing guideline.
Acknowledgement
XRMoCap is an open-source project contributed to by researchers and engineers from both academia and industry. We appreciate all the contributors who implement their methods or add new features, as well as the users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new models.
Projects in OpenXRLab
- XRPrimer: OpenXRLab foundational library for XR-related algorithms.
- XRSLAM: OpenXRLab Visual-inertial SLAM Toolbox and Benchmark.
- XRSfM: OpenXRLab Structure-from-Motion Toolbox and Benchmark.
- XRLocalization: OpenXRLab Visual Localization Toolbox and Server.
- XRMoCap: OpenXRLab Multi-view Motion Capture Toolbox and Benchmark.
- XRMoGen: OpenXRLab Human Motion Generation Toolbox and Benchmark.
- XRNeRF: OpenXRLab Neural Radiance Field (NeRF) Toolbox and Benchmark.
- XRFeitoria: OpenXRLab Synthetic Data Rendering Toolbox.
- XRViewer: OpenXRLab Data Visualization Toolbox.
- XRTailor: OpenXRLab GPU Cloth Simulator.
Owner
- Name: OpenXRLab
- Login: openxrlab
- Kind: organization
- Website: https://openxrlab.org.cn/
- Twitter: OpenXRLab
- Repositories: 11
- Profile: https://github.com/openxrlab
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "XRMoCap Contributors"
title: "XRMoCap: OpenXRLab Multi-view Motion Capture Toolbox and Benchmark"
date-released: 2022-09-01
url: "https://github.com/openxrlab/xrmocap"
license: Apache-2.0
```
GitHub Events
Total
- Issues event: 23
- Watch event: 40
- Issue comment event: 1
- Push event: 1
- Pull request review comment event: 1
- Pull request review event: 3
- Pull request event: 4
- Fork event: 4
Last Year
- Issues event: 23
- Watch event: 40
- Issue comment event: 1
- Push event: 1
- Pull request review comment event: 1
- Pull request review event: 3
- Pull request event: 4
- Fork event: 4
Committers
Last synced: almost 3 years ago
All Time
- Total Commits: 81
- Total Committers: 12
- Avg Commits per committer: 6.75
- Development Distribution Score (DDS): 0.667
Top Committers
| Name | Email (masked) | Commits |
|---|---|---|
| LazyBusyYang | g****n@o****m | 27 |
| wqyin | 3****n@u****m | 12 |
| Lei Yang | y****v@g****m | 11 |
| LazyBusyYang | g****3@s****m | 7 |
| jiaqiAA | 6****A@u****m | 7 |
| jiaqiAA | l****0@g****m | 5 |
| Kanghao Chen | 2****9@q****m | 4 |
| wqyin | w****5@g****m | 3 |
| Yamato-01 | 8****1@u****m | 2 |
| tonylu0728 | 4****8@u****m | 1 |
| Kanghao Chen | k****7@g****m | 1 |
| WEI CHEN | 7****b@u****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 86
- Total pull requests: 77
- Average time to close issues: 2 months
- Average time to close pull requests: 6 days
- Total issue authors: 34
- Total pull request authors: 14
- Average comments per issue: 1.37
- Average comments per pull request: 0.94
- Merged pull requests: 60
- Bot issues: 35
- Bot pull requests: 0
Past Year
- Issues: 15
- Pull requests: 3
- Average time to close issues: 3 months
- Average time to close pull requests: 3 days
- Issue authors: 4
- Pull request authors: 2
- Average comments per issue: 0.13
- Average comments per pull request: 0.0
- Merged pull requests: 3
- Bot issues: 12
- Bot pull requests: 0
Top Authors
Issue Authors
- github-actions[bot] (35)
- aichunling0418 (10)
- LazyBusyYang (5)
- Taylorminer (3)
- patrickESM (2)
- Billccx (2)
- 3huo (2)
- jiku100 (1)
- rorrewang (1)
- Dragon2938734 (1)
- Young2647 (1)
- YaoBeiji (1)
- Charrrrrlie (1)
- canghaiyunfan (1)
- yl-1993 (1)
Pull Request Authors
- LazyBusyYang (21)
- wqyin (16)
- jiaqiAA (11)
- yl-1993 (7)
- KHao123 (7)
- Yamato-01 (3)
- WYK96 (3)
- ghost (2)
- Era-Dorta-TU-Delft (2)
- Wei-Chen-hub (1)
- aichunling0418 (1)
- danielrogel (1)
- caizhongang (1)
- tonylu0728 (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 18 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 5
- Total maintainers: 3
pypi.org: xrmocap
- Homepage: https://github.com/openxrlab/xrmocap
- Documentation: https://xrmocap.readthedocs.io/
- License: Apache License 2.0
- Latest release: 0.8.0 (published almost 3 years ago)
Rankings
Maintainers (3)
Dependencies
- filterpy *
- numpy *
- opencv-python-headless *
- pre-commit *
- prettytable *
- scipy *
- smplx *
- tqdm *
- filterpy *
- h5py *
- numpy *
- pre-commit *
- prettytable *
- scipy *
- smplx *
- tqdm *
- coverage * test
- filterpy * test
- numpy * test
- pre-commit * test
- prettytable * test
- pytest * test
- scipy * test
- smplx * test
- tqdm * test
- actions-cool/issues-helper v3 composite
- actions/checkout v2 composite
- codecov/codecov-action v3 composite
- actions-cool/issues-helper v2.2.1 composite
- actions-cool/issues-helper v3 composite
- actions-cool/issues-helper v3 composite
- actions-cool/issues-helper v1.2 composite
- actions-cool/issues-helper v1.7 composite
- actions-cool/issues-helper v2.2.1 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v1 composite
- docutils ==0.16.0
- myst-parser *
- sphinx ==4.0.2
- sphinx-copybutton *
- sphinx_markdown_tables *
- sphinx_rtd_theme ==0.5.2
- mmcv *
- torch *
- torchvision *
- xrprimer *
- nvidia/cuda 11.7.0-cudnn8-devel-ubuntu18.04 build
- $INPUT_TAG latest build
- Flask-Caching *
- flask *
- flask-socketio *
- flask_api *
- flask_cors *
- simple-websocket *