https://github.com/chris10m/eventego3d_plus_plus

EventEgo3D++: 3D Human Motion Capture from a Head-Mounted Event Camera [IJCV]


Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file: found
  • .zenodo.json file
  • DOI references: found 2 DOI reference(s) in README
  • Academic publication links: links to arxiv.org, springer.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity: low similarity (8.6%) to scientific vocabulary

Keywords

3d-pose-estimation augmented-reality egocentric-pose-estimation egocentric-vision event-camera human-pose-estimation real-time-pose-estimation
Last synced: 6 months ago

Repository

EventEgo3D++: 3D Human Motion Capture from a Head-Mounted Event Camera [IJCV]

Basic Info
Statistics
  • Stars: 6
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
3d-pose-estimation augmented-reality egocentric-pose-estimation egocentric-vision event-camera human-pose-estimation real-time-pose-estimation
Created 12 months ago · Last pushed 8 months ago
Metadata Files
Readme License

README.md

EventEgo3D++: 3D Human Motion Capture from a Head-Mounted Event Camera [IJCV]

Christen Millerdurai¹,², Hiroyasu Akada¹, Jian Wang¹, Diogo Luvizon¹, Alain Pagani², Didier Stricker², Christian Theobalt¹, Vladislav Golyanik¹

¹ Max Planck Institute for Informatics, SIC · ² DFKI Augmented Vision

Official PyTorch implementation

Project page | arXiv | IJCV

EventEgo3D

Abstract

Monocular egocentric 3D human motion capture remains a significant challenge, particularly under conditions of low lighting and fast movements, which are common in head-mounted device applications. Existing methods that rely on RGB cameras often fail under these conditions. To address these limitations, we introduce EventEgo3D++, the first approach that leverages a monocular event camera with a fisheye lens for 3D human motion capture. Event cameras excel in high-speed scenarios and varying illumination due to their high temporal resolution, providing reliable cues for accurate 3D human motion capture. EventEgo3D++ leverages the LNES representation of event streams to enable precise 3D reconstructions. We have also developed a mobile head-mounted device (HMD) prototype equipped with an event camera, capturing a comprehensive dataset that includes real event observations from both controlled studio environments and in-the-wild settings, in addition to a synthetic dataset. Additionally, to provide a more holistic dataset, we include allocentric RGB streams that offer different perspectives of the HMD wearer, along with their corresponding SMPL body model. Our experiments demonstrate that EventEgo3D++ achieves superior 3D accuracy and robustness compared to existing solutions, even in challenging conditions. Moreover, our method supports real-time 3D pose updates at a rate of 140 Hz. This work is an extension of the EventEgo3D approach (CVPR 2024) and further advances the state of the art in egocentric 3D human motion capture.
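The abstract refers to the LNES representation of event streams. As a hedged illustration (based on the description in the EventEgo3D paper, not on code from this repository; the function name and event layout are assumptions), LNES accumulates an event window into a two-channel image in which each pixel/polarity cell stores the normalized timestamp of its most recent event:

```python
import numpy as np

def events_to_lnes(events, t_start, t_end, height, width):
    """Accumulate one event window into an LNES frame.

    events: array of (x, y, t, p) rows, polarity p in {0, 1}.
    Each pixel/polarity cell keeps the timestamp of the most recent
    event, normalized to [0, 1] over the window [t_start, t_end].
    """
    lnes = np.zeros((2, height, width), dtype=np.float32)
    # Process events in time order so the latest event wins.
    order = np.argsort(events[:, 2])
    for x, y, t, p in events[order]:
        lnes[int(p), int(y), int(x)] = (t - t_start) / (t_end - t_start)
    return lnes
```

Because newer events overwrite older ones, recent motion dominates the frame while older activity at untouched pixels persists with smaller values.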

Advantages of Event Based Vision

| High Speed Motion | Low Light Performance |
|:-----------------:|:---------------------:|
| (demo video)      | (demo video)          |

Method

EventEgo3D

Usage


Installation

Clone the repository:

```bash
git clone https://github.com/Chris10M/EventEgo3D_plus_plus.git
cd EventEgo3D_plus_plus
```

Dependencies

Create a conda environment from the file:

```bash
conda env create -f EventEgo3D.yml
```

Next, install ocam_python using pip:

```bash
pip3 install git+https://github.com/Chris10M/ocam_python.git
```

Pretrained Models

The pretrained models for EE3D-S, EE3D-R and EE3D-W can be downloaded from

Please place the models in the following folder structure.

```
EventEgo3D_plus_plus
└── saved_models
    ├── EE3D-S_pretrained_weights.pth
    ├── EE3D-R_finetuned_weights.pth
    └── EE3D-W_finetuned_weights.pth
```

Datasets

The datasets can be obtained by executing the scripts in dataset_scripts. For detailed information, refer here.

Training

For training, ensure EE3D-S, EE3D-R, EE3D-W and EE3D[BG-AUG] are present. The batch size and checkpoint path can be specified with the environment variables BATCH_SIZE and CHECKPOINT_PATH.

```bash
python train.py
```
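A sketch of how train.py might consume these environment variables (an assumption for illustration; the actual lookup logic and defaults in the repository may differ):

```python
import os

def training_config():
    """Read BATCH_SIZE and CHECKPOINT_PATH from the environment.

    The fallback values here are purely illustrative assumptions,
    not the defaults used by the repository's train.py.
    """
    return {
        "batch_size": int(os.environ.get("BATCH_SIZE", "8")),
        "checkpoint_path": os.environ.get("CHECKPOINT_PATH", "saved_models/"),
    }
```

With this pattern, `BATCH_SIZE=16 CHECKPOINT_PATH=/tmp/ckpt python train.py` would override both values for a single run without editing any files.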

Evaluation

EE3D-S

For evaluation, ensure EE3D-S Test is present, then run:

```bash
python evaluate_ee3d_s.py
```

The provided pretrained checkpoint achieves the following accuracy (MPJPE and PA-MPJPE, in mm):

| Arch | Head MPJPE | Neck MPJPE | Right shoulder MPJPE | Right elbow MPJPE | Right wrist MPJPE | Left shoulder MPJPE | Left elbow MPJPE | Left wrist MPJPE | Right hip MPJPE | Right knee MPJPE | Right ankle MPJPE | Right foot MPJPE | Left hip MPJPE | Left knee MPJPE | Left ankle MPJPE | Left foot MPJPE | MPJPE | Head PA-MPJPE | Neck PA-MPJPE | Right shoulder PA-MPJPE | Right elbow PA-MPJPE | Right wrist PA-MPJPE | Left shoulder PA-MPJPE | Left elbow PA-MPJPE | Left wrist PA-MPJPE | Right hip PA-MPJPE | Right knee PA-MPJPE | Right ankle PA-MPJPE | Right foot PA-MPJPE | Left hip PA-MPJPE | Left knee PA-MPJPE | Left ankle PA-MPJPE | Left foot PA-MPJPE | PA-MPJPE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EgoHPE | 18.794 | 20.629 | 34.370 | 62.688 | 87.136 | 36.535 | 73.797 | 107.610 | 73.904 | 116.881 | 176.932 | 191.418 | 73.927 | 120.475 | 186.601 | 197.100 | 98.675 | 35.090 | 32.134 | 35.672 | 61.661 | 84.088 | 36.707 | 59.447 | 90.251 | 52.273 | 75.313 | 97.924 | 109.323 | 51.162 | 77.778 | 98.785 | 104.684 | 68.893 |
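The columns report MPJPE and PA-MPJPE per joint plus the overall mean. As a reference, a minimal NumPy sketch of these two standard metrics (illustrative, not code from this repository):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance.

    pred, gt: arrays of shape (J, 3), one 3D position per joint.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: rigidly align pred to gt
    (rotation, uniform scale, translation) before measuring error.
    """
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Kabsch/Umeyama: optimal rotation from the SVD of the covariance.
    u, sv, vt = np.linalg.svd(p.T @ g)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = vt.T @ D @ u.T
    scale = (sv * np.diag(D)).sum() / (p ** 2).sum()
    return mpjpe(scale * p @ R.T + mu_g, gt)
```

PA-MPJPE discards global pose and scale, so it isolates errors in the articulated body shape, which is why it is consistently lower than MPJPE in the tables.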

EE3D-R

For evaluation, ensure EE3D-R is present, then run:

```bash
python evaluate_ee3d_r.py
```

The provided pretrained checkpoint achieves the following accuracy (MPJPE and PA-MPJPE, in mm):

| Arch | walk MPJPE | crouch MPJPE | pushup MPJPE | boxing MPJPE | kick MPJPE | dance MPJPE | inter. with env MPJPE | crawl MPJPE | sports MPJPE | jump MPJPE | MPJPE | walk PA-MPJPE | crouch PA-MPJPE | pushup PA-MPJPE | boxing PA-MPJPE | kick PA-MPJPE | dance PA-MPJPE | inter. with env PA-MPJPE | crawl PA-MPJPE | sports PA-MPJPE | jump PA-MPJPE | PA-MPJPE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EgoHPE | 68.673 | 157.415 | 88.633 | 123.567 | 102.313 | 84.955 | 95.733 | 109.378 | 94.898 | 95.935 | 102.150 | 50.060 | 100.759 | 66.288 | 94.516 | 84.264 | 66.906 | 68.201 | 75.726 | 72.233 | 75.831 | 75.479 |

EE3D-W

For evaluation, ensure EE3D-W is present, then run:

```bash
python evaluate_ee3d_w.py
```

The provided pretrained checkpoint achieves the following accuracy (MPJPE and PA-MPJPE, in mm):

| Arch | walk MPJPE | crouch MPJPE | pushup MPJPE | boxing MPJPE | kick MPJPE | dance MPJPE | inter. with env MPJPE | crawl MPJPE | sports MPJPE | jump MPJPE | MPJPE | walk PA-MPJPE | crouch PA-MPJPE | pushup PA-MPJPE | boxing PA-MPJPE | kick PA-MPJPE | dance PA-MPJPE | inter. with env PA-MPJPE | crawl PA-MPJPE | sports PA-MPJPE | jump PA-MPJPE | PA-MPJPE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EgoHPE | 164.634 | 160.878 | 171.486 | 145.806 | 172.317 | 163.608 | 164.298 | 151.324 | 193.632 | 173.872 | 166.185 | 93.441 | 96.686 | 105.231 | 69.619 | 89.755 | 97.718 | 90.325 | 85.122 | 104.570 | 98.185 | 93.065 |

Citation

If you find this code useful for your research, please cite our paper:

```bibtex
@article{eventegoplusplus,
  author={Millerdurai, Christen and Akada, Hiroyasu and Wang, Jian and Luvizon, Diogo and Pagani, Alain and Stricker, Didier and Theobalt, Christian and Golyanik, Vladislav},
  title={EventEgo3D++: 3D Human Motion Capture from A Head-Mounted Event Camera},
  journal={International Journal of Computer Vision (IJCV)},
  year={2025},
  month={Jun},
  day={11},
  issn={1573-1405},
  doi={10.1007/s11263-025-02489-1},
}
```

License

EventEgo3D++ is released under the CC-BY-NC 4.0 license, which also applies to the pre-trained models.

Acknowledgements

The code is partially adapted from here.

Owner

  • Name: Christen Millerdurai
  • Login: Chris10M
  • Kind: user

PhD & Researcher @ AV DFKI-Kaiserslautern.

GitHub Events

Total
  • Watch event: 6
  • Push event: 1
  • Public event: 1
Last Year
  • Watch event: 6
  • Push event: 1
  • Public event: 1