hardvs
[AAAI-2024] HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 10.9%, to scientific vocabulary)
Keywords
Repository
[AAAI-2024] HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors
Basic Info
Statistics
- Stars: 41
- Watchers: 4
- Forks: 4
- Open Issues: 7
- Releases: 0
Topics
Metadata Files
README.md
**HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors**
------
Paper
Wang, Xiao and Wu, Zongzhen and Jiang, Bo and Bao, Zhimin and Zhu, Lin and Li, Guoqi and Wang, Yaowei and Tian, Yonghong. "HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors." arXiv preprint arXiv:2211.09648 (2022). [arXiv] [Demovideo] [Poster]
Abstract
Mainstream human activity recognition (HAR) algorithms are developed for RGB cameras, which suffer from illumination changes, fast motion, privacy concerns, and high energy consumption. Meanwhile, biologically inspired event cameras have attracted great interest due to their unique features, such as high dynamic range, dense temporal but sparse spatial resolution, low latency, and low power. Because the event camera is a newly emerging sensor, there is still no realistic large-scale dataset for HAR. Considering its great practical value, in this paper we propose a large-scale benchmark dataset to bridge this gap, termed HARDVS, which contains 300 categories and more than 100K event sequences. We evaluate and report the performance of multiple popular HAR algorithms, which provide extensive baselines for future works to compare against. More importantly, we propose a novel spatial-temporal feature learning and fusion framework, termed ESTF, for event-stream-based human activity recognition. It first projects the event streams into spatial and temporal embeddings using StemNet, then encodes and fuses the dual-view representations using Transformer networks. Finally, the dual features are concatenated and fed into a classification head for activity prediction. Extensive experiments on multiple datasets fully validate the effectiveness of our model.
News
- :fire: [2023.12.09] Our paper has been accepted by AAAI-2024!!!
- :fire: [2023.05.29] The class labels (i.e., category names) are available at [HARDVS300class.txt]
- :fire: [2022.12.14] HARDVS dataset is integrated into the SNN toolkit [SpikingJelly]
Demo Videos
A demo video for the HARDVS dataset can be found by clicking the image below:
A video tutorial for this work can be found by clicking the image below:
Representative samples of HARDVS can be found below:
Dataset Download

Download from Baidu Disk:
- [Event Images] Link: https://pan.baidu.com/s/1OhlhOBHY91W2SwE6oWjDwA?pwd=1234 (extraction code: 1234)
- [Compact Event file] Link: https://pan.baidu.com/s/1iw214Aj5ugN-arhuxjmfOw?pwd=1234 (extraction code: 1234)
- [RGB Event Images] Link: https://pan.baidu.com/s/1w-z86PH7mGY0CqVBj_MpNA?pwd=1234 (extraction code: 1234)
- [Raw Event file] To be updated

Download from DropBox:
To be updated ...
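Once the event-image archive is extracted, the sequences could be indexed for training with a short script like the minimal sketch below. The directory layout assumed here (`HARDVS/<class_name>/<sequence_name>/*.png`) and the frame extension are illustrative assumptions, not the dataset's documented structure; adjust them to the actual unpacked folders.

```python
import os
from glob import glob

# Hypothetical layout: HARDVS/<class_name>/<sequence_name>/*.png (assumption for illustration).
def index_hardvs(root):
    """Return a list of (frame_paths, label_id) pairs and the sorted class-name list."""
    classes = sorted(d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d)))
    samples = []
    for label_id, cls in enumerate(classes):
        for seq_dir in sorted(glob(os.path.join(root, cls, "*"))):
            frames = sorted(glob(os.path.join(seq_dir, "*.png")))
            if frames:
                samples.append((frames, label_id))
    return samples, classes

if __name__ == "__main__":
    samples, classes = index_hardvs("HARDVS")  # path after extracting the Baidu Disk archive
    print(len(classes), "classes,", len(samples), "event sequences")
```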
Environment
```
conda create -n event python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate event
pip3 install openmim
mim install mmcv-full
mim install mmdet  # optional
mim install mmpose  # optional
pip3 install -e .
```
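After installation, a quick sanity check (an optional sketch, not part of the original instructions) can confirm that PyTorch, CUDA, and mmcv are visible inside the `event` environment:

```python
# Optional sanity check for the environment set up above.
import torch
import mmcv

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv:", mmcv.__version__)
```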
Details of each package:

Our Proposed Approach
An overview of our proposed ESTF framework for event-based human action recognition: it transforms the event streams into spatial and temporal tokens and learns the dual features using multi-head self-attention layers. A FusionFormer is further proposed to realize message passing between the spatial and temporal features. The aggregated features are added to the dual features and used as the input for the subsequent TF and SF blocks, respectively. The outputs are concatenated and fed into MLP layers for action prediction.
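The following is a minimal, self-contained PyTorch sketch of this dual-branch design, written only from the description above; the module names (`spatial_stem`, `fusion_former`, etc.), token shapes, and layer counts are illustrative assumptions and do not reproduce the released StemNet/FusionFormer implementation.

```python
import torch
import torch.nn as nn

class ESTFSketch(nn.Module):
    """Illustrative dual-branch spatial/temporal Transformer with fusion (not the official ESTF code)."""
    def __init__(self, dim=256, num_heads=4, num_classes=300):
        super().__init__()
        # Stand-ins for StemNet: project event-stream inputs to spatial and temporal token embeddings.
        self.spatial_stem = nn.Linear(dim, dim)
        self.temporal_stem = nn.Linear(dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.spatial_former = nn.TransformerEncoder(layer, num_layers=2)   # SF blocks
        self.temporal_former = nn.TransformerEncoder(layer, num_layers=2)  # TF blocks
        # FusionFormer stand-in: self-attention over the concatenated token sequence for message passing.
        self.fusion_former = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Sequential(nn.LayerNorm(2 * dim), nn.Linear(2 * dim, num_classes))

    def forward(self, spatial_tokens, temporal_tokens):
        s = self.spatial_former(self.spatial_stem(spatial_tokens))
        t = self.temporal_former(self.temporal_stem(temporal_tokens))
        fused = self.fusion_former(torch.cat([s, t], dim=1))           # message passing between views
        fs, ft = fused.split([s.size(1), t.size(1)], dim=1)
        s, t = s + fs, t + ft                                          # aggregated features added back to dual features
        feat = torch.cat([s.mean(dim=1), t.mean(dim=1)], dim=-1)       # pooled dual features, concatenated
        return self.head(feat)

# Example: 16 spatial tokens and 8 temporal tokens per clip (shapes are assumptions).
logits = ESTFSketch()(torch.randn(2, 16, 256), torch.randn(2, 8, 256))
print(logits.shape)  # torch.Size([2, 300])
```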
Train & Test & Evaluation
```
# train
CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/recognition/hardvs_ESTF/hardvs_ESTF.py --work-dir path_to_checkpoint --validate --seed 0 --deterministic --gpu-ids=0

# test
CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/recognition/hardvs_ESTF/hardvs_ESTF.py path_to_checkpoint --eval top_k_accuracy
```
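For single-sample inference after training, the mmaction2-style high-level API that this codebase builds on could be used along the lines of the sketch below; the availability of `mmaction.apis` in this repository, the exact function signatures, and the file paths shown are assumptions, so check them against the installed version.

```python
# Hedged sketch: assumes this repo exposes the mmaction2-style inference API.
from mmaction.apis import init_recognizer, inference_recognizer

config = 'configs/recognition/hardvs_ESTF/hardvs_ESTF.py'   # config path as reconstructed above
checkpoint = 'path_to_checkpoint/latest.pth'                # hypothetical checkpoint file name
model = init_recognizer(config, checkpoint, device='cuda:0')
results = inference_recognizer(model, 'path/to/one_event_sample')  # input format depends on the config
print(results)
```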
Citation
If you find this work useful for your research, please cite the following paper and give us a :star2:.
```bibtex
@article{wang2022hardvs,
  title={HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors},
  author={Wang, Xiao and Wu, Zongzhen and Jiang, Bo and Bao, Zhimin and Zhu, Lin and Li, Guoqi and Wang, Yaowei and Tian, Yonghong},
  journal={arXiv preprint arXiv:2211.09648},
  url={https://arxiv.org/abs/2211.09648},
  year={2022}
}
```
Acknowledgement and Other Useful Materials
- MMAction2: https://github.com/open-mmlab/mmaction2
- SpikingJelly: https://github.com/fangwei123456/spikingjelly
Owner
- Name: Event-AHU
- Login: Event-AHU
- Kind: organization
- Email: xiaowang@ahu.edu.cn
- Location: China
- Website: https://wangxiao5791509.github.io/
- Repositories: 23
- Profile: https://github.com/Event-AHU
Research on CV, with a focus on Event-based Vision. Led by @wangxiao5791509 (https://github.com/wangxiao5791509).
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMAction2 Contributors"
title: "OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark"
date-released: 2020-07-21
url: "https://github.com/open-mmlab/mmaction2"
license: Apache-2.0
GitHub Events
Total
- Issues event: 5
- Watch event: 13
- Delete event: 1
- Issue comment event: 3
- Push event: 46
- Fork event: 2
- Create event: 2
Last Year
- Issues event: 5
- Watch event: 13
- Delete event: 1
- Issue comment event: 3
- Push event: 46
- Fork event: 2
- Create event: 2
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 4
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 4
- Total pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 4
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 4
- Pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- Asaasx (2)
- junkangfang (1)
- NielsRogge (1)
- ssp789 (1)
- Tianbo-Pan (1)
- weidel-p (1)
- potentialming (1)
- wanzengy (1)
- IceIce1ce (1)
- betacatZ (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- Pillow *
- decord >=0.4.1
- einops *
- matplotlib *
- numpy *
- opencv-contrib-python *
- scipy *
- torch >=1.3
- docutils ==0.16.0
- einops *
- markdown *
- myst-parser *
- opencv-python *
- scipy *
- sphinx ==4.0.2
- sphinx_copybutton *
- sphinx_markdown_tables *
- sphinx_rtd_theme ==0.5.2
- mmcv-full >=1.3.1
- PyTurboJPEG *
- av *
- imgaug *
- librosa *
- lmdb *
- moviepy *
- onnx *
- onnxruntime *
- packaging *
- pims *
- timm *
- mmcv *
- titlecase *
- torch *
- torchvision *
- coverage * test
- flake8 * test
- interrogate * test
- isort ==4.3.21 test
- protobuf <=3.20.1 test
- pytest * test
- pytest-runner * test
- xdoctest >=0.10.0 test
- yapf * test