soar

[ICCV 2023] Official implementation of paper "SOAR: Scene-debiasing Open-set Action Recognition".

https://github.com/yhzhai/soar

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, scholar.google
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.4%) to scientific vocabulary
Last synced: 6 months ago

Repository

[ICCV 2023] Official implementation of paper "SOAR: Scene-debiasing Open-set Action Recognition".

Basic Info
  • Host: GitHub
  • Owner: yhZhai
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 1.06 MB
Statistics
  • Stars: 11
  • Watchers: 1
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Created over 2 years ago · Last pushed about 2 years ago
Metadata Files
Readme License Citation Support

README.md

SOAR: Scene-debiasing Open-set Action Recognition


This repo contains the original PyTorch implementation of our paper:

SOAR: Scene-debiasing Open-set Action Recognition

Yuanhao Zhai, Ziyi Liu, Zhenyu Wu, Yi Wu, Chunluan Zhou, David Doermann, Junsong Yuan, and Gang Hua

University at Buffalo, Wormpex AI Research

ICCV 2023

1. Environment setup

Our project is developed upon MMAction2 v0.24.1; please follow their installation instructions to set up the environment.
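A typical MMAction2 v0.24.1-style setup looks roughly like the following. This is a hedged sketch: the Python version, the environment name, and the exact mmcv-full wheel (which must match your CUDA/PyTorch build) are assumptions, so check the MMAction2 v0.24.1 install docs for the exact commands.

```shell
# Hedged sketch of an MMAction2 v0.24.1-style environment setup.
# Versions below are assumptions; match mmcv-full to your CUDA/PyTorch build.
conda create -n soar python=3.8 -y
conda activate soar
conda install pytorch torchvision -c pytorch
pip install mmcv-full            # pick the wheel matching your CUDA/PyTorch
git clone https://github.com/yhZhai/soar.git
cd soar
pip install -r requirements/build.txt
pip install -v -e .
```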

2. Dataset preparation

Follow these instructions to set up the datasets.

We provide pre-extracted scene features and labels, as well as scene-distance-split subsets, for the three datasets here (coming soon). Please place them in the data folder.
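To illustrate what a scene-distance split might involve, here is a minimal stdlib-only Python sketch: videos are ranked by the distance between each video's scene feature and a reference scene centroid, then cut into equal-sized subsets. The function names and the cosine-distance choice are assumptions for illustration, not the repo's actual code.

```python
# Hypothetical sketch: split videos into subsets by scene distance.
# `make_scene_distance_splits` and the cosine metric are illustrative
# assumptions, not this repository's API.
import math


def cosine_distance(a, b):
    """Cosine distance between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)


def make_scene_distance_splits(features, centroid, n_splits=3):
    """Sort video indices by scene distance and cut into n_splits subsets."""
    order = sorted(range(len(features)),
                   key=lambda i: cosine_distance(features[i], centroid))
    size = math.ceil(len(order) / n_splits)
    return [order[k:k + size] for k in range(0, len(order), size)]
```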

3. Training

On top of the original MMAction2 training and evaluation scripts, we provide a simple script, tools/run.py, that combines training and evaluation.

For training and evaluating the full SOAR model (requires the pre-extracted scene labels):

```shell
python tools/run.py configs/recognition/i3d/i3d_r50_dense_32x2x1_50e_ucf101_rgb_weighted_ae_edl_dis.py --gpus 0,1,2,3
```

For the unsupervised version, which does not require scene labels:

```shell
python tools/run.py configs/recognition/i3d/i3d_r50_dense_32x2x1_50e_ucf101_rgb_ae_edl.py --gpus 0,1,2,3
```
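As a hedged sketch of how a combined train-then-evaluate wrapper like tools/run.py might handle its command line, the snippet below parses a config path and a comma-separated `--gpus` flag into GPU ids. Everything beyond the `--gpus` flag shown in the commands above is an assumption; the actual script's arguments and internals may differ.

```python
# Illustrative sketch of argument handling for a train-then-evaluate
# wrapper; flag names other than --gpus are assumptions, not the
# repository's actual interface.
import argparse


def parse_args(argv):
    parser = argparse.ArgumentParser(
        description="train, then evaluate, a single config")
    parser.add_argument("config",
                        help="path to the MMAction2-style config file")
    parser.add_argument("--gpus", default="0",
                        help="comma-separated GPU ids, e.g. 0,1,2,3")
    args = parser.parse_args(argv)
    # Turn "0,1,2,3" into [0, 1, 2, 3] for downstream launch code.
    args.gpu_ids = [int(g) for g in args.gpus.split(",")]
    return args
```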

4. Evaluation

Coming soon

Citation

If you find our work helpful, please consider citing it.

```bibtex
@inproceedings{zhai2023soar,
  title={SOAR: Scene-debiasing Open-set Action Recognition},
  author={Zhai, Yuanhao and Liu, Ziyi and Wu, Zhenyu and Wu, Yi and Zhou, Chunluan and Doermann, David and Yuan, Junsong and Hua, Gang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={10244--10254},
  year={2023}
}
```

TODO list

  • [ ] Upload pre-extracted scene features and scene labels.
  • [ ] Update scene-bias evaluation code and tutorial.

Acknowledgement

This project is built heavily upon DEAR and MMAction2. We thank Wentao Bao (@Cogito2012) for the valuable discussion.

Owner

  • Name: Yuanhao Zhai
  • Login: yhZhai
  • Kind: user
  • Location: Buffalo, NY
  • Company: State University of New York at Buffalo

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMAction2 Contributors"
title: "OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark"
date-released: 2020-07-21
url: "https://github.com/open-mmlab/mmaction2"
license: Apache-2.0

GitHub Events

Total
  • Watch event: 2
Last Year
  • Watch event: 2

Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/build.txt pypi
  • Pillow *
  • decord >=0.4.1
  • einops *
  • matplotlib *
  • numpy *
  • opencv-contrib-python *
  • scipy *
  • torch >=1.3
requirements/docs.txt pypi
  • docutils ==0.16.0
  • einops *
  • markdown *
  • myst-parser *
  • opencv-python *
  • scipy *
  • sphinx ==4.0.2
  • sphinx_copybutton *
  • sphinx_markdown_tables *
  • sphinx_rtd_theme ==0.5.2
requirements/mminstall.txt pypi
  • mmcv-full >=1.3.1
requirements/optional.txt pypi
  • PyTurboJPEG *
  • av *
  • imgaug *
  • librosa *
  • lmdb *
  • moviepy *
  • onnx *
  • onnxruntime *
  • packaging *
  • pims *
  • timm *
requirements/readthedocs.txt pypi
  • mmcv *
  • titlecase *
  • torch *
  • torchvision *
requirements/tests.txt pypi
  • coverage * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • protobuf <=3.20.1 test
  • pytest * test
  • pytest-runner * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi
tools/data/gym/environment.yml pypi
  • decorator ==4.4.2
  • intel-openmp ==2019.0
  • joblib ==0.15.1
  • mkl ==2019.0
  • numpy ==1.18.4
  • olefile ==0.46
  • pandas ==1.0.3
  • python-dateutil ==2.8.1
  • pytz ==2020.1
  • six ==1.14.0
  • youtube-dl *