-parkinson_detection_2sagcnatt
2sagcnatt
https://github.com/hamed-aghapanah/-parkinson_detection_2sagcnatt
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity of 10.0% to scientific vocabulary)
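As a purely illustrative sketch of how such an indicator-based score might be computed (the report's actual weights and formula are not documented, and the equal weighting below is a hypothetical assumption, so this will not reproduce the 36.0% exactly):

```python
# Hypothetical indicator-based science score; the weights and formula are
# assumptions, not the report's actual method.
indicators = {
    "citation_cff": False,         # ○ CITATION.cff file
    "codemeta_json": True,         # ✓ codemeta.json file
    "zenodo_json": True,           # ✓ .zenodo.json file
    "doi_references": False,       # ○ DOI references
    "publication_links": True,     # ✓ links to arxiv.org
    "academic_email": False,       # ○ academic email domains
    "institutional_owner": False,  # ○ institutional organization owner
    "joss_metadata": False,        # ○ JOSS paper metadata
}
vocab_similarity = 0.10            # 10.0% similarity to scientific vocabulary

# Equal weighting over all checks, with the similarity term scored directly.
score = (sum(indicators.values()) + vocab_similarity) / (len(indicators) + 1)
print(f"Science score: {score:.1%}")  # ~34.4% under these assumed weights
```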
Keywords
Repository
2sagcnatt
Basic Info
Statistics
- Stars: 2
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
Introduction
MMAction2 is an open-source toolbox for video understanding based on PyTorch. It is a part of the HamedAghapanah project.
The master branch works with PyTorch 1.5+.

Action Recognition Results on Kinetics-400

Skeleton-based Action Recognition Results on NTU-RGB+D-120

Skeleton-based Spatio-Temporal Action Detection and Action Recognition Results on Kinetics-400

Spatio-Temporal Action Detection Results on AVA-2.1
Major Features
Modular design: We decompose a video understanding framework into different components. One can easily construct a customized video understanding framework by combining different modules (a config sketch after this list illustrates this).
Support four major video understanding tasks: MMAction2 implements various algorithms for multiple video understanding tasks, including action recognition, action localization, spatio-temporal action detection, and skeleton-based action recognition. We support 27 different algorithms and 20 different datasets for the four major tasks.
Well tested and documented: We provide detailed documentation and API reference, as well as unit tests.
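To make the modular design concrete, here is a minimal config sketch in the style of the upstream TSN configs, where the backbone and classification head are independently swappable registered components; the specific values are illustrative rather than copied verbatim from any shipped config:

```python
# Illustrative MMAction2-style model config: a recognizer assembled from
# interchangeable, registered modules.
model = dict(
    type='Recognizer2D',                       # TSN-style 2D recognizer
    backbone=dict(
        type='ResNet',                         # swap for any registered backbone
        pretrained='torchvision://resnet50',
        depth=50),
    cls_head=dict(
        type='TSNHead',                        # swap for a different head
        num_classes=400,                       # e.g. Kinetics-400
        in_channels=2048),
    train_cfg=None,
    test_cfg=dict(average_clips=None))
```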
Installation
MMAction2 depends on PyTorch, MMCV, MMDetection (optional), and MMPose (optional). Below are quick steps for installation. Please refer to install.md for more detailed instructions.
```shell
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate open-mmlab
pip3 install openmim
mim install mmcv-full
mim install mmdet  # optional
mim install mmpose  # optional
git clone https://github.com/open-mmlab/mmaction2.git
cd mmaction2
pip3 install -e .
```
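A quick sanity check that the editable install succeeded is to import the packages and print their versions (`mmaction.__version__` and `mmcv.__version__` are the standard version attributes):

```python
# Minimal post-install sanity check for MMAction2 and its MMCV dependency.
import mmcv
import mmaction

print('mmcv:', mmcv.__version__)
print('mmaction2:', mmaction.__version__)
```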
Get Started
Please see getting_started.md for the basic usage of MMAction2. There are also tutorials:
- learn about configs
- finetuning models
- adding new dataset
- designing data pipeline
- adding new modules
- exporting model to onnx
- customizing runtime settings
A Colab tutorial is also provided. You may preview the notebook here or directly run on Colab.
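For a flavour of the basic usage covered there, single-video inference with the 0.x-series high-level API looks roughly like the sketch below; the config and checkpoint paths are placeholders to be replaced with real files from the model zoo, and depending on the version the returned labels may be class indices rather than names:

```python
# Sketch of single-video action recognition with MMAction2's high-level API.
from mmaction.apis import init_recognizer, inference_recognizer

config_file = 'configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py'  # placeholder path
checkpoint_file = 'checkpoints/tsn_r50_kinetics400.pth'                        # placeholder path

model = init_recognizer(config_file, checkpoint_file, device='cuda:0')
results = inference_recognizer(model, 'demo/demo.mp4')

# Each result is a (label, score) pair; in some versions the label is a
# class index to be mapped through a label-map file.
for label, score in results:
    print(label, score)
```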
Supported Datasets
| Action Recognition | | | |
| --- | --- | --- | --- |
| HMDB51 (Homepage) (ICCV'2011) | UCF101 (Homepage) (CRCV-IR-12-01) | ActivityNet (Homepage) (CVPR'2015) | Kinetics-[400/600/700] (Homepage) (CVPR'2017) |
| SthV1 (Homepage) (ICCV'2017) | SthV2 (Homepage) (ICCV'2017) | Diving48 (Homepage) (ECCV'2018) | Jester (Homepage) (ICCV'2019) |
| Moments in Time (Homepage) (TPAMI'2019) | Multi-Moments in Time (Homepage) (ArXiv'2019) | HVU (Homepage) (ECCV'2020) | OmniSource (Homepage) (ECCV'2020) |
| FineGYM (Homepage) (CVPR'2020) | | | |

| Action Localization | | | |
| --- | --- | --- | --- |
| THUMOS14 (Homepage) (THUMOS Challenge 2014) | ActivityNet (Homepage) (CVPR'2015) | | |

| Spatio-Temporal Action Detection | | | |
| --- | --- | --- | --- |
| UCF101-24* (Homepage) (CRCV-IR-12-01) | JHMDB* (Homepage) (ICCV'2015) | AVA (Homepage) (CVPR'2018) | |

| Skeleton-based Action Recognition | | | |
| --- | --- | --- | --- |
| PoseC3D-FineGYM (Homepage) (ArXiv'2021) | PoseC3D-NTURGB+D (Homepage) (ArXiv'2021) | PoseC3D-UCF101 (Homepage) (ArXiv'2021) | PoseC3D-HMDB51 (Homepage) (ArXiv'2021) |
Datasets marked with * are not fully supported yet, but related dataset preparation steps are provided. A summary can be found on the Supported Datasets page.
Benchmark
To demonstrate the efficacy and efficiency of our framework, we compare MMAction2 with some other popular frameworks and official releases in terms of speed. Details can be found in benchmark.
Data Preparation
Please refer to data_preparation.md for general guidance on data preparation. The supported datasets are listed in supported_datasets.md.
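As a point of reference, the video datasets described in data_preparation.md consume plain-text annotation lists with one `relative/video/path label` pair per line (rawframe datasets additionally record a frame count). A minimal sketch with invented file names and labels:

```python
# Write a hypothetical VideoDataset-style annotation list:
# one "relative/video/path label" pair per line.
samples = [
    ('some/path/video_0001.mp4', 0),   # invented paths and class labels
    ('some/path/video_0002.mp4', 3),
]

with open('train_list_videos.txt', 'w') as f:  # hypothetical file name
    for path, label in samples:
        f.write(f'{path} {label}\n')
```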
FAQ
Please refer to FAQ for frequently asked questions.
Projects built on MMAction2
Currently, there are many research works and projects built on MMAction2 by users from the community, such as:
- Video Swin Transformer. [paper][github]
- Evidential Deep Learning for Open Set Action Recognition, ICCV 2021 Oral. [paper][github]
- Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective, ICCV 2021 Oral. [paper][github]
See projects.md for the full list of related projects.
Contributing
We appreciate all contributions to improve MMAction2. Please refer to CONTRIBUTING.md in MMCV for more details about the contributing guideline.
Acknowledgement
MMAction2 is an open-source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new models.
Citation
If you find this project useful in your research, please consider citing it:
```BibTeX
@misc{paper_sss,
}
```
License
This project is released under the Apache 2.0 license.
Owner
- Name: Dr_Hamed
- Login: Hamed-Aghapanah
- Kind: user
- Location: Iran
- Company: Isfahan University of Medical Sciences, Iran
- Website: https://www.hamedaghapanah.com/
- Twitter: hamedaghapanah
- Repositories: 2
- Profile: https://github.com/Hamed-Aghapanah
PhD in bioelectrics
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1