Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.7%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: AffectAI
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 49.6 MB
Statistics
  • Stars: 3
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 1 year ago · Last pushed 12 months ago
Metadata Files
Readme License Citation

README.md

 
# **DepMGNN: Matrixial Graph Neural Network for Video-based Automatic Depression Assessment**


🥳 🚀 What's New 🔝

👏👏👏 Congratulations (2024.12.10): Our work *DepMGNN: Matrixial Graph Neural Network for Video-based Automatic Depression Assessment* has been accepted to AAAI-2025 and selected for an oral presentation!

📖 Introduction 🔝

Existing vector-style graph (left) and our matrixial-style graph (right)

Our clip-level spatio-temporal matrixial graph (left) and the matrixial graph after being updated by our MGNN (right)

Pipeline of our DepMGNN
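The figures above hinge on one idea: each graph node carries a matrix-valued feature instead of a vector, and message passing updates those matrices. A minimal, framework-free sketch of that idea (illustrative only; the mean aggregation and single mixing matrix are assumptions, not the paper's exact MGNN formulation):

```python
# Illustrative sketch of matrix-valued message passing (NOT the paper's exact MGNN).
# Each node feature is a d x d matrix; neighbor matrices are averaged together
# with the node's own, then mixed by a (here fixed, normally learned) weight matrix.

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_scale(a, s):
    return [[x * s for x in row] for row in a]

def mat_mul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)] for i in range(n)]

def mgnn_layer(node_feats, adjacency, weight):
    """One round of matrixial message passing.

    node_feats: list of d x d matrices, one per node.
    adjacency:  dict mapping node index -> list of neighbor indices.
    weight:     d x d mixing matrix (learned in a real model; fixed here).
    """
    updated = []
    for i, feat in enumerate(node_feats):
        neighbors = adjacency.get(i, [])
        agg = feat
        for j in neighbors:
            agg = mat_add(agg, node_feats[j])
        agg = mat_scale(agg, 1.0 / (1 + len(neighbors)))  # mean over self + neighbors
        updated.append(mat_mul(agg, weight))              # linear mixing of the matrix feature
    return updated

# Tiny example: 3 nodes on a path graph, 2x2 identity features.
I = [[1.0, 0.0], [0.0, 1.0]]
feats = [I, I, I]
adj = {0: [1], 1: [0, 2], 2: [1]}
out = mgnn_layer(feats, adj, I)
```

With identity features and an identity mixing matrix, one layer leaves each node unchanged, which makes the example easy to sanity-check by hand.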

🛠️ Installation 🔝

MGNN is built on top of mmaction2 and torch-geometric.

Please refer to their official tutorials for detailed installation instructions.

Quick instructions

```shell
conda create -n MGNN python=3.9 -y
conda activate MGNN
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install -U openmim
mim install mmengine
mim install mmcv==2.1.0
pip install torch-geometric==2.4.0
pip install einops
pip install timm
pip install seaborn
git clone https://github.com/AffectAI/MGNN.git
cd MGNN
pip install -v -e .
```

👨‍🏫 Get Started 🔝

Step 1: Preparation

  1. Apply for and download the AVEC2013, AVEC2014, and First Impression datasets from their official websites.
  2. Crop faces from the original videos using face_detect.py

    Quick instructions

    ```shell
    pip install pyfacer
    python face_detect.py
    ```

  3. Place the cropped face frames in the corresponding folder under ./datasets. The corresponding dataset labels have been uploaded to the directory.

  4. Download the ResNet-50 model pretrained on VGGFace2 and put it into ./pretrained_models.
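The exact layout under `./datasets` is not spelled out here, so a small pre-flight check can catch missing folders before a long training run. A hypothetical helper (the dataset directory names below are illustrative assumptions, not the repository's documented layout):

```python
# Hypothetical pre-flight check: verify cropped-face frame folders exist before
# training. Directory names are illustrative assumptions.
from pathlib import Path

def check_dataset_dirs(root, datasets=("AVEC2013", "AVEC2014", "FirstImpression")):
    """Return the list of expected dataset directories missing under root."""
    root = Path(root)
    return [name for name in datasets if not (root / name).is_dir()]

missing = check_dataset_dirs("./datasets")
if missing:
    print("Missing dataset folders:", ", ".join(missing))
```

Running this once after step 3 avoids discovering a misplaced folder only when the data loader fails mid-training.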

Step 2: Training

```shell
# Training on AVEC 2014
bash ./tools/dist_train.sh configs/depression/mgnn_depression_avec2014_res50.py num_gpus --seed 0

# Training on AVEC 2013
bash ./tools/dist_train.sh configs/depression/mgnn_depression_avec2013_res50.py num_gpus --seed 0

# Training on the First Impression dataset
bash ./tools/dist_train.sh configs/depression/mgnn_personality_firstimpression_res50.py num_gpus --seed 0
```

Step 3: Testing

```shell
# Testing on AVEC 2014 Northwind and Freeform
bash ./tools/dist_test.sh configs/depression/mgnn_depression_avec2014_res50_test_fusion.py your/model/path/your_model.pth 1

# Testing on AVEC 2013
bash ./tools/dist_test.sh configs/depression/mgnn_depression_avec2013_res50.py your/model/path/your_model.pth 1

# Testing on the First Impression dataset
bash ./tools/dist_test.sh configs/depression/mgnn_personality_firstimpression_res50.py your/model/path/your_model.pth 1
```
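The Results section below is empty; for reference, the AVEC 2013/2014 depression benchmarks are conventionally reported with MAE and RMSE over predicted BDI-II scores. A generic scoring sketch (this is not the repository's evaluation code):

```python
# Generic MAE / RMSE scoring for regression-style depression assessment.
import math

def mae(preds, labels):
    """Mean absolute error between predicted and ground-truth scores."""
    return sum(abs(p - l) for p, l in zip(preds, labels)) / len(preds)

def rmse(preds, labels):
    """Root mean squared error between predicted and ground-truth scores."""
    return math.sqrt(sum((p - l) ** 2 for p, l in zip(preds, labels)) / len(preds))

# Toy example with three hypothetical predicted / true BDI-II scores.
preds = [10.0, 22.0, 5.0]
labels = [12.0, 20.0, 5.0]
```

RMSE penalizes large per-video errors more heavily than MAE, which is why both are usually reported together on these benchmarks.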

👀 Models 🔝

  1. Pretrained models: vggface2 pretrained resnet-50 model

  2. MGNN AVEC 2014: MGNN (resnet-50)

  3. MGNN AVEC 2013: MGNN (resnet-50)

  4. MGNN First Impression: MGNN (resnet-50)

🙌 Results 🔝

🖊️ Citation 🔝

If you find this project useful in your research, please consider citing it:

```BibTeX
@inproceedings{wu2025depmgnn,
  title={DepMGNN: Matrixial Graph Neural Network for Video-based Automatic Depression Assessment},
  author={Wu, Zijian and Zhou, Leijing and Li, Shuanglin and Fu, Changzeng and Lu, Jun and Han, Jing and Zhang, Yi and Zhao, Zhuang and Song, Siyang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={2},
  pages={1610--1619},
  year={2025}
}
```

Owner

  • Name: Affect AI
  • Login: AffectAI
  • Kind: organization
  • Email: affect_ai@outlook.com

Affective computing

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMAction2 Contributors"
title: "OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark"
date-released: 2020-07-21
url: "https://github.com/open-mmlab/mmaction2"
license: Apache-2.0

GitHub Events

Total
  • Watch event: 9
  • Delete event: 1
  • Member event: 4
  • Push event: 15
  • Create event: 3
Last Year
  • Watch event: 9
  • Delete event: 1
  • Member event: 4
  • Push event: 15
  • Create event: 3

Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/build.txt pypi
  • Pillow *
  • decord >=0.4.1
  • einops *
  • matplotlib *
  • numpy *
  • opencv-contrib-python *
  • scipy *
  • torch >=1.3
requirements/docs.txt pypi
  • docutils ==0.18.1
  • einops *
  • modelindex *
  • myst-parser *
  • opencv-python *
  • scipy *
  • sphinx ==6.1.3
  • sphinx-notfound-page *
  • sphinx-tabs *
  • sphinx_copybutton *
  • sphinx_markdown_tables *
  • sphinxcontrib-jquery *
  • tabulate *
requirements/mminstall.txt pypi
  • mmcv >=2.0.0rc4,<2.2.0
  • mmengine >=0.7.1,<1.0.0
requirements/multimodal.txt pypi
  • transformers >=4.28.0
requirements/optional.txt pypi
  • PyTurboJPEG *
  • av >=9.0
  • future *
  • imgaug *
  • librosa *
  • lmdb *
  • moviepy *
  • openai-clip *
  • packaging *
  • pims *
  • soundfile *
  • tensorboard *
  • wandb *
requirements/readthedocs.txt pypi
  • mmcv *
  • titlecase *
  • torch *
  • torchvision *
requirements/tests.txt pypi
  • coverage * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • parameterized * test
  • pytest * test
  • pytest-runner * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi
tools/data/activitynet/environment.yml conda
  • ca-certificates 2020.1.1.*
  • certifi 2020.4.5.1.*
  • ffmpeg 2.8.6.*
  • libcxx 10.0.0.*
  • libedit 3.1.20181209.*
  • libffi 3.3.*
  • ncurses 6.2.*
  • openssl 1.1.1g.*
  • pip 20.0.2.*
  • python 3.7.7.*
  • readline 8.0.*
  • setuptools 46.4.0.*
  • sqlite 3.31.1.*
  • tk 8.6.8.*
  • wheel 0.34.2.*
  • xz 5.2.5.*
  • zlib 1.2.11.*
tools/data/gym/environment.yml conda
  • ca-certificates 2020.1.1.*
  • certifi 2020.4.5.1.*
  • ffmpeg 2.8.6.*
  • libcxx 10.0.0.*
  • libedit 3.1.20181209.*
  • libffi 3.3.*
  • ncurses 6.2.*
  • openssl 1.1.1g.*
  • pip 20.0.2.*
  • python 3.7.7.*
  • readline 8.0.*
  • setuptools 46.4.0.*
  • sqlite 3.31.1.*
  • tk 8.6.8.*
  • wheel 0.34.2.*
  • xz 5.2.5.*
  • zlib 1.2.11.*
tools/data/hvu/environment.yml conda
  • ca-certificates 2020.1.1.*
  • certifi 2020.4.5.1.*
  • ffmpeg 2.8.6.*
  • libcxx 10.0.0.*
  • libedit 3.1.20181209.*
  • libffi 3.3.*
  • ncurses 6.2.*
  • openssl 1.1.1g.*
  • pip 20.0.2.*
  • python 3.7.7.*
  • readline 8.0.*
  • setuptools 46.4.0.*
  • sqlite 3.31.1.*
  • tk 8.6.8.*
  • wheel 0.34.2.*
  • xz 5.2.5.*
  • zlib 1.2.11.*
tools/data/kinetics/environment.yml conda
  • ca-certificates 2020.1.1.*
  • certifi 2020.4.5.1.*
  • ffmpeg 2.8.6.*
  • libcxx 10.0.0.*
  • libedit 3.1.20181209.*
  • libffi 3.3.*
  • ncurses 6.2.*
  • openssl 1.1.1g.*
  • pip 20.0.2.*
  • python 3.7.7.*
  • readline 8.0.*
  • setuptools 46.4.0.*
  • sqlite 3.31.1.*
  • tk 8.6.8.*
  • wheel 0.34.2.*
  • xz 5.2.5.*
  • zlib 1.2.11.*