mexca

Multimodal Emotion eXpression Capture Amsterdam. Pipeline for capturing emotion expressions from multiple modalities (video, audio, text) in the wild.

https://github.com/mexca/mexca

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.1%) to scientific vocabulary

Keywords

computer-vision docker emotion-recognition python pytorch sentiment-analysis speech-processing speech-to-text
Last synced: 6 months ago

Repository

Multimodal Emotion eXpression Capture Amsterdam. Pipeline for capturing emotion expressions from multiple modalities (video, audio, text) in the wild.

Basic Info
Statistics
  • Stars: 37
  • Watchers: 2
  • Forks: 6
  • Open Issues: 4
  • Releases: 13
Topics
computer-vision docker emotion-recognition python pytorch sentiment-analysis speech-processing speech-to-text
Created over 3 years ago · Last pushed 11 months ago
Metadata Files
Readme · Changelog · Contributing · License · Code of conduct · Citation

README.dev.md

mexca developer documentation

If you're looking for user documentation, go here.

Development install

```shell
# Create a virtual environment, e.g. with
python3 -m venv env

# activate virtual environment
source env/bin/activate

# make sure to have a recent version of pip and setuptools
python3 -m pip install --upgrade pip setuptools

# (from the project root directory)
# install mexca as an editable package
python3 -m pip install --no-cache-dir --editable .

# install development dependencies
python3 -m pip install --no-cache-dir --editable .[dev]
```

Afterwards check that the install directory is present in the PATH environment variable.
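A minimal way to check this, assuming the virtual environment is still activated (so that VIRTUAL_ENV is set), is:

```shell
# the virtual environment's bin directory should appear in PATH
echo "$PATH" | tr ':' '\n' | grep -F "$VIRTUAL_ENV/bin"
```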

Running the tests

There are two ways to run tests.

The first way requires an activated virtual environment with the development tools installed:

```shell
pytest -v
```

The second way is to use tox, which can be installed separately (e.g. with pip install tox) and does not need to live inside the virtual environment you use for installing mexca; tox builds the necessary virtual environments itself when you simply run:

```shell
tox
```

Testing with tox keeps the testing environment separate from your development environment. A development environment typically accumulates (old) packages over time that can interfere with testing; testing with tox avoids this problem.
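If you only want to run part of the test matrix, tox can also run selected environments; the environment name below is an example, check tox.ini for the names this project actually defines:

```shell
# run a single tox environment (name is an example; see tox.ini for the actual list)
tox -e py39
```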

Test coverage

In addition to just running the tests to see if they pass, they can be used for coverage statistics, i.e. to determine how much of the package's code is actually executed during tests. In an activated virtual environment with the development tools installed, inside the package directory, run:

```shell
coverage run
```

This runs tests and stores the result in a .coverage file. To see the results on the command line, run

```shell
coverage report
```

coverage can also generate output in HTML and other formats; see coverage help for more information.
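For example, coverage.py writes an HTML report to htmlcov/ by default:

```shell
# generate an HTML coverage report; open htmlcov/index.html in a browser to view it
coverage html
```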

Running linters locally

For linting we use prospector, and to sort imports we use isort. Running the linters requires an activated virtual environment with the development tools installed.

```shell
# linter
prospector

# recursively check import style for the mexca module only
isort --recursive --check-only mexca

# recursively check import style for the mexca module only and show
# any proposed changes as a diff
isort --recursive --check-only --diff mexca

# recursively fix import style for the mexca module only
isort --recursive mexca
```

To improve the readability of your code style, you can use yapf.
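A sketch of how that could look, assuming yapf is available in your development environment:

```shell
# show the formatting changes yapf would make, without modifying any files
yapf --diff --recursive mexca

# apply the formatting changes in place
yapf --in-place --recursive mexca
```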

You can enable automatic linting with prospector and isort on commit by enabling the git hook from .githooks/pre-commit, like so:

```shell
git config --local core.hooksPath .githooks
```

Generating the API docs

```shell
cd docs
make html
```

The documentation will be in docs/_build/html

If you do not have make, use:

```shell
sphinx-build -b html docs docs/_build/html
```

To find undocumented Python objects, run:

```shell
cd docs
make coverage
cat _build/coverage/python.txt
```

To test snippets in the documentation, run:

```shell
cd docs
make doctest
```

Versioning

Bumping the version across all files is done with bumpversion, e.g.

```shell
bumpversion major
bumpversion minor
bumpversion patch
```

Making a release

This section describes how to make a release in 3 parts:

  1. preparation
  2. making a release on PyPI
  3. making a release on GitHub

(1/3) Preparation

  1. Update the changelog (don't forget to update the links at the bottom of the page)
  2. Verify that the information in CITATION.cff is correct, and that .zenodo.json contains equivalent data (see the validation sketch after this list)
  3. Make sure the version has been updated.
  4. Run the unit tests with pytest -v
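For step 2, one way to at least validate CITATION.cff is cffconvert, the same tool used by the cffconvert workflow; this assumes it is installed locally (e.g. pip install cffconvert):

```shell
# validate CITATION.cff in the repository root
cffconvert --validate
```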

(2/3) PyPI

In a new terminal, without an activated virtual environment or an env directory:

```shell
# prepare a new directory
cd $(mktemp -d mexca.XXXXXX)

# fresh git clone ensures the release has the state of origin/main branch
git clone https://github.com/mexca/mexca .

# prepare a clean virtual environment and activate it
python3 -m venv env
source env/bin/activate

# make sure to have a recent version of pip and setuptools
python3 -m pip install --upgrade pip setuptools

# install runtime dependencies and publishing dependencies
python3 -m pip install --no-cache-dir .
python3 -m pip install --no-cache-dir .[publishing]

# clean up any previously generated artefacts
rm -rf mexca.egg-info
rm -rf dist

# create the source distribution and the wheel
python3 setup.py sdist bdist_wheel

# upload to test pypi instance (requires credentials)
twine upload --repository-url https://test.pypi.org/legacy/ dist/*
```

Visit https://test.pypi.org/project/mexca and verify that your package was uploaded successfully. Keep the terminal open; we'll need it later.

In a new terminal, without an activated virtual environment or an env directory:

```shell
cd $(mktemp -d mexca-test.XXXXXX)

# prepare a clean virtual environment and activate it
python3 -m venv env
source env/bin/activate

# make sure to have a recent version of pip and setuptools
pip install --upgrade pip setuptools

# install from test pypi instance:
python3 -m pip -v install --no-cache-dir \
  --index-url https://test.pypi.org/simple/ \
  --extra-index-url https://pypi.org/simple mexca
```

Check that the package works as it should when installed from the test PyPI instance.
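A minimal smoke test could look like the following; the __version__ attribute is an assumption here, so adjust the check to whichever entry point you want to exercise:

```shell
# import the freshly installed package and print its version
# (assumes mexca exposes a __version__ attribute)
python3 -c "import mexca; print(mexca.__version__)"
```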

Then upload to pypi.org with:

```shell
# Back to the first terminal,
# FINAL STEP: upload to PyPI (requires credentials)
twine upload dist/*
```

(3/3) GitHub

Don't forget to also make a release on GitHub. If your repository uses the GitHub-Zenodo integration, this will also trigger Zenodo to make a snapshot of your repository and assign a DOI to it.
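If you prefer the command line over the web interface, the GitHub CLI can create the release; the tag, title, and notes below are examples, use the version you just published:

```shell
# create a GitHub release for an existing tag (tag name and notes are examples)
gh release create v1.0.4 --title "v1.0.4" --notes "See the changelog for details."
```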

Owner

  • Name: Multimodal Emotion Expression Capture Amsterdam (MEXCA)
  • Login: mexca
  • Kind: organization
  • Email: m.luken@esciencecenter.nl
  • Location: Netherlands

Citation (CITATION.cff)

# YAML 1.2
---
cff-version: "1.0.4"
title: "mexca: Capture emotion expressions from multiple modalities in videos"
authors:
  - family-names: Lüken
    given-names: Malte
    orcid: "https://orcid.org/0000-0001-7095-203X"
  - family-names: Viviani
    given-names: Eva
    orcid: "https://orcid.org/0000-0002-1330-0585"
  - family-names: Moodley
    given-names: Kody
    orcid: "https://orcid.org/0000-0001-5666-1658"
  - family-names: Pipal
    given-names: Christian
    orcid: "https://orcid.org/0000-0002-5395-2035"
  - family-names: Schumacher
    given-names: Gijs
    orcid: "https://orcid.org/0000-0002-6503-4514"
date-released: 2024-05-01
doi: 10.5281/zenodo.6976414
version: "1.0.4"
repository-code: "https://github.com/mexca/mexca"
keywords:
  - emotion
  - multimodal
  - expression
message: "If you use this software, please cite it using these metadata."
license: Apache-2.0

GitHub Events

Total
  • Watch event: 5
  • Delete event: 1
  • Issue comment event: 2
  • Push event: 12
  • Pull request event: 4
  • Create event: 3
Last Year
  • Watch event: 5
  • Delete event: 1
  • Issue comment event: 2
  • Push event: 12
  • Pull request event: 4
  • Create event: 3

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 426
  • Total Committers: 5
  • Avg Commits per committer: 85.2
  • Development Distribution Score (DDS): 0.164
Top Committers
  • maltelueken (m****n@a****e): 356 commits
  • Eva Viviani (e****i@E****l): 59 commits
  • Eva Viviani (e****i@m****m): 9 commits
  • NLeSC Python template (n****e@u****m): 1 commit
  • Dafne van Kuppevelt (d****t@e****l): 1 commit

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 20
  • Total pull requests: 111
  • Average time to close issues: 3 months
  • Average time to close pull requests: 5 days
  • Total issue authors: 6
  • Total pull request authors: 5
  • Average comments per issue: 0.95
  • Average comments per pull request: 1.27
  • Merged pull requests: 99
  • Bot issues: 5
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 6
  • Average time to close issues: N/A
  • Average time to close pull requests: 31 minutes
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.5
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • maltelueken (9)
  • github-actions[bot] (5)
  • n400peanuts (2)
  • jiqicn (1)
  • f-hafner (1)
  • cwmeijer (1)
Pull Request Authors
  • maltelueken (108)
  • n400peanuts (10)
  • kodymoodley (3)
  • f-hafner (2)
  • dafnevk (2)
Top Labels
Issue Labels
  • enhancement (5)
  • bug (3)
  • documentation (2)
Pull Request Labels
  • documentation (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi: 81 last month
  • Total docker downloads: 355
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 11
  • Total maintainers: 1
pypi.org: mexca

Emotion expression capture from multiple modalities.

  • Versions: 11
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 81 Last month
  • Docker Downloads: 355
Rankings
Dependent packages count: 6.6%
Forks count: 17.3%
Average: 20.1%
Stargazers count: 20.5%
Downloads: 25.3%
Dependent repos count: 30.6%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/build.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
.github/workflows/cffconvert.yml actions
  • actions/checkout v3 composite
  • citation-file-format/cffconvert-github-action main composite
.github/workflows/docker.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • docker/build-push-action v3 composite
  • docker/setup-buildx-action v2 composite
  • tj-actions/branch-names v6 composite
.github/workflows/documentation.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/markdown-link-check.yml actions
  • actions/checkout v3 composite
  • gaurav-nelson/github-action-markdown-link-check v1 composite
.github/workflows/release.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/setup-python v3 composite
  • actions/upload-artifact v3 composite
  • pypa/gh-action-pypi-publish v1.4.2 composite
.github/workflows/sonarcloud.yml actions
  • SonarSource/sonarcloud-github-action master composite
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
docker/audio-transcriber/Dockerfile docker
  • python 3.9-slim build
docker/face-extractor/Dockerfile docker
  • python 3.9-slim build
docker/sentiment-extractor/Dockerfile docker
  • python 3.9-slim build
docker/speaker-identifier/Dockerfile docker
  • python 3.9-slim build
docker/voice-extractor/Dockerfile docker
  • python 3.9-slim build
requirements.txt pypi
  • docker ==6.0.1
  • facenet-pytorch ==2.5.2
  • intervaltree ==3.1.0
  • moviepy >=1.0.3
  • numpy ==1.21.6
  • openai-whisper *
  • pandas ==1.3
  • praat-parselmouth ==0.4.1
  • protobuf ==3.20
  • py-feat ==0.5.0
  • pyannote.audio ==2.1.1
  • pyannote.core >=4.4,<5.0
  • scipy ==1.7.3
  • sentencepiece *
  • spectralcluster ==0.2.5
  • srt ==3.5.2
  • stable-ts *
  • torch ==1.12.0
  • torchvision ==0.13.0
  • tqdm >=4.64.0
  • transformers ==4.19.2